March 6
Events
Pre-1600
12 BCE – The Roman emperor Augustus is named Pontifex Maximus, incorporating the position into that of the emperor.
632 – The Farewell Sermon (Khutbah, Khutbatul Wada') of the Islamic prophet Muhammad.
845 – The 42 Martyrs of Amorium are killed after refusing to convert to Islam.
961 – Byzantine forces under Nikephoros Phokas complete the conquest of Chandax, ending the Emirate of Crete.
1204 – The Siege of Château Gaillard ends in a French victory over King John of England, who loses control of Normandy to King Philip II Augustus.
1323 – Treaty of Paris of 1323 is signed.
1454 – Thirteen Years' War: Delegates of the Prussian Confederation pledge allegiance to King Casimir IV of Poland who agrees to commit his forces in aiding the Confederation's struggle for independence from the Teutonic Knights.
1521 – Ferdinand Magellan arrives at Guam.
1601–1900
1665 – The first joint Secretary of the Royal Society, Henry Oldenburg, publishes the first issue of Philosophical Transactions of the Royal Society, the world's longest-running scientific journal.
1788 – The First Fleet arrives at Norfolk Island in order to found a convict settlement.
1820 – The Missouri Compromise is signed into law by President James Monroe. The compromise allows Missouri to enter the Union as a slave state, brings Maine into the Union as a free state, and prohibits slavery in the rest of the Louisiana Purchase territory north of the 36°30′ parallel.
1834 – York, Upper Canada, is incorporated as Toronto.
1836 – Texas Revolution: Battle of the Alamo: After a thirteen-day siege by an army of 3,000 Mexican troops, the 187 Texan volunteers defending the Alamo, including frontiersman Davy Crockett and Colonel Jim Bowie, are killed and the fort is captured.
1857 – The Supreme Court of the United States rules 7–2 in the Dred Scott v. Sandford case that the Constitution does not confer citizenship on black people.
1869 – Dmitri Mendeleev presents the first periodic table to the Russian Chemical Society.
1882 – The Serbian kingdom is re-founded.
1899 – Bayer registers "Aspirin" as a trademark.
1901–present
1901 – An anarchist assassin tries to kill German Emperor Wilhelm II.
1912 – Italo-Turkish War: Italian forces become the first to use airships in war, as two dirigibles drop bombs on Turkish troops encamped at Janzur, from an altitude of 6,000 feet.
1930 – International Unemployment Day demonstrations, initiated by the Comintern, take place globally.
1933 – Great Depression: President Franklin D. Roosevelt declares a "bank holiday", closing all U.S. banks and freezing all financial transactions.
1943 – Norman Rockwell publishes Freedom from Want in The Saturday Evening Post with a matching essay by Carlos Bulosan as part of the Four Freedoms series.
1943 – World War II: Generalfeldmarschall Erwin Rommel launches the Battle of Medenine in an attempt to slow down the British Eighth Army. It fails, and he leaves Africa three days later.
1943 – World War II: The Battle of Fardykambos, one of the first major battles between the Greek Resistance and the occupying Royal Italian Army, ends with the surrender of an entire Italian battalion, the bulk of the garrison of the town of Grevena, leading to its liberation a fortnight later.
1944 – World War II: Soviet Air Forces bomb the evacuated town of Narva in German-occupied Estonia, destroying the entire historical Swedish-era town.
1945 – World War II: Cologne is captured by American troops. On the same day, Operation Spring Awakening, the last major German offensive of the war, begins.
1946 – Ho Chi Minh signs an agreement with France which recognizes Vietnam as an autonomous state in the Indochinese Federation and the French Union.
1951 – Cold War: The trial of Ethel and Julius Rosenberg begins.
1953 – Georgy Malenkov succeeds Joseph Stalin as Premier of the Soviet Union and First Secretary of the Communist Party of the Soviet Union.
1957 – Ghana becomes the first Sub-Saharan country to gain independence from the British.
1964 – Nation of Islam leader Elijah Muhammad officially gives boxing champion Cassius Clay the name Muhammad Ali.
1964 – Constantine II becomes the last King of Greece.
1965 – Premier Tom Playford of South Australia loses power after 27 years in office.
1967 – Cold War: Joseph Stalin's daughter Svetlana Alliluyeva defects to the United States.
1968 – Three rebels are executed by Rhodesia, the first executions since UDI, prompting international condemnation.
1970 – An explosion at a Weather Underground safe house in Greenwich Village kills three.
1975 – For the first time the Zapruder film of the assassination of John F. Kennedy is shown in motion to a national TV audience by Robert J. Groden and Dick Gregory.
1975 – Algiers Accord: Iran and Iraq announce a settlement of their border dispute.
1984 – In the United Kingdom, a walkout at Cortonwood Colliery in Brampton Bierlow signals the start of a strike that would last almost a year and involve the majority of the country's miners.
1987 – The British ferry MS Herald of Free Enterprise capsizes outside the Belgian port of Zeebrugge in about 90 seconds, killing 193.
1988 – Three Provisional Irish Republican Army volunteers are shot dead by the SAS in Gibraltar in Operation Flavius.
1992 – The Michelangelo computer virus begins to affect computers.
2003 – Air Algérie Flight 6289 crashes at the Aguenar – Hadj Bey Akhamok Airport in Tamanrasset, Algeria, killing 102 out of the 103 people on board.
2008 – A suicide bomber kills 68 people (including first responders) in Baghdad on the same day that a gunman kills eight students in Jerusalem.
2018 – Forbes names Jeff Bezos the world's richest person for the first time, with a net worth of $112 billion.
Births
Pre-1600
1340 – John of Gaunt (probable; d. 1399)
1405 – John II of Castile (d. 1454)
1459 – Jakob Fugger, German merchant and banker (d. 1525)
1475 – Michelangelo, Italian painter and sculptor (d. 1564)
1483 – Francesco Guicciardini, Italian historian and politician (d. 1540)
1493 – Juan Luis Vives, Spanish scholar and humanist (d. 1540)
1495 – Luigi Alamanni, Italian poet and diplomat (d. 1556)
1536 – Santi di Tito, Italian painter (d. 1603)
1601–1900
1619 – Cyrano de Bergerac, French author and playwright (d. 1655)
1663 – Francis Atterbury, English bishop and poet (d. 1732)
1706 – George Pocock, English admiral (d. 1792)
1716 – Pehr Kalm, Swedish-Finnish botanist and explorer (d. 1779)
1724 – Henry Laurens, English-American merchant and politician, 5th President of the Continental Congress (d. 1792)
1761 – Antoine-François Andréossy, French general and diplomat (d. 1828)
1779 – Antoine-Henri Jomini, Swiss-French general (d. 1869)
1780 – Lucy Barnes, American writer (d. 1809)
1785 – Karol Kurpiński, Polish composer and conductor (d. 1857)
1787 – Joseph von Fraunhofer, German physicist and astronomer (d. 1826)
1806 – Elizabeth Barrett Browning, English-Italian poet and translator (d. 1861)
1812 – Aaron Lufkin Dennison, American businessman, co-founded the Waltham Watch Company (d. 1895)
1817 – Princess Clémentine of Orléans (d. 1907)
1818 – William Claflin, American businessman and politician, 27th Governor of Massachusetts (d. 1905)
1823 – Charles I of Württemberg (d. 1891)
1826 – Annie Feray Mutrie, British painter (d. 1893)
1831 – Philip Sheridan, Irish-American general (d. 1888)
1834 – George du Maurier, French-English author and illustrator (d. 1896)
1841 – Viktor Burenin, Russian author, poet, playwright, and critic (d. 1926)
1849 – Georg Luger, Austrian gun designer, designed the Luger pistol (d. 1923)
1864 – Richard Rushall, British businessman (d. 1953)
1870 – Oscar Straus, Viennese composer and conductor (d. 1954)
1871 – Afonso Costa, Portuguese lawyer and politician, 59th Prime Minister of Portugal (d. 1937)
1872 – Ben Harney, American pianist and composer (d. 1938)
1877 – Rose Fyleman, English writer and poet (d. 1957)
1879 – Jimmy Hunter, New Zealand rugby player (d. 1962)
1882 – F. Burrall Hoffman, American architect, co-designed Villa Vizcaya (d. 1980)
1882 – Guy Kibbee, American actor and singer (d. 1956)
1884 – Molla Mallory, Norwegian-American tennis player (d. 1959)
1885 – Ring Lardner, American journalist and author (d. 1933)
1892 – Bert Smith, English international footballer (d. 1969)
1893 – Furry Lewis, American singer-songwriter and guitarist (d. 1981)
1893 – Ella P. Stewart, pioneering Black American pharmacist (d. 1987)
1895 – Albert Tessier, Canadian priest and historian (d. 1976)
1898 – Gus Sonnenberg, American football player and wrestler (d. 1944)
1900 – Gina Cigna, French-Italian soprano and actress (d. 2001)
1900 – Lefty Grove, American baseball player (d. 1975)
1900 – Henri Jeanson, French journalist and author (d. 1970)
1901–present
1903 – Empress Kōjun of Japan (d. 2000)
1904 – José Antonio Aguirre, Spanish lawyer and politician, 1st President of the Basque Country (d. 1960)
1905 – Bob Wills, American Western swing musician, songwriter, and bandleader (d. 1975)
1906 – Lou Costello, American actor and comedian (d. 1959)
1909 – Obafemi Awolowo, Nigerian lawyer and politician (d. 1987)
1909 – Stanisław Jerzy Lec, Polish poet and author (d. 1966)
1910 – Emma Bailey, American auctioneer and author (d. 1999)
1912 – Mohammed Burhanuddin, Indian spiritual leader, 52nd Da'i al-Mutlaq (d. 2014)
1913 – Ella Logan, Scottish-American singer and actress (d. 1969)
1917 – Donald Davidson, American philosopher and academic (d. 2003)
1917 – Will Eisner, American illustrator and publisher (d. 2005)
1917 – Frankie Howerd, English comedian (d. 1992)
1918 – Howard McGhee, American trumpeter (d. 1987)
1920 – Lewis Gilbert, English director, producer, and screenwriter (d. 2018)
1921 – Leo Bretholz, Austrian-American Holocaust survivor and author (d. 2014)
1923 – Ed McMahon, American comedian, game show host, and announcer (d. 2009)
1923 – Wes Montgomery, American guitarist and songwriter (d. 1968)
1924 – Ottmar Walter, German footballer (d. 2013)
1924 – William H. Webster, American lawyer and jurist, 14th Director of Central Intelligence
1926 – Ann Curtis, American swimmer (d. 2012)
1926 – Alan Greenspan, American economist and politician
1926 – Ray O'Connor, Australian politician, 22nd Premier of Western Australia (d. 2013)
1926 – Andrzej Wajda, Polish director, producer, and screenwriter (d. 2016)
1927 – William J. Bell, American screenwriter and producer (d. 2005)
1927 – Gordon Cooper, American engineer, pilot, and astronaut (d. 2004)
1927 – Gabriel García Márquez, Colombian journalist and author, Nobel Prize laureate (d. 2014)
1929 – Tom Foley, American lawyer and politician, 57th Speaker of the United States House of Representatives (d. 2013)
1929 – David Sheppard, English cricketer and bishop (d. 2005)
1930 – Lorin Maazel, French-American violinist, composer, and conductor (d. 2014)
1932 – Marc Bazin, Haitian lawyer and politician, 49th President of Haiti (d. 2010)
1932 – Bronisław Geremek, Polish historian and politician, Polish Minister of Foreign Affairs (d. 2008)
1933 – Ted Abernathy, American baseball player (d. 2004)
1933 – William Davis, German-English journalist and economist (d. 2019)
1933 – Augusto Odone, Italian economist and inventor of Lorenzo's oil (d. 2013)
1934 – Red Simpson, American singer-songwriter (d. 2016)
1935 – Ron Delany, Irish runner and coach
1935 – Derek Kevan, English footballer (d. 2013)
1936 – Bob Akin, American race car driver and journalist (d. 2002)
1936 – Marion Barry, American lawyer and politician, 2nd Mayor of the District of Columbia (d. 2014)
1936 – Choummaly Sayasone, Laotian politician, 5th President of Laos
1937 – Ivan Boesky, American businessman
1937 – Valentina Tereshkova, Russian general, pilot, and astronaut
1938 – Keishu Tanaka, Japanese politician, 17th Japanese Minister of Justice
1939 – Kit Bond, American lawyer and politician, 47th Governor of Missouri
1939 – Adam Osborne, Thai-Indian engineer and businessman, founded the Osborne Computer Corporation (d. 2003)
1940 – Ken Danby, Canadian painter (d. 2007)
1940 – Joanna Miles, French-born American actress
1940 – R. H. Sikes, American golfer
1940 – Willie Stargell, American baseball player and coach (d. 2001)
1940 – Jeff Wooller, English accountant and banker
1941 – Peter Brötzmann, German saxophonist and clarinet player
1941 – Marilyn Strathern, Welsh anthropologist and academic
1942 – Ben Murphy, American actor
1944 – Richard Corliss, American journalist and critic (d. 2015)
1944 – Kiri Te Kanawa, New Zealand soprano and actress
1944 – Mary Wilson, American singer (d. 2021)
1945 – Angelo Castro Jr., Filipino actor and journalist (d. 2012)
1946 – David Gilmour, English singer-songwriter and guitarist
1946 – Richard Noble, Scottish race car driver and businessman
1947 – Kiki Dee, English singer-songwriter
1947 – Dick Fosbury, American high jumper
1947 – Anna Maria Horsford, American actress
1947 – Rob Reiner, American actor, director, producer, and activist
1947 – Jean Seaton, English historian and academic
1947 – John Stossel, American journalist and author
1948 – Stephen Schwartz, American composer and producer
1949 – Shaukat Aziz, Pakistani economist and politician, 15th Prime Minister of Pakistan
1949 – Martin Buchan, Scottish footballer and manager
1950 – Arthur Roche, English archbishop
1951 – Gerrie Knetemann, Dutch cyclist (d. 2004)
1952 – Denis Napthine, Australian politician, 47th Premier of Victoria
1953 – Madhav Kumar Nepal, Nepali banker and politician, 34th Prime Minister of Nepal
1953 – Carolyn Porco, American astronomer and academic
1953 – Phil Alvin, American singer-songwriter and guitarist
1954 – Jeff Greenwald, American author, photographer, and monologist
1954 – Harald Schumacher, German footballer and manager
1955 – Cyprien Ntaryamira, Burundian politician, 5th President of Burundi (d. 1994)
1955 – Alberta Watson, Canadian actress (d. 2015)
1956 – Peter Roebuck, English cricketer, journalist, and sportscaster (d. 2011)
1956 – Steve Vizard, Australian television host, actor, and producer
1960 – Sleepy Floyd, American basketball player and coach
1962 – Alison Nicholas, British golfer
1963 – D. L. Hughley, American actor, producer, and screenwriter
1964 – Linda Pearson, Scottish sport shooter
1965 – Allan Bateman, Welsh rugby player
1965 – Jim Knight, English politician
1966 – Alan Davies, English comedian, actor and screenwriter
1967 – Julio Bocca, Argentinian ballet dancer and director
1967 – Connie Britton, American actress
1967 – Glenn Greenwald, American journalist and author
1967 – Shuler Hensley, American actor and singer
1968 – Moira Kelly, American actress and director
1971 – Darrick Martin, American basketball player and coach
1972 – Shaquille O'Neal, American basketball player, actor, and rapper
1972 – Jaret Reddick, American singer-songwriter, guitarist, and actor
1973 – Michael Finley, American basketball player
1973 – Peter Lindgren, Swedish guitarist and songwriter
1973 – Greg Ostertag, American basketball player
1973 – Trent Willmon, American singer-songwriter and guitarist
1974 – Guy Garvey, English singer-songwriter and guitarist
1974 – Matthew Guy, Australian politician
1974 – Brad Schumacher, American swimmer
1974 – Beanie Sigel, American rapper
1975 – Aracely Arámbula, Mexican actress and singer
1975 – Yannick Nézet-Séguin, Canadian pianist and conductor
1976 – Ken Anderson, American wrestler and actor
1977 – Nantie Hayward, South African cricketer
1977 – Giorgos Karagounis, Greek international footballer
1977 – Shabani Nonda, DR Congolese footballer
1977 – Marcus Thames, American baseball player and coach
1978 – Sage Rosenfels, American football player
1978 – Chad Wicks, American wrestler
1979 – Clint Barmes, American baseball player
1979 – Érik Bédard, Canadian baseball player
1979 – David Flair, American wrestler
1979 – Tim Howard, American soccer player
1980 – Emílson Cribari, Brazilian footballer
1981 – Ellen Muth, American actress
1983 – Andranik Teymourian, Armenian-Iranian footballer
1984 – Daniël de Ridder, Dutch footballer
1984 – Eskil Pedersen, Norwegian politician
1984 – Chris Tomson, American drummer
1985 – Bakaye Traoré, French-Malian footballer
1986 – Jake Arrieta, American baseball player
1986 – Francisco Cervelli, Venezuelan-Italian baseball player
1986 – Ross Detwiler, American baseball player
1986 – Eli Marienthal, American actor
1986 – Charlie Mulgrew, Scottish footballer
1987 – Kevin-Prince Boateng, Ghanaian-German footballer
1987 – Chico Flores, Spanish footballer
1988 – Agnes Carlsson, Swedish singer
1988 – Marina Erakovic, New Zealand tennis player
1988 – Simon Mignolet, Belgian footballer
1989 – Agnieszka Radwańska, Polish tennis player
1990 – Derek Drouin, Canadian athlete
1991 – Lex Luger, American keyboard player and producer
1991 – Emma McDougall, English footballer (d. 2013)
1991 – Tyler Gregory Okonma, American rapper
1993 – Andrés Rentería, Colombian footballer
1994 – Marcus Smart, American basketball player
1995 – Georgi Kitanov, Bulgarian footballer
1996 – Christian Coleman, American sprinter
1996 – Tyrell Fuimaono, Australian rugby player
1996 – Timo Werner, German footballer
Deaths
Pre-1600
190 – Liu Bian (poisoned by Dong Zhuo) (b. 176)
653 – Li Ke, prince of the Tang Dynasty (b. 619)
766 – Chrodegang, Frankish bishop and saint
903 – Lu Guangqi, Chinese official and chancellor
903 – Su Jian, Chinese official and chancellor
1070 – Ulric I, Margrave of Carniola
1251 – Rose of Viterbo, Italian saint (b. 1235)
1353 – Roger Grey, 1st Baron Grey de Ruthyn
1466 – Alvise Loredan, Venetian admiral and statesman (b. 1393)
1490 – Ivan the Young, Ruler of Tver (b. 1458)
1491 – Richard Woodville, 3rd Earl Rivers
1531 – Pedro Arias Dávila, Spanish explorer and diplomat (b. 1440)
1601–1900
1616 – Francis Beaumont, English playwright (b. 1584)
1754 – Henry Pelham, English politician, Prime Minister of the United Kingdom (b. 1694)
1758 – Henry Vane, 1st Earl of Darlington, English politician, Lord Lieutenant of Durham (b. 1705)
1764 – Philip Yorke, 1st Earl of Hardwicke, English lawyer and politician, Lord Chancellor of the United Kingdom (b. 1690)
1796 – Guillaume Thomas François Raynal, French historian and author (b. 1713)
1836 – Deaths at the Battle of the Alamo:
James Bonham, American lawyer and soldier (b. 1807)
James Bowie, American colonel (b. 1796)
Davy Crockett, American soldier and politician (b. 1786)
William B. Travis, American lieutenant colonel and lawyer (b. 1809)
1854 – Charles Vane, 3rd Marquess of Londonderry, Irish colonel and diplomat, Under-Secretary of State for War and the Colonies (b. 1778)
1866 – William Whewell, English priest, historian, and philosopher (b. 1794)
1867 – Charles Farrar Browne, American-English author and educator (b. 1834)
1888 – Louisa May Alcott, American novelist and poet (b. 1832)
1895 – Camilla Collett, Norwegian novelist and activist (b. 1813)
1899 – Kaʻiulani of Hawaii (b. 1875)
1900 – Gottlieb Daimler, German engineer and businessman, co-founded Daimler-Motoren-Gesellschaft (b. 1834)
1901–present
1905 – John Henninger Reagan, American surveyor, judge, and politician, 3rd Confederate States of America Secretary of the Treasury (b. 1818)
1905 – Makar Yekmalyan, Armenian composer (b. 1856)
1919 – Oskars Kalpaks, Latvian colonel (b. 1882)
1920 – Ömer Seyfettin, Turkish author and educator (b. 1884)
1932 – John Philip Sousa, American conductor and composer (b. 1854)
1933 – Anton Cermak, Czech-American lawyer and politician, 44th Mayor of Chicago (b. 1873)
1935 – Oliver Wendell Holmes Jr., American colonel, lawyer, and jurist (b. 1841)
1939 – Ferdinand von Lindemann, German mathematician and academic (b. 1852)
1941 – Francis Aveling, Canadian priest, psychologist, and author (b. 1875)
1941 – Gutzon Borglum, American sculptor and academic, designed Mount Rushmore (b. 1867)
1948 – Ross Lockridge Jr., American author, poet, and academic (b. 1914)
1948 – Alice Woodby McKane, first Black woman doctor in Savannah, Georgia (b. 1865)
1950 – Albert François Lebrun, French engineer and politician, 15th President of France (b. 1871)
1951 – Ivor Novello, Welsh singer-songwriter and actor (b. 1893)
1951 – Volodymyr Vynnychenko, Ukrainian playwright and politician, Prime Minister of Ukraine (b. 1880)
1952 – Jürgen Stroop, German general (b. 1895)
1955 – Mammad Amin Rasulzade, Azerbaijani scholar and politician (b. 1884)
1961 – George Formby, English singer-songwriter and actor (b. 1904)
1964 – Paul of Greece (b. 1901)
1965 – Margaret Dumont, American actress (b. 1889)
1967 – John Haden Badley, English author and educator, founded the Bedales School (b. 1865)
1967 – Nelson Eddy, American actor and singer (b. 1901)
1967 – Zoltán Kodály, Hungarian composer, linguist, and philosopher (b. 1882)
1970 – William Hopper, American actor (b. 1915)
1973 – Pearl S. Buck, American novelist, essayist, short story writer, Nobel Prize laureate (b. 1892)
1974 – Ernest Becker, American anthropologist and author (b. 1924)
1976 – Maxie Rosenbloom, American boxer (b. 1903)
1977 – Alvin R. Dyer, American religious leader (b. 1903)
1978 – Dennis Viollet, English-American soccer player and manager (b. 1933)
1981 – George Geary, English cricketer and coach (b. 1893)
1981 – Rambhau Mhalgi, Indian politician and member of the Lok Sabha (b. 1921)
1982 – Ayn Rand, Russian-American philosopher, author, and playwright (b. 1905)
1984 – Billy Collins Jr., American boxer (b. 1961)
1984 – Martin Niemöller, German pastor and theologian (b. 1892)
1984 – Homer N. Wallin, American admiral (b. 1893)
1984 – Henry Wilcoxon, Dominican-American actor and producer (b. 1905)
1986 – Georgia O'Keeffe, American painter (b. 1887)
1988 – Mairéad Farrell, Provisional IRA volunteer (b. 1957)
1988 – Daniel McCann, Provisional IRA volunteer (b. 1957)
1988 – Seán Savage, Provisional IRA volunteer (b. 1965)
1994 – Melina Mercouri, Greek actress and politician, 9th Greek Minister of Culture (b. 1920)
1997 – Cheddi Jagan, Guyanese politician, 4th President of Guyana (b. 1918)
1997 – Michael Manley, Jamaican soldier, pilot, and politician, 4th Prime Minister of Jamaica (b. 1924)
1997 – Ursula Torday, English author (b. 1912)
1999 – Isa bin Salman Al Khalifa, Emir of Bahrain (b. 1933)
2000 – John Colicos, Canadian actor (b. 1928)
2002 – Bryan Fogarty, Canadian ice hockey player (b. 1969)
2004 – Hercules, American wrestler (b. 1957)
2004 – Frances Dee, American actress (b. 1909)
2005 – Hans Bethe, German-American physicist and academic, Nobel Prize laureate (b. 1906)
2005 – Danny Gardella, American baseball player and trainer (b. 1920)
2005 – Tommy Vance, English radio host (b. 1943)
2005 – Teresa Wright, American actress (b. 1918)
2005 – Gladys Marín, Chilean activist and political figure (b. 1938)
2006 – Anne Braden, American journalist and activist (b. 1924)
2006 – Kirby Puckett, American baseball player and sportscaster (b. 1960)
2007 – Jean Baudrillard, French photographer and theorist (b. 1929)
2007 – Ernest Gallo, American businessman, co-founded E & J Gallo Winery (b. 1909)
2008 – Peter Poreku Dery, Ghanaian cardinal (b. 1918)
2009 – Francis Magalona, Filipino rapper, producer, and actor (b. 1964)
2010 – Endurance Idahor, Nigerian footballer (b. 1984)
2010 – Mark Linkous, American singer-songwriter, guitarist, and producer (b. 1962)
2010 – Betty Millard, American philanthropist and activist (b. 1911)
2012 – Francisco Xavier do Amaral, East Timorese politician, 1st President of East Timor (b. 1937)
2012 – Donald M. Payne, American businessman and politician (b. 1934)
2012 – Helen Walulik, American baseball player (b. 1929)
2013 – Chorão, Brazilian singer-songwriter (Charlie Brown Jr.) (b. 1970)
2013 – Stompin' Tom Connors, Canadian singer-songwriter and guitarist (b. 1936)
2013 – Alvin Lee, English singer-songwriter and guitarist (b. 1944)
2013 – W. Wallace Cleland, American biochemist and academic (b. 1930)
2014 – Alemayehu Atomsa, Ethiopian educator and politician (b. 1969)
2014 – Frank Jobe, American soldier and surgeon (b. 1925)
2014 – Sheila MacRae, English-American actress, singer, and dancer (b. 1921)
2014 – Martin Nesbitt, American lawyer and politician (b. 1946)
2014 – Manlio Sgalambro, Italian philosopher, author, and poet (b. 1924)
2015 – Fred Craddock, American minister and academic (b. 1928)
2015 – Ram Sundar Das, Indian lawyer and politician, 18th Chief Minister of Bihar (b. 1921)
2015 – Enrique "Coco" Vicéns, Puerto Rican-American basketball player and politician (b. 1926)
2016 – Nancy Reagan, American actress, 42nd First Lady of the United States (b. 1921)
2016 – Sheila Varian, American horse trainer and breeder (b. 1937)
2017 – Robert Osborne, American actor and historian (b. 1932)
2018 – Peter Nicholls, Australian science fiction critic and encyclopedist (b. 1939)
2021 – Lou Ottens, Dutch engineer and inventor (b. 1926)
2021 – Graham Pink, British nurse (b. 1929)
Holidays and observances
Christian feast day:
Chrodegang
Colette
Fridolin
Kyneburga, Kyneswide and Tibba
Marcian of Tortona
William W. Mayo and Charles Frederick Menninger (Episcopal Church (USA))
Olegarius
March 6 (Eastern Orthodox liturgics)
European Day of the Righteous, commemorating those who have taken personal moral responsibility in standing up against crimes against humanity and totalitarianism. (Europe)
Foundation Day (Norfolk Island), the founding of Norfolk Island in 1788.
Independence Day (Ghana), celebrates the independence of Ghana from the UK in 1957.
External links
BBC: On This Day
Historical Events on March 6
Today in Canadian History
Days of the year
March
Morona River

The Morona River is a tributary of the Marañón River. It flows parallel to, and immediately west of, the Pastaza River, and is the last stream of any importance on the northern side of the Amazon before the Pongo de Manseriche.
It is formed from a multitude of watercourses that descend the slopes of the Ecuadorian Andes south of the gigantic volcano Sangay, but it soon reaches the plain, which commences where it receives its Cusulima branch. The Morona is navigable for small craft for about 300 miles above its mouth, but it is extremely tortuous. Canoes may ascend many of its branches, especially the Cusulima and the Miazal, the latter almost to the base of Sangay. The Morona has been the scene of many rough explorations, undertaken in the hope of finding it serviceable as a commercial route between the inter-Andean tableland of Ecuador and the Amazon River.
Tributaries of the Amazon River
Rivers of Ecuador
Rivers of Peru
International rivers of South America
Max Newman

Maxwell Herman Alexander Newman, FRS (7 February 1897 – 22 February 1984), generally known as Max Newman, was a British mathematician and codebreaker. His work in World War II led to the construction of Colossus, the world's first operational, programmable electronic computer, and he established the Royal Society Computing Machine Laboratory at the University of Manchester, which in 1948 produced the world's first working stored-program electronic computer, the Manchester Baby.
Education and early life
Newman was born Maxwell Herman Alexander Neumann in Chelsea, London, England, to a Jewish family, on 7 February 1897. His father was Herman Alexander Neumann, originally from the German city of Bromberg (now in Poland), who had emigrated with his family to London at the age of 15. Herman worked as a secretary in a company, and married Sarah Ann Pike, an Irish schoolteacher, in 1896.
The family moved to Dulwich in 1903, and Newman attended Goodrich Road school, then City of London School from 1908. At school, he excelled in classics and in mathematics. He played chess and the piano well.
Newman won a scholarship to study mathematics at St John's College, Cambridge in 1915, and in 1916 gained a First in Part I of the Cambridge Mathematical Tripos.
World War I
Newman's studies were interrupted by World War I. His father was interned as an enemy alien after the start of the war in 1914, and upon his release he returned to Germany. In 1916, Herman changed his name by deed poll to the anglicised "Newman" and Sarah did likewise in 1920. In January 1917, Newman took up a teaching post at Archbishop Holgate's Grammar School in York, leaving in April 1918. He spent some months in the Royal Army Pay Corps, and then taught at Chigwell School for six months in 1919 before returning to Cambridge. He was called up for military service in February 1918, but claimed conscientious objection due to his beliefs and his father's country of origin, and thereby avoided any direct role in the fighting.
Between the wars
Graduation
Newman resumed his interrupted studies in October 1919, and graduated in 1921 as a Wrangler (equivalent to a First) in Part II of the Mathematical Tripos, and gained distinction in Schedule B (the equivalent of Part III). His dissertation considered the use of "symbolic machines" in physics, foreshadowing his later interest in computing machines.
Early academic career
On 5 November 1923, Newman was elected a Fellow of St John's. He worked on the foundations of combinatorial topology, and proposed that a notion of equivalence be defined using only three elementary "moves". Newman's definition avoided difficulties that had arisen from earlier definitions of the concept. The publication of over twenty papers established his reputation as an "expert in modern topology". Newman wrote Elements of the Topology of Plane Sets of Points, an undergraduate text on general topology. He also published papers on mathematical logic, and solved a special case of Hilbert's fifth problem.
He was appointed a lecturer in mathematics at Cambridge in 1927. His 1935 lectures on the foundations of mathematics and Gödel's theorem inspired Alan Turing to embark on his work on the Entscheidungsproblem (decision problem) that had been posed by Hilbert and Ackermann in 1928. Turing's solution involved proposing a hypothetical programmable computing machine. In spring 1936, Turing presented Newman with a draft of "On Computable Numbers, with an Application to the Entscheidungsproblem". Newman realised the paper's importance and helped ensure swift publication. He subsequently arranged for Turing to visit Princeton, where Alonzo Church was working on the same problem using his lambda calculus. During this period, Newman started to share Turing's dream of building a stored-program computing machine.
During this time at Cambridge, he developed close friendships with Patrick Blackett, Henry Whitehead and Lionel Penrose.
In September 1937, Newman and his family accepted an invitation to work for six months at Princeton. At Princeton, he worked on the Poincaré Conjecture and, in his final weeks there, presented a proof. However, in July 1938, after he returned to Cambridge, Newman discovered that his proof was fatally flawed.
In 1939, Newman was elected a Fellow of the Royal Society.
Family life
In December 1934, he married Lyn Lloyd Irvine, a writer, with Patrick Blackett as best man. They had two sons, Edward (born 1935) and William (born 1939).
World War II
The United Kingdom declared war on Germany on 3 September 1939. Newman's father was Jewish, which was of particular concern in the face of Nazi Germany, and Lyn, Edward and William were evacuated to America in July 1940, where they spent three years before returning to England in October 1943. After Oswald Veblen, maintaining "that every able-bodied man ought to be carrying a gun or hand-grenade and fight for his country", opposed moves to bring him to Princeton, Newman remained at Cambridge and at first continued research and lecturing.
Government Code and Cypher School
By spring 1942, Newman was considering involvement in war work and made enquiries. After Patrick Blackett recommended him to the Director of Naval Intelligence, Newman was sounded out by Frank Adcock in connection with the Government Code and Cypher School at Bletchley Park.
Newman was cautious, concerned to ensure that the work would be sufficiently interesting and useful, and there was also the possibility that his father's German nationality would rule out any involvement in top-secret work. The potential issues were resolved by the summer, and he agreed to arrive at Bletchley Park on 31 August 1942. Newman was invited by F. L. (Peter) Lucas to work on Enigma but decided to join Tiltman's group working on Tunny.
Tunny
Newman was assigned to the Research Section and set to work on a German teleprinter cipher known as "Tunny". He joined the "Testery" in October. Newman enjoyed the company but disliked the work and found that it was not suited to his talents. He persuaded his superiors that Tutte's method could be mechanised, and he was assigned to develop a suitable machine in December 1942. Shortly afterwards, Edward Travis (then operational head of Bletchley Park) asked Newman to lead research into mechanised codebreaking.
The Newmanry
When the war ended, Newman was presented with a silver tankard inscribed 'To MHAN from the Newmanry, 1943–45'.
Heath Robinson
Construction started in January 1943, and the first prototype was delivered in June 1943. It was operated in Newman's new section, termed the "Newmanry", which was housed initially in Hut 11 and staffed by himself, Donald Michie, two engineers, and 16 Wrens. The Wrens nicknamed the machine the "Heath Robinson", after the cartoonist of the same name, who drew humorous drawings of absurd mechanical devices.
Colossus
The Robinson machines were limited in speed and reliability. Tommy Flowers of the Post Office Research Station, Dollis Hill, had experience with thermionic valves and built an electronic machine, the Colossus computer, which was installed in the Newmanry. This was a great success, and ten were in use by the end of the war.
Later academic career
Fielden Chair, Victoria University of Manchester
In September 1945, Newman was appointed head of the Mathematics Department and to the Fielden Chair of Pure Mathematics at the University of Manchester.
Computing Machine Laboratory
Newman lost no time in establishing the renowned Royal Society Computing Machine Laboratory at the University. In February 1946, he wrote to John von Neumann, expressing his desire to build a computing machine. The Royal Society approved Newman's grant application in July 1946. Frederic Calland Williams and Thomas Kilburn, experts in electronic circuit design, were recruited from the Telecommunications Research Establishment. Kilburn and Williams built Baby, the world's first electronic stored-program digital computer based on Alan Turing's and John von Neumann's ideas.
After the Automatic Computing Engine suffered delays and setbacks, Turing accepted Newman's offer and joined the Computing Machine Laboratory in May 1948 as Deputy Director (there being no Director). Turing joined Kilburn and Williams to work on Baby's successor, the Manchester Mark I. Collaboration between the University and Ferranti later produced the Ferranti Mark I, the first mass-produced computer to go on sale.
Retirement
Newman retired in 1964 to live in Comberton, near Cambridge. After Lyn's death in 1973, he married Margaret Penrose, widow of his friend Lionel Penrose, father of Sir Roger Penrose.
He continued to do research on combinatorial topology during a period when England was a major centre of activity, notably Cambridge under the leadership of Christopher Zeeman. Newman made important contributions leading to an invitation to present his work at the 1962 International Congress of Mathematicians in Stockholm at the age of 65, and he proved a Generalized Poincaré conjecture for topological manifolds in 1966.
At the age of 85, Newman began to suffer from Alzheimer's disease. He died in Cambridge two years later.
Honours
Fellow of the Royal Society, elected 1939
Royal Society Sylvester Medal, awarded 1958
London Mathematical Society, President 1949–1951
LMS De Morgan Medal, awarded 1962
D.Sc. University of Hull, awarded 1968
The Newman Building at Manchester was named in his honour. The building housed the pure mathematicians of the Victoria University of Manchester between moving out of the Mathematics Tower in 2004 and July 2007, when the School of Mathematics moved into its new Alan Turing Building, where a lecture room is named in his honour.
In 1946, Newman declined the offer of an OBE as he considered the offer derisory. Alan Turing had been appointed an OBE six months earlier and Newman felt that it was inadequate recognition of Turing's contribution to winning the war, referring to it as the "ludicrous treatment of Turing".
See also
List of pioneers in computer science
References
External links
Archival materials
The Max Newman Digital Archive has digital copies of materials from the library of St. John's College, Cambridge.
Measure

Measure may refer to:
Measurement, the assignment of a number to a characteristic of an object or event
Law
Ballot measure, proposed legislation in the United States
Church of England Measure, legislation of the Church of England
Measure of the National Assembly for Wales, primary legislation in Wales
Assembly Measure of the Northern Ireland Assembly (1973)
Science and mathematics
Measure (data warehouse), a property on which calculations can be made
Measure (mathematics), a systematic way to assign a number to each suitable subset of a set
Measure (physics), a way to integrate over all possible histories of a system in quantum field theory
Measure (termination), in computer program termination analysis
Measuring coalgebra, a coalgebra constructed from two algebras
Measure (Apple), an iOS augmented reality app
Other uses
Measure (album), by Matt Pond PA, 2000, and its title track
Measure (bartending) or jigger, a bartending tool used to measure liquor
Measure (journal), an international journal of formal poetry
"Measures" (Justified), a 2012 episode of the TV series Justified
Measure (music), or bar, in musical notation
Measure (typography), line length in characters per line
Coal measures, the coal-bearing part of the Upper Carboniferous System
The Measure (SA), an American punk rock band
See also
Countermeasure, a measure or action taken to counter or offset another one
Quantity, a property that can exist as a multitude or magnitude
Measure for Measure, a play by William Shakespeare
Massachusetts Bay Transportation Authority

The Massachusetts Bay Transportation Authority (abbreviated MBTA and known colloquially as "the T") is the public agency responsible for operating most public transportation services in Greater Boston, Massachusetts. Earlier modes of public transportation in Boston were independently owned and operated; many were first folded into a single agency with the formation of the Metropolitan Transit Authority (MTA) in 1947. The MTA was replaced in 1964 with the present-day MBTA, which was established as an individual department within the Commonwealth of Massachusetts before becoming a division of the Massachusetts Department of Transportation (MassDOT) in 2009.
The MBTA and Philadelphia's Southeastern Pennsylvania Transportation Authority (SEPTA) are the only US transit agencies that operate all five major types of terrestrial mass transit vehicles: light rail vehicles (the Ashmont–Mattapan High-Speed and Green Lines); rapid transit trains (the Blue, Orange, and Red Lines); regional rail trains (the Commuter Rail); electric trolleybuses (the Silver Line and several routes in the northern suburbs of Boston); and motor buses (MBTA bus). In 2016, the system averaged 1,277,200 passengers per weekday, of which heavy rail averaged 552,500 and the light-rail lines 226,500, making it the fourth-busiest subway system and the busiest light rail system in the United States. As of late 2019, average weekday ridership of the commuter rail system was 119,800, making it the sixth-busiest commuter rail system in the U.S.
The MBTA is the largest consumer of electricity in Massachusetts, and the second-largest land owner (after the Department of Conservation and Recreation).
In 2007, its CNG bus fleet was the largest consumer of alternative fuels in the state. The MBTA operates an independent law enforcement agency, the Massachusetts Bay Transportation Authority Police.
History
Mass transportation in Boston was provided by private companies, often granted charters by the state legislature for limited monopolies, with powers of eminent domain to establish a right-of-way, until the creation of the MTA in 1947. Development of mass transportation both followed and shaped economic and population patterns.
Railways
Shortly after the steam locomotive became practical for mass transportation, the private Boston and Lowell Railroad was chartered in 1830. The line, which opened in 1835 as one of the oldest railroads in North America, connected Boston to Lowell, a major mill town in the Merrimack Valley of northeastern Massachusetts. This marked the beginning of the development of American intercity railroads, which in Massachusetts would later become the MBTA Commuter Rail system and the Green Line D branch.
Streetcars
Starting with the opening of the Cambridge Railroad on March 26, 1856, a profusion of streetcar lines appeared in Boston under chartered companies. Despite the many changes of companies, Boston has the oldest continuously working streetcar system in the world. Many of these companies consolidated, and animal-drawn vehicles were converted to electric propulsion.
Subways and elevated railways
Streetcar congestion in downtown Boston led to the subways in 1897 and elevated rail in 1901. The Tremont Street subway was the first rapid transit tunnel in the United States. Grade-separation added capacity and avoided delays caused by cross streets. The first elevated railway and the first rapid transit line in Boston were built three years before the first underground line of the New York City Subway, but 34 years after the first London Underground lines, and long after the first elevated railway in New York City; its Ninth Avenue El started operations on July 1, 1868 in Manhattan as an elevated cable car line.
Various extensions and branches were added at both ends, bypassing more surface tracks. As grade-separated lines were extended, street-running lines were cut back for faster downtown service. The last elevated heavy rail or "El" segments in Boston were at the extremities of the Orange Line: its northern end was relocated in 1975 from Everett to Malden, MA, and its southern end was relocated into the Southwest Corridor in 1987. However, the Green Line's Causeway Street Elevated remained in service until 2004, when it was relocated into a tunnel with an incline to reconnect to the Lechmere Viaduct. The Lechmere Viaduct and a short section of steel-framed elevated at its northern end remain in service, though the elevated section will be cut back slightly and connected to a northwards viaduct extension in 2017 as part of the Green Line Extension.
Public enterprise
The old elevated railways proved to be an eyesore and required several sharp curves in Boston's twisty streets. The Atlantic Avenue Elevated was closed in 1938 amidst declining ridership and was demolished in 1942. As rail passenger service became increasingly unprofitable, largely due to rising automobile ownership, government takeover prevented abandonment and dismantlement. The MTA purchased and took over subway, elevated, streetcar, and bus operations from the Boston Elevated Railway in 1947.
In the 1950s, the MTA opened new subway extensions, while the last two streetcar lines running into the Pleasant Street Portal of the Tremont Street Subway were replaced with buses in 1953 and 1962. In 1958, the MTA purchased the Highland branch from the Boston and Albany Railroad, reopening it a year later as a rapid transit line (now the Green Line D branch).
While the operations of the MTA were relatively stable by the early 1960s, the privately operated commuter rail lines were in freefall. The New Haven Railroad, New York Central Railroad, and Boston and Maine Railroad were all financially struggling; deferred maintenance was hurting the mainlines while most branch lines had been discontinued. The 1945 Coolidge Commission plan assumed that most of the commuter rail lines would be replaced by shorter rapid transit extensions, or simply feed into them at reduced service levels. Passenger service on the entire Old Colony Railroad system serving the southeastern part of the state was abandoned by the New Haven Railroad in 1959, triggering calls for state intervention. Between January 1963 and March 1964, the Mass Transportation Commission tested different fare and service levels on the B&M and New Haven systems. Determining that commuter rail operations were important but could not be financially self-sustaining, the MTC recommended an expansion of the MTA to commuter rail territory.
On August 3, 1964, the MBTA succeeded the MTA, with an enlarged service area intended to subsidize continued commuter rail operations. The original 14-municipality MTA district was expanded to 78 cities and towns. Several lines were briefly cut back while contracts with out-of-district towns were reached, but, except for the outer portions of the Central Mass branch (cut back from Hudson to South Sudbury), West Medway branch (cut back from West Medway to Millis), Blackstone Line (cut back from Blackstone to Franklin), and B&M New Hampshire services (cut back from Portsmouth to Newburyport), these cuts were temporary; however, service on three branch lines (all of them with only one round trip daily: one morning rush-hour trip in to Boston, and one evening rush-hour trip back out to the suburbs) was dropped permanently between 1965 and 1976 (the Millis (the new name of the truncated West Medway branch) and Dedham Branches were discontinued in 1967, while the Central Mass branch was abandoned in 1971). The MBTA bought the Penn Central (New York Central and New Haven) commuter rail lines in January 1973, Penn Central equipment in April 1976, and all B&M commuter assets in December 1976; these purchases served to make the system state-owned with the private railroads retained solely as operators. Only two branch lines were abandoned after 1976: service on the Lexington branch (also with only one round trip daily) was discontinued in January 1977 after a snowstorm blocked the line, while the Lowell Line's full-service Woburn branch was eliminated in January 1981 due to poor track conditions.
The MBTA assigned colors to its four rapid transit lines in 1965, and lettered the branches of the Green Line from north to south. Shortages of streetcars, among other factors, caused bustitution of rail service on two branches of the Green Line. The A branch ceased operating entirely in 1969 and was replaced by the 57 bus, while the E branch was truncated from Arborway to Heath Street in 1985, with the section between Heath Street and Arborway being replaced by the 39 bus.
The MBTA purchased bus routes in the outer suburbs to the north and south from the Eastern Massachusetts Street Railway in 1968. As with the commuter rail system, many of the outlying routes were dropped shortly before or after the takeover due to low ridership and high operating costs.
In the 1970s, the MBTA received a boost from the Boston Transportation Planning Review area-wide re-evaluation of the role of mass transit relative to highways. Producing a moratorium on highway construction inside Route 128, numerous mass transit lines were planned for expansion by the Voorhees-Skidmore, Owings and Merrill-ESL consulting team. The removal of elevated lines continued, and the closure of the Washington Street Elevated in 1987 brought the end of rapid transit service to the Roxbury neighborhood. Between 1971 and 1985, the Red Line was extended both north and south, providing not only additional subway system coverage, but also major parking structures at several of the terminal and intermediate stations.
In 1981, seventeen people and one corporation were indicted for their roles in a number of kickback schemes at the MBTA. Massachusetts Secretary of Transportation and MBTA Chairman Barry Locke was convicted of five counts of bribery and sentenced to 7 to 10 years in prison.
21st century
By 1999, the district was expanded further to 175 cities and towns, adding most that were served by or adjacent to commuter rail lines, though the MBTA did not assume responsibility for local service in those communities adjacent to or served by commuter rail. In 2016, the Town of Bourne voted to join the MBTA district, bringing the number of MBTA communities to 176.
A turning point in funding occurred in 2000. Prior to July 1, 2000, the MBTA was reimbursed by the Commonwealth of Massachusetts for all costs above revenue collected (net cost of service). Beginning on that date, the T was granted a dedicated revenue stream consisting of amounts assessed on served cities and towns, along with a dedicated 20% portion of the 5% state sales tax. The MBTA now had to live within this "forward funding" budget.
The Commonwealth assigned to the MBTA responsibility for increasing public transit to compensate for increased automobile pollution from the Big Dig. However, these projects have strained the MBTA's limited resources, since the Big Dig project did not include funding for these improvements. Since 1988, the MBTA has been the fastest expanding transit system in the country, even as Greater Boston has been one of the slowest growing metropolitan areas in the United States. The MBTA subsequently went into debt, and rates underwent an appreciable hike on January 1, 2007.
In 2006, the creation of the MetroWest Regional Transit Authority saw several towns subtract their MWRTA assessment from their MBTA assessment, though the amount of funding the MBTA received remained the same. The next year, the MBTA started commuter rail service to the Greenbush section of Scituate, the third branch of the Old Colony service. Rhode Island also paid for extensions of the Providence/Stoughton Line to T.F. Green Airport in 2010 and Wickford Junction in 2012. A new station on the Fairmount Line, the Talbot Avenue station, opened in November 2012.
On June 26, 2009, Governor Deval Patrick signed a law to place the MBTA along with other state transportation agencies within the administrative authority of the Massachusetts Department of Transportation (MassDOT), with the MBTA now part of the Mass Transit division (MassTrans).
The 2009 transportation law continued the MBTA corporate structure and changed the MBTA board membership to the five Governor-appointed members of the Mass DOT Board.
Charlie Baker administration (2015–present)
In February 2015, record-breaking snowfall in Boston during the 2014–15 North American winter caused severe delays on all MBTA subway lines, and many long-term operational and financial problems with the entire MBTA system came under greater public attention. Massachusetts Governor Charlie Baker indicated at the time that he was reluctant to discuss the financing issues but that he would "have more to say about that in a couple of weeks." Baker subsequently announced the formation of a special advisory panel to diagnose the MBTA's problems and write a report recommending proposals to address them; the panel released its report in April 2015. The next month, Baker appointed a new MassDOT Board of Directors and proposed a five-year winter resiliency plan, with $83 million to be spent on updating infrastructure, purchasing new equipment, and improving operations during severe weather. A new state law established the MBTA Fiscal and Management Control Board, effective July 17, 2015, with expanded powers to reform the agency during a five-year period. Its term was extended by another year in 2020.
Ground was broken for the $38.5 million renovation of Ruggles Station, in Roxbury, in August 2017. This was followed by the start of construction on the Green Line Extension the following June. In April 2018, the MBTA Silver Line began operating a route from Chelsea to South Station.
A Red Line derailment that resulted in train delays for several months brought more attention to capital maintenance problems at the T. After complaints from many riders and business groups, the governor proposed adding $50 million for an independent team to speed up inspections and capital projects, along with general efforts to accelerate existing capital spending from $1 billion to $1.5 billion per year. Replacement of the Red Line signal system was accelerated, including equipment that was damaged in the derailment. Baker proposed allocating $2.7 billion to the MBTA from the state's five-year transportation bond bill, plus more money from the proposed multi-state Transportation Climate Initiative.
A December 2019 report by the MBTA's Fiscal and Management Control Board panel found "safety is not the priority at the T, but it must be." The report said "there is a general feeling that fiscal controls over the years may have gone too far, which coupled with staff cutting has resulted in the inability to accomplish required maintenance and inspections, or has hampered work keeping legacy system assets fully functional."
COVID-19 pandemic
In February 2020, the COVID-19 pandemic began to impact Massachusetts. When the stay-at-home order was issued the following month, businesses closed or sent staff to work from home, and people were advised to avoid riding public transit unless necessary. At the lowest point, MBTA ridership dropped about 78% on buses, 92% on the subway, 71% on paratransit, and 97% on commuter rail. Bus and subway ran on a modified Saturday schedule; commuter rail was on a reduced schedule, and ferries were shut down completely. To facilitate social distancing from drivers, buses started running fare-free with rear-door-only boarding, passengers were required to wear face masks (except small children and people with relevant medical conditions), and the agency began frequently sanitizing vehicles and stations. Driver availability was limited as some employees contracted the virus. The T received $827 million in federal aid for FY2020 and FY2021 to make up for increased costs and lost revenue.
In June, the MBTA announced that commuter rail tickets and passes valid as of March 10 would be valid for 90 days, starting on June 22. It also made various fare changes to encourage riders to shift from potentially crowded bus or subway, including discounted ten-ride tickets, half-price tickets for youth, and Zone 1A fares extended to Lynn and River Works stations.
2021 budget proposal
Due to the COVID-19 pandemic, ridership on the MBTA declined by 87%, which forced Massachusetts legislators and the MBTA to consider a plan that would eliminate weekend commuter rail service and shut it down after 9 p.m. on weeknights, eliminate 25 bus routes, and stop subway and bus services at midnight, among other changes to scale back services. This plan, if implemented through a vote by the Fiscal and Management Control Board, would save Massachusetts more than $130 million. The loss in services could mean that up to 1,700 riders would not be able to take the bus and 733 riders would not be able to take the train. Supporters believe this is the best plan, as ridership has largely decreased due to the pandemic and it is not feasible to continue providing services that would go unused, especially when there are alternatives to public transportation, such as personal vehicles. By saving money through cutting services, the city plans to use the money to restore services once the pandemic has ended. Supporters also claim that reduced services will still be sufficient for those who rely on public transport during the pandemic. Changes would not be implemented right away; rather, they could be introduced gradually, with the earliest coming in January 2021 and others as late as summer 2021, allowing the MBTA to adjust service based on ridership needs. On the other hand, opponents argue that reducing services will make it harder for riders, who are typically low-income or people of color, to get to essential jobs. Riders would be forced to find other means of transportation, which could mean using personal vehicles, leading to an increased dependence on the automobility paradigm.
Opponents argue that public transportation should be treated as a public good, which means asking wealthier people and corporations to pay their share for the upkeep of transportation as a way to achieve mobility justice.
Services
Buses
The MBTA bus system is the nation's sixth largest by ridership and comprises over 150 routes across the Greater Boston area. The area served by the MBTA's bus operations is somewhat larger than its subway and light rail service area, but is significantly smaller than that served by the MBTA's commuter rail operation. At least eight other regional transit authorities also provide bus services within that larger area, these being the Rhode Island Public Transit Authority, Brockton Area Transit Authority, Cape Ann Transportation Authority, Greater Attleboro Taunton Regional Transit Authority, Lowell Regional Transit Authority, Merrimack Valley Regional Transit Authority, Montachusett Regional Transit Authority, and Worcester Regional Transit Authority. All of these authorities have their own fare structures and some subcontract operation to private bus companies. In many cases, their buses serve as feeders to the MBTA commuter rail.
Within MBTA's bus service area, transfers from the subway are free if using a CharlieCard (for local buses); transfers to the subway require paying the difference between bus and the higher subway fare (for local buses; if not using a CharlieCard, full subway fare must be paid in addition to full bus fare). Bus-to-bus transfers (for local buses) are free unless paying cash. Many of the outlying routes run express along major highways to downtown. The buses are colored yellow on maps and in station decor.
The Silver Line is the MBTA's first service designated as bus rapid transit (BRT), even though it lacks many of the characteristics of bus rapid transit. The first segment began operations in 2002, replacing the 49 bus, which in turn had replaced the Washington Street Elevated section of the Orange Line. A full subway fare was charged, with free transfers to the subways downtown, until January 1, 2007, when the fare system was revised to categorize the service as a "bus" for fare purposes. The "Washington Street" segment runs along various downtown streets, mostly in dedicated bus lanes on Washington Street itself. Two Washington Street routes start at Nubian station in Roxbury; the SL5 terminates at Downtown Crossing on Temple Place, while the SL4 terminates at South Station on Essex Street.
The "Waterfront" section opened at the end of 2004 and connects South Station to Logan Airport via the Ted Williams Tunnel with route SL1, and to South Boston (Design Center area) with route SL2. A new service to Chelsea opened on April 21, 2018, running via the same tunnel that the SL1 uses and stopping at the Blue Line's Airport station. The buses that run the Waterfront section are 2004–05 dual-mode vehicles, operating as trackless trolleys in the Silver Line tunnel and as diesels outside. Service to Logan Airport began in June 2005. The Waterfront segment is classified as a "subway" for fare purposes. A transfer between segments is possible at South Station.
A "Phase III" tunneled segment was proposed to connect the two segments for through service, but it was controversial due to high cost and the fact that many did not consider Phase I to be adequate replacement service for the old Elevated. All Phase III tunneling proposals have been suspended due to lack of funds, as has the Urban Ring, which was intended to expand upon existing crosstown buses.
The MBTA contracts with private bus companies to provide subsidized service on certain routes outside of the usual fare structure. These are known collectively as the HI-RIDE Commuter Bus service, and are not numbered or mapped in the same way as integral bus services.
Four routes connecting to Harvard Station (Red Line) still run as trackless trolleys; there was once a much larger trackless trolley system. (See Trolleybuses in Greater Boston.)
In FY2005, there were on average 363,500 weekday boardings of MBTA-operated buses and trackless trolleys (not including the Silver Line), or 31.8% of the MBTA system. Another 4,400 boardings (0.38%) occurred on subsidized bus routes operated by private carriers.
In June 2020, in the aftermath of the onset of the COVID-19 pandemic, the MBTA began providing real-time information on crowding. The information is available on the MBTA website, on E Ink screens, and in the Transit app. At launch, the service was only available on bus routes 1, 15, 16, 22, 23, 31, 32, 109, and 110, and it remains unclear whether further lines or transit modes will be added or whether this feature will remain permanent. The feature itself is not new; Google Maps has provided such data since 2019.
Subway
The subway system has three heavy rail rapid transit lines (the Red, Orange and Blue Lines), and two light rail lines (the Green Line and the Ashmont–Mattapan High-Speed Line, the latter designated an extension of the Red Line). The system operates according to a spoke-hub distribution paradigm, with the lines running radially between central Boston and its environs. It is common usage in Boston to refer to all four of the color-coded rail lines which run underground as "the subway" or "the T", regardless of the actual railcar equipment used.
All four subway lines cross downtown, forming a quadrilateral configuration, and the Orange and Green Lines (which run approximately parallel in that district) also connect directly at two stations just north of downtown. The Red Line and Blue Line are the only pair of subway lines which do not have a direct transfer connection to each other. Because the various subway lines do not consistently run in any given compass direction, it is customary to refer to line directions as "inbound" or "outbound". Inbound trains travel towards the four downtown transfer stations, and outbound trains travel away from these hub stations.
The Green Line has four branches in the west: B (Boston College), C (Cleveland Circle), D (Riverside), and E (Heath Street). The A branch formerly went to Watertown, filling in the north-to-south letter assignment pattern, and the E branch formerly continued beyond Heath Street to Arborway.
The Red Line has two branches in the south, Ashmont and Braintree, named after their terminal stations.
The colors were assigned on August 26, 1965 in conjunction with design standards developed by Cambridge Seven Associates, and have served as the primary identifier for the lines since the 1964 reorganization of the MTA into the MBTA. The Orange Line is so named because it used to run along Orange Street (now lower Washington Street), as the former "Orange Street" also was the street that joined the city to the mainland through Boston Neck in colonial times; the Green Line because it runs adjacent to parts of the Emerald Necklace park system; the Blue Line because it runs under Boston Harbor; and the Red Line because its northernmost station used to be at Harvard University, whose school color is crimson.
The four transit lines all use standard rail gauge, but are otherwise incompatible; trains of one line would have to be modified to run on another. Orange and Blue Line trains are similar enough that modification of some Blue Line trains for operation on the Orange Line was considered, although ultimately rejected for cost reasons. Also, some of the new Blue Line cars from Siemens Transportation were tested on the Orange Line after hours, before acceptance for revenue service on the Blue Line. There are no direct track connections between lines, except between the Red Line and Ashmont–Mattapan High-Speed Line, but all except the Blue Line have little-used connections to the national rail network, which have been used for deliveries of railcars and supplies.
Opened in September 1897, the four-track-wide segment of the Green Line tunnel between Park Street and Boylston stations was the first subway in the United States, and has been designated a National Historic Landmark. The downtown portions of what are now the Green, Orange, Blue, and Red line tunnels were all in service by 1912. Additions to the rapid transit network occurred in most decades of the 1900s, and continue in the 2000s with the addition of Silver Line bus rapid transit and planned Green Line expansion. (See History and Future plans sections.)
In FY2005, there were on average 628,400 weekday boardings on the rapid transit and light rail lines (including the Silver Line Bus Rapid Transit), or 55.0% of the MBTA system.
On January 29, 2014, the MBTA completed a countdown clock display system, alerting passengers to arriving trains, at all 53 heavy rail subway stations (the Red, Blue and Orange Lines). The MBTA introduced countdown clocks in underground Green Line stations during 2015. Unlike the other countdown clocks which count down in minutes, the Green Line clocks count down the number of stops away the train is.
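The two display conventions described above can be sketched as follows; this is a purely illustrative function (hypothetical names, not the MBTA's actual countdown software), showing minutes for heavy rail and stop counts for the Green Line:

```python
# Illustrative sketch of the two countdown-display conventions described
# above: heavy rail stations (Red, Orange, Blue) show minutes until
# arrival, while underground Green Line stations show stops away.
def countdown_text(line: str, minutes_away: int, stops_away: int) -> str:
    """Format an arrival display entry for the given line."""
    if line == "Green":
        return f"{stops_away} stops away"
    return f"{minutes_away} min"
```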
Commuter rail
The MBTA Commuter Rail system is a regional rail network that reaches from Boston into the suburbs of eastern Massachusetts. The system consists of twelve main lines, three of which have two branches. The rail network operates according to a spoke-hub distribution paradigm, with the lines running radially outward from the city of Boston. Eight of the lines converge at South Station, with four of these passing through Back Bay station. The other four converge at North Station. There is no passenger connection between the two sides; the Grand Junction Railroad is used for non-revenue equipment moves accessing the maintenance facility. The North–South Rail Link has been proposed to connect the two halves of the system; it would be constructed under the Central Artery tunnel of the Big Dig.
Special MBTA trains are run over the Franklin Line and the Providence/Stoughton Line to Foxborough station for New England Patriots home games and other events at Gillette Stadium. The CapeFLYER intercity service, operated on summer weekends, uses MBTA equipment and operates over the Middleborough/Lakeville Line. Amtrak runs regularly scheduled intercity rail service over four lines: the Lake Shore Limited over the Framingham/Worcester Line, Acela Express and Northeast Regional services over the Providence/Stoughton Line, and the Downeaster over sections of the Lowell Line and Haverhill Line. Freight trains run by Pan Am Southern, Pan Am Railways, CSX Transportation, the Providence and Worcester Railroad, and the Fore River Railroad also use parts of the network.
The first commuter rail service in the United States was operated over what is now the Framingham/Worcester Line beginning in 1834. Within the next several decades, Boston was the center of a massive rail network, with eight trunk lines and dozens of branches. By 1900, ownership was consolidated under the Boston and Maine Railroad to the north, the New York Central Railroad to the west, and the New York, New Haven and Hartford Railroad to the south. Most branches and one trunk line – the former Old Colony Railroad main – had their passenger services discontinued during the middle of the 20th century. In 1964, the MBTA was formed to subsidize the failing suburban railroad operations, with an eye towards converting many to extensions of the existing rapid transit system. The first unified branding of the system was applied on October 8, 1974, with "MBTA Commuter Rail" naming and purple coloration analogous to the four subway lines. The system continued to shrink – mostly with the loss of marginal lines with one daily round trip – until 1981. The system has been expanded since, with four lines restored (Fairmount Line in 1979, Old Colony Lines in 1997, and Greenbush Line in 2007), six extended, and a number of stations added and rebuilt, especially on the Fairmount Line.
Several further expansions are planned or proposed. The South Coast Rail project, for which preliminary construction began in 2014, would extend the Stoughton section of the Providence/Stoughton Line to Taunton, with two branches to New Bedford and Fall River. Extensions of the Providence/Stoughton Line to Kingston, the Middleborough/Lakeville Line to Buzzards Bay, and the Lowell Line into New Hampshire are also proposed. Infill stations at West Station and South Salem are under construction or planned.
Each commuter rail line has up to eleven fare zones, numbered 1A and 1 through 10. Riders are charged based on the number of zones they travel through. Tickets can be purchased on the train, from ticket counters or machines in some rail stations, or with a mobile app. If a local vendor or ticket machine is available, riders will pay a surcharge for paying with cash on board. Fares range from $2.25 to $12.50, with multi-ride and monthly passes available. In 2016, the system averaged 122,600 daily riders, making it the fourth-busiest commuter rail system in the nation.
The MBTA commuter rail network was the first in the nation to offer free on-board Wi-Fi. It offers Wi-Fi-enabled coaches on all train sets.
Ferries
The MBTA boat system comprises several ferry routes via Boston Harbor. One of these is an inner harbor service, linking the downtown waterfront with the Boston Navy Yard in Charlestown. The other routes are commuter routes, linking downtown to Hingham, Hull, and Salem. Some commuter services operate via Logan International Airport.
All boat services are operated by Boston Harbor Cruises (BHC), a private-sector company under contract to the MBTA. In FY2005, the MBTA boat system carried 4,650 passengers (0.41% of total MBTA passengers) per weekday.
Paratransit
The MBTA contracts out operation of "The Ride", a door-to-door service for people with disabilities. Paratransit services carry 5,400 passengers on a typical weekday, or 0.47% of the MBTA system ridership. The two private service providers under contractual agreement with the MBTA for The Ride are Veterans Transportation LLC and National Express Transit (NEXT).
In September 2016, the MBTA announced that paratransit users would be able to get rides from Uber and Lyft. Riders would pay $2 for a pickup within a few minutes (more for longer trips worth more than $15) instead of $3.15 for a scheduled pickup the next day. The MBTA would pay $13 instead of $31 per ride ($46 per trip when fixed costs of The Ride are considered).
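The per-ride economics quoted above can be tabulated directly (figures taken from the text; this is illustrative arithmetic only, not MBTA accounting):

```python
# Cost comparison from the 2016 on-demand paratransit pilot described
# above. Rider fares and agency costs are the figures quoted in the text.
rider_fare_on_demand = 2.00       # pickup within a few minutes
rider_fare_scheduled = 3.15       # next-day scheduled pickup
mbta_cost_on_demand = 13.00       # MBTA's cost per on-demand ride
mbta_cost_scheduled = 31.00       # MBTA's cost per scheduled ride
mbta_cost_scheduled_full = 46.00  # including The Ride's fixed costs

mbta_savings_per_ride = mbta_cost_scheduled - mbta_cost_on_demand
mbta_savings_with_fixed = mbta_cost_scheduled_full - mbta_cost_on_demand
```

On these figures, the agency saves $18 per ride, or $33 per ride once The Ride's fixed costs are included.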
Bicycles
Conventional bicycles are generally allowed on MBTA commuter rail, commuter boat, and rapid transit lines during off-peak hours and all day on weekends and holidays. However, bicycles are not allowed at any time on the Green Line or on the Ashmont–Mattapan High-Speed Line segment of the Red Line. Buses equipped with bike racks at the front (including the Silver Line) may always accommodate bicycles, up to the capacity limit of the racks. The MBTA states that 95% of its buses are now equipped with bike racks; trackless trolleys still lack this capability.
Due to congestion and tight clearances, bicycles are banned from Park Street, Downtown Crossing, and Government Center stations at all times.
However, compact folding bicycles are permitted on all MBTA vehicles at all times, provided that they are kept completely folded for the duration of the trip, including passage through faregates. Gasoline-powered vehicles, bike trailers, and Segways are prohibited.
No special permit is required to take a bicycle onto an MBTA vehicle, but bicyclists are expected to follow the rules and hours of operation. Cyclists under 16 years old are supposed to be accompanied by a parent or legal guardian. Detailed rules, and an explanation of how to use front-of-bus bike racks and bike parking are on the MBTA website.
The MBTA says that over 95% of its stations are equipped with bike racks, many of them under cover from the weather. In addition, over a dozen stations are equipped with "Pedal & Park" fully enclosed areas protected with video surveillance and controlled door access, for improved security. To obtain access, a personally registered CharlieCard must be used. Registration is done online, and requires a valid email address and the serial number of the CharlieCard. All bike parking is free of charge.
Parking
The MBTA operates park and ride facilities at 103 locations with a total capacity of 55,000 automobiles, and is the owner of the largest number of off-street paid parking spaces in New England. The number of spaces at stations with parking varies from a few dozen to over 2,500. The larger lots and garages are usually near a major highway exit, and most lots fill up during the morning rush hour. There are some 22,000 spaces on the southern portion of the commuter rail system, 9,400 on the northern portion and 14,600 at subway stations. The parking fee ranges from $4 to $7 per day, and overnight parking (maximum 7 days) is permitted at some stations.
Management for a number of parking lots owned by the MBTA is handled by a private contractor. The 2012 contract with LAZ Parking (which was not its first) was terminated in 2017 after employees were discovered "skimming" revenue; the company paid $5.5 million to settle the case. A new contract with stronger performance incentives and anti-fraud penalties was then awarded to Republic Parking System of Tennessee.
Customers parking in MBTA-owned and operated lots with existing cash "honor boxes" can pay for parking online or via phone while in their cars or once they board a train, bus, or commuter boat. The MBTA has switched from ParkMobile to PayByPhone as its provider for mobile parking payments by smartphone. Monthly parking permits are available, offering a modest discount. Detailed parking information by station is available online, including prices, estimated vacancy rate, and number of accessible and bicycle parking slots.
The MBTA has a policy for electric vehicle charging stations in its parking spaces, but does not yet have such facilities available.
From time to time the MBTA has made various agreements with companies that contribute to commuting options. One company the MBTA selected was Zipcar; the MBTA provides Zipcar with a limited number of parking spaces at various subway stations throughout the system.
Hours of operation
Traditionally, the MBTA has stopped running around 1 am each night, despite the fact that bars and clubs in most areas of Boston are open until 2 am. Like nearly all subways worldwide, the MBTA's subway does not have parallel express and local tracks, so much rail maintenance is only done when the trains are not running. An MBTA spokesperson has said, "with a 109-year-old system you have to be out there every night" to do necessary maintenance. The MBTA did experiment with "Night Owl" substitute bus service from 2001 to 2005, but abandoned it because of insufficient ridership, citing a $7.53 per rider cost to keep the service open, five times the cost per passenger of an average bus route.
A modified form of the MBTA's previous "Night Owl" service was experimentally reinstated starting in the spring of 2014 – this time, all subway lines were proposed to run until 3 am on weekends, along with the 15 most heavily used bus lines and the para-transit service "The Ride".
Starting March 28, 2014, the late-night service began operation on a one-year trial basis, with service continuation depending on late-night ridership and on possible corporate sponsorship. Late-night ridership was stable, and much higher than on the earlier failed experimental service. However, it remained unclear whether and on what basis the program might be extended past its first year. The extended hours program was not implemented on the MBTA commuter rail operations.
In early 2016, the MBTA decided to cancel late-night service because of lack of funding. Service ended on March 19, 2016, with the last train leaving at 2 a.m.
In 2018, the MBTA further tried "Early Morning and Late Night Bus Service Pilots".
In June 2019, a year after the trials, the board voted to make schedule changes that incorporated some late-night service on a long-term basis.
Ridership
During Fiscal Year 2013, the entire MBTA system had a typical weekday passenger ridership of 1,297,650. The MBTA's rapid transit lines (Red, Green, Orange, and Blue) accounted for 59% of all rides, buses accounted for 30%, and commuter rail accounted for 10% of all rides. The MBTA's ferries and paratransit accounted for the remaining 1% of rides.
Passenger ridership has been steadily growing over the years, and between 2010 and 2013, the system saw passenger ridership grow 4.6% or an additional 57,000 daily passengers to the system.
Funding
Fares and fare collection
The MBTA has various fare structures for its various types of service. The CharlieCard electronic farecard is accepted on the subway and bus systems, but not on commuter rail, ferry, or paratransit services. Passengers pay for subway and bus rides at faregates in station entrances or fareboxes in the front of vehicles; MBTA employees manually check tickets on the commuter rail and ferries.
Since the 1980s, the MBTA has offered discounted monthly passes on all modes for the convenience of daily commuters and other frequent riders. One-day and seven-day passes, intended primarily for tourists, are available for buses, subway, and inner harbor ferries.
The MBTA has periodically raised fares to match inflation and keep the system financially solvent. A substantial increase effective July 2012 raised public ire including an "Occupy the MBTA" protest. A transportation funding law passed in 2013 limits MBTA fare increases to 7% every two years. Subsequent fare increases took place in 2014, 2016, and 2019.
Several local politicians, including Boston Mayor Michelle Wu, Representative Ayanna Pressley, and Senator Edward J. Markey, have proposed to eliminate MBTA fares.
Subway and bus
All subway trips (Green Line, Blue Line, Orange Line, Red Line, Ashmont-Mattapan Line, and the Waterfront section of the Silver Line) cost $2.40 for all users. Local bus and trackless trolley fares (including the Washington Street section of the Silver Line) are $1.70 for all users. All transfers between subway lines are free with all fare media. Passengers using CharlieCards can transfer free from a subway to a bus, and from a bus to a subway for the difference in price ("step-up fare"). CharlieTicket holders can transfer free between buses, but not between subway and bus, except between rapid transit and the Washington Street section of the Silver Line. Paying directly with cash is only available on buses, Green Line surface stops, and the Ashmont-Mattapan Line; the higher CharlieTicket price is charged.
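The CharlieCard transfer rules above can be sketched as a simple calculation. The function below is purely illustrative (hypothetical names and structure, not the MBTA's actual fare engine), using the fares stated in the text:

```python
# Illustrative sketch of CharlieCard transfer rules described above:
# subway-to-subway and subway-to-bus transfers are free; bus-to-subway
# transfers charge only the "step-up" difference; bus-to-bus is free.
SUBWAY_FARE = 2.40
LOCAL_BUS_FARE = 1.70

def charliecard_trip_cost(legs):
    """Total cost for a sequence of legs, each 'subway' or 'bus'."""
    total = 0.0
    prev = None
    for leg in legs:
        if prev is None:
            # First boarding pays the full fare for that mode.
            total += SUBWAY_FARE if leg == "subway" else LOCAL_BUS_FARE
        elif prev == "bus" and leg == "subway":
            total += SUBWAY_FARE - LOCAL_BUS_FARE  # step-up fare
        # All other transfers (subway->subway, subway->bus, bus->bus)
        # are free under these rules.
        prev = leg
    return round(total, 2)
```

For example, a bus ride followed by a subway ride costs the same $2.40 as a direct subway trip, since the step-up fare covers the difference.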
The MBTA operates "Inner Express" and "Outer Express" buses to suburbs outside the subway system. Inner Express bus trips cost $4.25; Outer Express trips cost $5.25. Free transfers are available to the subway and local buses with a CharlieCard, and to local buses with a CharlieTicket.
CharlieTickets are available from ticket vending machines in MBTA rapid transit stations. CharlieCards are not dispensed by the machines, but are available free of charge on request at most MBTA Customer Service booths in stations, or at the CharlieCard Store at Downtown Crossing station. As given out, the CharlieCards are "empty", and must have value added at an MBTA ticket machine before they can be used.
The fare system, including on-board and in-station fare vending machines, was purchased from German-based Scheidt and Bachmann, which developed the technology. The CharlieCards were developed by Gemalto and later by Giesecke & Devrient. In 2006 electronic fares replaced metal tokens, which had been used on and off on transit systems in Boston for over a century.
Until 2007, not all subway fares were identical – passengers were not charged for boarding outbound Green Line trains at surface stops, while double fares were charged for the outer ends of the Green Line D branch and the Red Line Braintree branch. As part of a general fare hike effective January 1, 2007, the MBTA eliminated these inconsistent fares.
Subway and bus fare history
Commuter Rail
Commuter rail fares are on a zone-based system, with fares dependent on the distance from downtown. Rides between Zone 1A stations – South Station, Back Bay, most of the Fairmount Line, and eight other stations within several miles of downtown – cost $2.40, the same as a subway fare with a CharlieCard. Fares for other stations range from $5.75 from Zone 1 (~5–10 miles from downtown) to $14.50 from Zone 10 (~60 miles). All Massachusetts stations are Zone 8 or closer; only T.F. Green Airport and Wickford Junction in Rhode Island are in Zones 9 and 10, respectively.
Interzone fares – for trips that do not go to Zone 1A – are offered at a substantial discount to encourage riders to take the commuter rail for less common commuting patterns for which transit is not usually taken. Discounted monthly passes are available for all trips; 10-ride passes at full price are also available for trips to Zone 1A. All monthly passes include unlimited trips on the subway and local bus; some outer-zone monthlies also offer free use of express buses and ferries. A cash-on-board surcharge of $3.00 is added for trips originating from stations with fare vending machines.
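As an illustration of the zone structure described above, a minimal fare lookup might look like the following. The zone table is abbreviated to the example fares given in the text, and all names are hypothetical:

```python
# Abbreviated, illustrative zone fare table from the text; the full
# official table is maintained by the MBTA.
ZONE_FARES = {
    "1A": 2.40,   # downtown-area stations, same as a CharlieCard subway fare
    "1": 5.75,    # roughly 5-10 miles from downtown
    "10": 14.50,  # roughly 60 miles (Wickford Junction, RI)
}

CASH_ON_BOARD_SURCHARGE = 3.00

def commuter_rail_fare(zone: str, cash_on_board: bool = False,
                       station_has_machine: bool = True) -> float:
    """One-way fare for a trip to or from Zone 1A."""
    fare = ZONE_FARES[zone]
    # The surcharge applies only where a ticket could have been bought
    # beforehand (a vendor or fare vending machine at the origin station).
    if cash_on_board and station_has_machine:
        fare += CASH_ON_BOARD_SURCHARGE
    return fare
```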
MBTA boat
The Inner Harbor Ferry costs $3.25 per ride and is covered by a Zone 1A monthly commuter rail pass. Single rides cost $8.50 from Hull or Hingham to Boston, $17.00 from Hull or Hingham to Logan Airport, and $13.75 from Boston to Logan Airport.
The Ride
Fares on The Ride, the MBTA's paratransit program, are structured differently from other modes. Passengers using The Ride must maintain an account with the MBTA in order to pay for service. Fares are $3.35 for "ADA trips" originating within the ADA-mandated distance of fixed-route bus or subway service and booked in advance, and $5.60 for "premium trips" outside the mandated area.
Discounted fares
Discounted fares, as well as discounted monthly local bus and subway passes, are available to seniors aged 65 and over and to permanently disabled passengers, who use a special photo CharlieCard (called "Senior ID" and "Transportation Access Pass", respectively). Holders of these passes are also entitled to 50% off Commuter Rail fares. Passengers who are legally blind ride for free on all MBTA services (including express buses and the Commuter Rail) with a "Blind Access Card".
Children under 12 ride for free with an adult (up to 2 per adult). Military personnel, state police officers, police officers and firefighters from the MBTA service area, and certain government officials (Commonwealth Department of Public Utilities employees and state elevator inspectors) ride at no charge upon presentation of proper ID, or if dressed in official work uniforms.
Middle school and high school students receive the aforementioned discounts on fares. Student discounts require a "Student CharlieCard" or "S-Card" issued through the holder's school which is valid year-round. College students are not generally eligible for reduced fares, but some colleges offer a "Semester Pass" program. A special "Youth Pass" program was introduced in 2017, allowing young adults less than 25 years old who reside in participating cities or towns and are enrolled in specific low income programs to pay reduced fares.
Budget
Since the "forward funding" reform in 2000, the MBTA has been funded primarily through a dedicated 16% share of the state sales tax, excluding the meals tax (with a minimum dollar amount guarantee); since the statewide rate is 6.25%, this equals 1% of taxable non-meal purchases statewide. The authority is also funded by passenger fares and formula assessments of the cities and towns in its service area (excepting those which are assessed for the MetroWest Regional Transit Authority). Supplemental income is obtained from its paid parking lots, renting space to retail vendors in and around stations, rents from utility companies using MBTA rights of way, selling surplus land and movable property, advertising on vehicles and properties, and federal operating subsidies for special programs.
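The arithmetic behind the sales-tax dedication can be checked directly (a trivial verification, not MBTA code):

```python
# 16% of the statewide 6.25% sales tax equals 1% of taxable
# non-meal purchases, as stated above.
sales_tax_rate = 0.0625   # statewide sales tax rate
mbta_share = 0.16         # MBTA's dedicated share of that tax
effective_rate = sales_tax_rate * mbta_share
assert abs(effective_rate - 0.01) < 1e-12  # i.e. 1% of purchases
```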
A May 2019 report found the MBTA had a maintenance backlog of approximately $10 billion, which it hopes to clear by 2032 by increasing spending on capital projects.
The Capital Investment Program is a rolling 5-year plan which programs capital expenses. The draft FY2009-2014 CIP allocates $3,795M, including $879M in projects funded from non-MBTA state sources (required for Clean Air Act compliance), and $299M in projects with one-time federal funding from the American Recovery and Reinvestment Act of 2009. Capital projects are paid for by federal grants, allocations from the general budget of the Commonwealth of Massachusetts (for legal commitments and expansion projects) and MBTA bonds (which are paid off through the operating budget). The FY2014 budget includes $1.422 billion for operating expenses and $443.8M in debt and lease payments.
The FY2010 budget was supplemented by $160 million in sales tax revenue when the statewide rate was raised from 5% to 6.25%, to avoid service cuts or a fare increase in a year when deferred debt payments were coming due.
Capital improvements and planning process
The Boston Metropolitan Planning Organization is responsible for overall regional surface transportation planning. As required by federal law for projects to be eligible for federal funding (except earmarks), the MPO maintains a fiscally constrained 20+ year Regional Transportation Plan for surface transportation expansion, the current edition of which is called Journey to 2030. The required 4-year MPO plan is called the Transportation Improvement Program.
The MBTA maintains its own 25-year capital planning document, called the Program for Mass Transportation, which is fiscally unconstrained. The agency's 4-year plan is called the Capital Improvement Plan; it is the primary mechanism by which money is actually allocated to capital projects. Major capital spending projects must be approved by the MBTA Board, and except for unexpected needs, are usually included in the initial CIP.
In addition to federal funds programmed through the Boston MPO, and MBTA capital funds derived from fares, sales tax, municipal assessments, and other minor internal sources, the T receives funding from the Commonwealth of Massachusetts for certain projects. The state may fund items in the State Implementation Plan (SIP) – such as the Big Dig mitigation projects – which is the plan required under the Clean Air Act to reduce air pollution. (All of Massachusetts is designated as a clean air "non-attainment" zone.)
Projects underway and future plans
Blue Line
There is a proposal to extend the Blue Line northward to Lynn, with two potential extension routes having been identified. One proposed path would run through marshland alongside the existing Newburyport/Rockport commuter rail line, while the other would extend the line along the remainder of the BRB&L right of way.
In addition, the MBTA has committed to designing an extension of the line's southern terminus westward to Charles/MGH, where it would connect with the Red Line. This was one of the mitigation measures the Commonwealth of Massachusetts agreed to offset increased automobile emissions from the Big Dig, but it was later replaced in this agreement by other projects.
Green Line
To settle a lawsuit with the Conservation Law Foundation over increased automobile emissions from the Big Dig, the Commonwealth of Massachusetts agreed to extend the Green Line north to Somerville and Medford, two suburbs currently under-served by the MBTA. The plan starts at a relocated Lechmere station and terminates at College Avenue in Medford and Union Square in Somerville. The original settlement-imposed deadline was December 31, 2014. Projected daily ridership is 8,420. After projected costs increased to $3 billion, the project was halted in 2015 and scaled back. The revised project broke ground in June 2018 and is expected to serve passengers beginning in late 2021.
Another mitigation project in the initial settlement was restoration of service on the E branch between Heath Street and Arborway/Forest Hills. A revised settlement agreement resulted in the substitution of other projects with similar air quality benefits. The state Executive Office of Transportation promised to consider other transit enhancements in the Arborway corridor.
Orange and Red Lines
In October 2013, MassDOT announced plans for a $1.3 billion subway car order for the Orange and Red Lines, which would replace and expand the existing car fleets and add more frequent service. The MassDOT Board awarded a $566.6 million contract to the China-based manufacturer CNR (which became part of CRRC the following year) to build 404 replacement railcars for the Orange Line and Red Line. The other bidders were Bombardier Transportation, Kawasaki Heavy Industries, and Hyundai Rotem. CNR began building the cars at a new manufacturing plant in Springfield, Massachusetts, with initial deliveries expected in 2018 and all cars in service by 2023. The Board forwent federal funding so that the contract could specify the cars be built in Massachusetts, in order to create a local railcar manufacturing industry. In addition to the new rolling stock, the $1.3 billion allocated for the project will pay for testing, signal improvements, expanded maintenance facilities, and other related expenses. Sixty percent of the cars' components are sourced from the United States. Replacement of the signal systems, which will increase reliability and allow more frequent trains, is expected to be complete by 2022, at a total cost of $218 million for both lines.
Commuter rail
There are several proposed extensions to current commuter rail lines. An extension of the Stoughton Line known as South Coast Rail is proposed to serve Fall River and New Bedford. Critics argue that building the extension does not make economic sense.
An extension of the Providence Line past Providence to T. F. Green Airport and Wickford Junction in Rhode Island opened in 2012. The Rhode Island Department of Transportation is also studying the feasibility of serving existing Amtrak stations in Kingston and Westerly as well as constructing new stations in Cranston, East Greenwich, and West Davisville. Federal funding has also been provided for preliminary planning of a new station in Pawtucket.
In September 2009, CSX Transportation and the Commonwealth of Massachusetts finalized a $100 million agreement to purchase CSX's Framingham-to-Worcester tracks, as well as some other track, to improve service on the Framingham/Worcester Line; a liability issue that had held up the agreement was resolved. There is also a project underway to upgrade the Fitchburg Line with cab signaling and to construct a second track along a stretch near Acton that is shared with freight traffic, so that the Fitchburg-to-Boston trip will take only about an hour. Completion is expected in December 2015.
The state of New Hampshire created the New Hampshire Rail Transit Authority and allocated money to build platforms at Nashua and Manchester. An article in The Eagle-Tribune claimed that Massachusetts was negotiating to buy property which has the potential to extend the Haverhill Line to Plaistow, New Hampshire.
Massachusetts agreed in 2005 to make improvements on the Fairmount Line part of its legally binding commitment to mitigate increased air pollution from the Big Dig. These improvements, including four new infill stations, were supposed to be complete by December 31, 2011. The total cost of the project was estimated at $79.4 million, and it was expected to divert 220 daily trips from automobiles to transit. Three of the new stations have opened; the fourth has been delayed by community opposition. In 2014, the MBTA announced it would purchase diesel multiple unit (DMU) self-propelled rail cars for the Fairmount Line, with eventual expansion to five other lines to be known as the Indigo Line. The planned DMU procurement was canceled in 2015.
No direct rail connection exists between North Station and South Station, effectively splitting the commuter rail network into separate pieces. A North–South Rail Link has been proposed to unite the two halves of the commuter rail system, to allow more convenient and efficient through-routed service. However, because of high cost, Massachusetts withdrew its sponsorship of the proposal in 2006, in communications with the United States Department of Transportation. Advocacy groups continue to press for the project as a better alternative than expanding South Station, which would also be costly but provide fewer overall improvements in service.
MBTA Massachusetts Realty Group
As one of the largest landowners in the Commonwealth, the MBTA established a joint public-private agency to manage its extensive inventory of property holdings and land.
This allows outside entities to obtain right-of-way (ROW) grants on property the MBTA administers. The agency processes ROW applications as efficiently and economically as possible, and authorizes grants at the authorized officer's discretion. Grants are generally made to create a stream of revenue for the MBTA beyond normal fare revenue. The agency also assists persons or organizations seeking concessions, public advertising space, or property easements.
Surplus or under-utilized property under the agency's responsibility is occasionally sold through bidding. This may include land formerly used for the state's streetcar network, equipment depots, electric substations, former railroad lines and yards, and other properties. Because of its extensive long-haul rail corridors, the MBTA also works with telecommunications and utility providers to authorize use of its land for ROW projects, including renewable energy installations, electric power lines and energy corridors, fiber-optic lines, communications sites, and road, trail, canal, flume, pipeline, or reservoir uses.
Management and administration
In 2015, Massachusetts Governor Charlie Baker signed new legislation creating a financial control board to oversee the MBTA, replacing the Massachusetts Department of Transportation's Board of Directors in the role of overseeing the transit authority. The Fiscal and Management Control Board (FMCB) started meeting in July 2015 and was charged with bringing financial stability to the agency. It reported to Massachusetts Secretary of Transportation Stephanie Pollack. Three of the five members of the MBTA FMCB were also members of the Massachusetts Department of Transportation. The FMCB's term expired at the end of June 2021 and was not extended. It was dissolved and replaced by a new governing body known simply as the MBTA Board of Directors and consisting of seven members.
The Massachusetts Secretary of Transportation leads the executive management team of MassDOT in addition to serving in the Governor's Cabinet. The MBTA's executive management team is led by its General Manager, who is currently also serving as the MassDOT Rail and Transit Administrator, overseeing all public transit in the state.
The MBTA Advisory Board represents the cities and towns in the MBTA service district. The municipalities are assessed a total of $143M annually. In return, the Advisory Board has veto power over the MBTA operating and capital budgets, including the power to reduce the overall amount.
The MBTA is headquartered in the State Transportation Building (10 Park Plaza) in Boston, with the operations control center at 45 High Street. The agency operates service from a number of bus garages, rail yards, and maintenance facilities. The MBTA maintains its own police force, the Massachusetts Bay Transportation Authority Police, which has jurisdiction in MBTA facilities and vehicles.
Key people
Board of Directors
The seven members of the 2021-created board are as follows:
Betsy Taylor (Chair)
Robert Butler
Thomas "Scott" Darling
Thomas P. Koch
Travis McCready
Mary Beth Mello
Jamey Tesler, Secretary (head) of the state's Department of Transportation
MassDOT Board of Directors
Massachusetts Secretary of Transportation Jamey Tesler (Chair)
Timothy King
Chrystal Kornegay
Brian Lang
Dean Mazzarella
Robert Moylan, Jr.
Vanessa Otero
Betsy Taylor (Vice Chair)
Monica Tibbits-Nutt
General managers
Thomas McLernon: 1960–1965
Rush B. Lincoln Jr.: 1965–1967
Leo J. Cusick: 1967–1970
Joseph C. Kelly (acting): 1970
Joseph C. Kelly: 1970–1975
Bob Kiley: 1975–1979 (as chairman/CEO)
Robert Foster: 1979–1980 (as chairman/CEO)
Barry Locke: 1980–1981 (as chairman/CEO)
James O'Leary: 1981–1989
Thomas P. Glynn: 1989–1991
John J. Haley Jr.: 1991–1995
Patrick Moynihan: 1995–1997
Robert H. Prince: 1997–2001
Michael H. Mulhern: 2002–2005
Daniel Grabauskas: 2005–2009
Richard A. Davey: 2010–2011
Jonathan Davis (interim): 2011–2012
Beverly A. Scott: 2012–2015
Frank DePaola (interim): 2015–2016
Brian Shortsleeve (acting): 2016–2017
Steve Poftak (interim): 2017–2017
Luis Manuel Ramírez: 2017–2018
Jeff Gonneville (interim): 2018–2018
Steve Poftak: 2019–present
Employees and unions
The MBTA employs 6,346 workers, of whom roughly 600 are in part-time jobs.
Structurally, the employees of the MBTA function as part of a handful of trade unions. The largest union of the MBTA is the Carmen's Union (Local 589), representing bus and subway operators. This includes full and part-time bus drivers, motorpersons and streetcar motorpersons, full and part-time train attendants, and Customer Service Agents (CSAs). Further unions include the Machinists Union, Local 264; Electrical Workers Union, Local 717; the Welder's Union, Local 651; the Executive Union; the Office and Professional Employees International Union, Local 453; the Professional and Technical Engineers Union, Local 105; and the Office and Professional Employees Union, Local 6.
Within the authority, employees are ranked according to seniority (or "rating"). This is categorized by an employee's five- or six-digit badge number, though some of the longest-serving employees still have only three or four digits. An employee's badge number indicates the relative length of employment with the MBTA; badges are issued in sequential order. The rating structure determines the order in which perks are offered to employees, such as the choice of quarterly route assignments ("picks"), overtime offerings, and even the order in which new hires are transferred from part-time roles to full-time roles.
In popular culture
In 1951, the growing subway network was the setting of "A Subway Named Mobius", a science fiction short story written by the American astronomer Armin Joseph Deutsch. The tale described a Boston subway train which accidentally became a "phantom" by becoming lost in the fourth dimension, analogous to a topological Möbius strip. In 2001, a half-century later, the narrative was awarded a Retro Hugo Award for Best Short Story at the World Science Fiction Convention.
In 1959, the satirical song "M.T.A." (informally known as "Charlie on the MTA") was a hit single, as performed by the folksingers the Kingston Trio. It tells the absurd story of a passenger named Charlie, who cannot pay a newly imposed 5-cent exit fare, and thus remains trapped in the subway system. The song was still well known in 2006, when the MBTA named its new electronic farecards the "CharlieCard" and "CharlieTicket".
See also
List of MBTA subway stations
List of United States rapid transit systems by ridership
MBTA v. Anderson
Transportation in Boston
Boston Street Railway Association
References
Further reading
External links
MBTA Advisory Board
MBTA Vehicle Inventory – an unofficial listing of MBTA equipment
Light rail in Massachusetts
Massachusetts railroads
Passenger rail transportation in Massachusetts
Passenger rail transportation in Rhode Island
Providence metropolitan area
Underground rapid transit in the United States
Transport companies established in 1897
1897 establishments in Massachusetts
Government agencies established in 1964
1964 establishments in Massachusetts
Meson

In particle physics, mesons are hadronic subatomic particles composed of an equal number of quarks and antiquarks, usually one of each, bound together by strong interactions. Because mesons are composed of quark subparticles, they have a meaningful physical size, a diameter of roughly one femtometer (1×10−15 m), which is about 0.6 times the size of a proton or neutron. All mesons are unstable, with the longest-lived lasting for only a few hundredths of a microsecond. Heavier mesons decay to lighter mesons and ultimately to stable electrons, neutrinos and photons.
Outside the nucleus, mesons appear in nature only as short-lived products of very high-energy collisions between particles made of quarks, such as cosmic rays (high-energy protons and neutrons) and baryonic matter. Mesons are routinely produced artificially in cyclotrons or other accelerators in the collisions of protons, antiprotons, or other particles.
Higher-energy (more massive) mesons were created momentarily in the Big Bang, but are not thought to play a role in nature today. However, such heavy mesons are regularly created in particle accelerator experiments, in order to understand the nature of the heavier types of quark that compose the heavier mesons.
Mesons are part of the hadron particle family, which are defined simply as particles composed of two or more quarks. The other members of the hadron family are the baryons: subatomic particles composed of odd numbers of valence quarks (at least 3), and some experiments show evidence of exotic mesons, which do not have the conventional valence quark content of two quarks (one quark and one antiquark), but 4 or more.
Because quarks have a spin , the difference in quark number between mesons and baryons results in conventional two-quark mesons being bosons, whereas baryons are fermions.
Each type of meson has a corresponding antiparticle (antimeson) in which quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion (π+) is made of one up quark and one down antiquark; and its corresponding antiparticle, the negative pion (π−), is made of one up antiquark and one down quark.
Because mesons are composed of quarks, they participate in both the weak and strong interactions. Mesons with net electric charge also participate in the electromagnetic interaction. Mesons are classified according to their quark content, total angular momentum, parity and various other properties, such as C-parity and G-parity. Although no meson is stable, those of lower mass are nonetheless more stable than the more massive, and hence are easier to observe and study in particle accelerators or in cosmic ray experiments. The lightest group of mesons is less massive than the lightest group of baryons, meaning that they are more easily produced in experiments, and thus exhibit certain higher-energy phenomena more readily than do baryons. But mesons can be quite massive: for example, the J/psi meson (J/ψ) containing the charm quark, first seen in 1974, is about three times as massive as a proton, and the upsilon meson (ϒ) containing the bottom quark, first seen in 1977, is about ten times as massive.
History
From theoretical considerations, in 1934 Hideki Yukawa predicted the existence and the approximate mass of the "meson" as the carrier of the nuclear force that holds atomic nuclei together. If there were no nuclear force, all nuclei with two or more protons would fly apart due to electromagnetic repulsion. Yukawa called his carrier particle the meson, from μέσος mesos, the Greek word for "intermediate", because its predicted mass was between that of the electron and that of the proton, which has about 1,836 times the mass of the electron. Yukawa or Carl David Anderson, who discovered the muon, had originally named the particle the "mesotron", but he was corrected by the physicist Werner Heisenberg (whose father was a professor of Greek at the University of Munich). Heisenberg pointed out that there is no "tr" in the Greek word "mesos".
The first candidate for Yukawa's meson, in modern terminology known as the muon, was discovered in 1936 by Carl David Anderson and others in the decay products of cosmic ray interactions. The "mu meson" had about the right mass to be Yukawa's carrier of the strong nuclear force, but over the course of the next decade, it became evident that it was not the right particle. It was eventually found that the "mu meson" did not participate in the strong nuclear interaction at all, but rather behaved like a heavy version of the electron, and was eventually classed as a lepton like the electron, rather than a meson. Physicists in making this choice decided that properties other than particle mass should control their classification.
There were years of delays in the subatomic particle research during World War II (1939–1945), with most physicists working in applied projects for wartime necessities. When the war ended in August 1945, many physicists gradually returned to peacetime research. The first true meson to be discovered was what would later be called the "pi meson" (or pion). This discovery was made in 1947, by Cecil Powell, Hugh Muirhead, César Lattes, and Giuseppe Occhialini, who were investigating cosmic ray products at the University of Bristol in England, based on photographic films placed in the Andes mountains. Some of those mesons had about the same mass as the already-known mu "meson", yet seemed to decay into it, leading physicist Robert Marshak to hypothesize in 1947 that it was actually a new and different meson. Over the next few years, more experiments showed that the pion was indeed involved in strong interactions. The pion (as a virtual particle) is also believed to be the primary force carrier for the nuclear force in atomic nuclei. Other mesons, such as the virtual rho mesons are involved in mediating this force as well, but to a lesser extent. Following the discovery of the pion, Yukawa was awarded the 1949 Nobel Prize in Physics for his predictions.
In the past, the word meson was sometimes used to mean any force carrier, such as "the Z0 meson", which is involved in mediating the weak interaction. However, this use has fallen out of favor, and mesons are now defined as particles composed of pairs of quarks and antiquarks.
Overview
Spin, orbital angular momentum, and total angular momentum
Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of 1/2 ħ. The ħ is often dropped because it is the "fundamental" unit of spin, and it is implied that "spin 1" means "spin 1 ħ". (In some systems of natural units, ħ is chosen to be 1, and therefore does not appear in equations.)
Quarks are fermions—specifically in this case, particles having spin 1/2 (S = 1/2). Because spin projections vary in increments of 1 (that is 1 ħ), a single quark has a spin vector of length 1/2, and has two spin projections (Sz = +1/2 and Sz = −1/2). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 and three spin projections (Sz = +1, Sz = 0, and Sz = −1), called the spin-1 triplet. If two quarks have unaligned spins, the spin vectors add up to make a vector of length S = 0 and only one spin projection (Sz = 0), called the spin-0 singlet. Because mesons are made of one quark and one antiquark, they can be found in triplet and singlet spin states. The latter are called scalar mesons or pseudoscalar mesons, depending on their parity (see below).
There is another quantity of quantized angular momentum, called the orbital angular momentum (quantum number L), that is the angular momentum due to quarks orbiting each other, and comes in increments of 1 ħ. The total angular momentum (quantum number J) of a particle is the combination of intrinsic angular momentum (spin) and orbital angular momentum. It can take any value from J = |L − S| up to J = L + S, in increments of 1.
Particle physicists are most interested in mesons with no orbital angular momentum (L = 0), therefore the two groups of mesons most studied are the S = 1; L = 0 and S = 0; L = 0, which corresponds to J = 1 and J = 0, although they are not the only ones. It is also possible to obtain J = 1 particles from S = 0 and L = 1. How to distinguish between the S = 1, L = 0 and S = 0, L = 1 mesons is an active area of research in meson spectroscopy.
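As a standard illustration of the combination rule just described (textbook quantum mechanics, not specific to any one meson), the allowed total angular momenta for a given spin and orbital state are:

```latex
% J runs from |L - S| to L + S in integer steps; for example:
J \in \{\, |L - S|, \ldots, L + S \,\}, \qquad
S = 1,\; L = 1 \;\Rightarrow\; J \in \{0, 1, 2\}
```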
P-parity
P-parity is left-right parity, or spatial parity, and was the first of several "parities" discovered, and so is often called just "parity". If the universe were reflected in a mirror, most laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called parity (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation).
Based on this, one might think that, if the wavefunction for each particle (more precisely, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: In order for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = −), whereas the other particles are said to have positive or even parity (P = +1, or alternatively P = +).
For mesons, parity is related to the orbital angular momentum by the relation:

P = (−1)^(L + 1)

where the (−1)^L is a result of the parity of the corresponding spherical harmonic of the wavefunction. The "+1" comes from the fact that, according to the Dirac equation, a quark and an antiquark have opposite intrinsic parities. Therefore, the intrinsic parity of a meson is the product of the intrinsic parities of the quark (+1) and antiquark (−1). As these are different, their product is −1, and so it contributes the "+1" that appears in the exponent.
As a consequence, all mesons with no orbital angular momentum (L = 0) have odd parity (P = −1).
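Evaluating the parity relation for the two lowest orbital states makes this explicit (a standard consequence, shown for illustration):

```latex
P = (-1)^{L+1}: \qquad
L = 0 \;\Rightarrow\; P = (-1)^{1} = -1, \qquad
L = 1 \;\Rightarrow\; P = (-1)^{2} = +1
```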
C-parity
C-parity is only defined for mesons that are their own antiparticle (i.e. neutral mesons). It represents whether or not the wavefunction of the meson remains the same under the interchange of their quark with their antiquark. If

|qq̄⟩ = |q̄q⟩

then, the meson is "C even" (C = +1). On the other hand, if

|qq̄⟩ = −|q̄q⟩

then the meson is "C odd" (C = −1).
C-parity is rarely studied on its own, but more commonly in combination with P-parity into CP-parity. CP-parity was originally thought to be conserved, but was later found to be violated on rare occasions in weak interactions.
G-parity
G-parity is a generalization of the C-parity. Instead of simply comparing the wavefunction after exchanging quarks and antiquarks, it compares the wavefunction after exchanging the meson for the corresponding antimeson, regardless of quark content.
If, under this exchange,

G|ψ⟩ = +|ψ⟩

then, the meson is "G even" (G = +1). On the other hand, if

G|ψ⟩ = −|ψ⟩

then the meson is "G odd" (G = −1).
Isospin and charge
Original isospin model
The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed that they were actually the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937.
When the first mesons were discovered, they too were seen through the eyes of isospin and so the three pions were believed to be the same particle, but in different isospin states.
The mathematics of isospin was modeled after the mathematics of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Because the "pion particle" had three "charged states", it was said to be of isospin I = 1. Its "charged states" π+, π0, and π−, corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively. Another example is the "rho particle", also with three charged states. Its "charged states" ρ+, ρ0, and ρ−, corresponded to the isospin projections I3 = +1, I3 = 0, and I3 = −1 respectively.
Replacement by the quark model
This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be an artifact of the similar masses of the u and d quarks. Because the u and d quarks have similar masses, particles made of the same number of them also have similar masses.
The exact u and d quark composition determines the charge, because u quarks carry charge +2/3 whereas d quarks carry charge −1/3. For example, the three pions all have different charges:

π+ = ud̄
π0 = a quantum superposition of uū and dd̄ states
π− = dū

but they all have similar masses (roughly 140 MeV/c2) as they are each composed of a same total number of up and down quarks and antiquarks. Under the isospin model, they were considered a single particle in different charged states.
After the quark model was adopted, physicists noted that the isospin projections were related to the up and down quark content of particles by the relation

I3 = 1/2 [(nu − nū) − (nd − nd̄)]

where the n symbols are the count of up and down quarks and antiquarks.
In the "isospin picture", the three pions and three rhos were thought to be the different states of two particles. However, in the quark model, the rhos are excited states of pions. Isospin, although conveying an inaccurate picture of things, is still used to classify hadrons, leading to unnatural and often confusing nomenclature.
Because mesons are hadrons, the isospin classification is also used for them all, with the quantum number calculated by adding 1/2 for each positively charged up-or-down quark-or-antiquark (up quarks and down antiquarks), and −1/2 for each negatively charged up-or-down quark-or-antiquark (up antiquarks and down quarks).
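As a worked illustration of this counting rule, the positive pion (one up quark, one down antiquark) comes out with isospin projection +1:

```latex
I_3 = \tfrac{1}{2}\big[(n_u - n_{\bar{u}}) - (n_d - n_{\bar{d}})\big]
    = \tfrac{1}{2}\big[(1 - 0) - (0 - 1)\big] = +1
```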
Flavour quantum numbers
The strangeness quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. The higher the mass, the lower (more negative) the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds nonet figures). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb nonets. Because only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers only works well for the nonets made of one u, one d and one other quark and breaks down for the other nonets (for example ucb nonet). If the quarks all had the same mass, their behaviour would be called symmetric, because they would all behave in exactly the same way with respect to the strong interaction. However, as quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken.
It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, B′, T) by the Gell-Mann–Nishijima formula:

Q = I3 + 1/2 (B + S + C + B′ + T)

where S, C, B′, and T represent the strangeness, charm, bottomness and topness flavour quantum numbers respectively. They are related to the number of strange, charm, bottom, and top quarks and antiquarks according to the relations:

S = −(ns − ns̄)
C = +(nc − nc̄)
B′ = −(nb − nb̄)
T = +(nt − nt̄)

meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content:

Q = 2/3 [(nu − nū) + (nc − nc̄) + (nt − nt̄)] − 1/3 [(nd − nd̄) + (ns − ns̄) + (nb − nb̄)]
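For example, applying the Gell-Mann–Nishijima formula to the positive pion (I3 = +1, with B and all flavour quantum numbers zero for a light meson) recovers its charge:

```latex
Q = I_3 + \tfrac{1}{2}(B + S + C + B' + T) = 1 + \tfrac{1}{2}(0) = +1
```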
Classification
Mesons are classified into groups according to their isospin (I), total angular momentum (J), parity (P), G-parity (G) or C-parity (C) when applicable, and quark (q) content. The rules for classification are defined by the Particle Data Group, and are rather convoluted. The rules are presented below, in table form for simplicity.
Types of meson
Mesons are classified into types according to their spin configurations. Some specific configurations are given special names based on the mathematical properties of their spin configuration.
Nomenclature
Flavourless mesons
Flavourless mesons are mesons made of pair of quark and antiquarks of the same flavour (all their flavour quantum numbers are zero: S = 0, C = 0, B′ = 0, T = 0). The rules for flavourless mesons are:
In addition
When the spectroscopic state of the meson is known, it is added in parentheses.
When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses.
When the meson is in its ground state, nothing is added in parentheses.
Flavoured mesons
Flavoured mesons are mesons made of pair of quark and antiquarks of different flavours. The rules are simpler in this case: The main symbol depends on the heavier quark, the superscript depends on the charge, and the subscript (if any) depends on the lighter quark. In table form, they are:
In addition
If JP is in the "normal series" (i.e., JP = 0+, 1−, 2+, 3−, ...), a superscript ∗ is added.
If the meson is not pseudoscalar (JP = 0−) or vector (JP = 1−), J is added as a subscript.
When the spectroscopic state of the meson is known, it is added in parentheses.
When the spectroscopic state is unknown, mass (in MeV/c2) is added in parentheses.
When the meson is in its ground state, nothing is added in parentheses.
Exotic mesons
There is experimental evidence for particles that are hadrons (i.e., are composed of quarks) and are color-neutral with zero baryon number, and thus by conventional definition are mesons. Yet, these particles do not consist of a single quark/antiquark pair, as all the other conventional mesons discussed above do. A tentative category for these particles is exotic mesons.
There are at least five exotic meson resonances that have been experimentally confirmed to exist by two or more independent experiments. The most statistically significant of these is the Z(4430), discovered by the Belle experiment in 2007 and confirmed by LHCb in 2014. It is a candidate for being a tetraquark: a particle composed of two quarks and two antiquarks. See the main article above for other particle resonances that are candidates for being exotic mesons.
List
See also
Mesonic molecule
Standard Model
Citations
General references
External links
A table of some mesons and their properties
Particle Data Group—Compiles authoritative information on particle properties
hep-ph/0211411: The light scalar mesons within quark models
Naming scheme for hadrons (a PDF file)
Mesons made thinkable, an interactive visualisation allowing physical properties to be compared
Recent findings
What Happened to the Antimatter? Fermilab's DZero Experiment Finds Clues in Quick-Change Meson
CDF experiment's definitive observation of matter-antimatter oscillations in the Bs meson
Marvel Super Heroes (role-playing game)

{{Infobox game
|title= Marvel Super Heroes
|image=
|caption= Cover of Marvel Superheroes: Advanced Set
|designer= Jeff Grubb
|publisher= TSR
|date= 1984 (1st edition), 1986 (Advanced Game)
|genre= Superhero fiction
|system= Custom
|footnotes=
}}

Marvel Super Heroes (MSHRPG) is a role-playing game set in the Marvel Universe, first published by TSR as the boxed set Marvel Super Heroes: The Heroic Role-Playing Game under license from Marvel Comics in 1984. In 1986, TSR published the Marvel Superheroes Advanced Game, an expanded edition. Jeff Grubb designed both editions, and Steve Winter wrote both editions. Both use the same game system.
The game lets players assume the roles of Marvel superheroes such as Spider-Man, Daredevil, Hulk, Captain America, the Fantastic Four, and the X-Men.
Grubb designed the game to be easily understood, and the simplest version, found in the 16-page "Battle Book" of the Basic Set, contains a bare-bones combat system sufficient to resolve comic book style superhero fights.
System
Attributes
Players resolve most game situations by rolling percentile dice and comparing the results against a column of the colorful "Universal Results Table". The attribute used determines which column to use; different tasks map to different attributes.
All characters have seven basic attributes:
Fighting determines hit probability in and defense against hand-to-hand attacks.
Agility determines hit probability in and defense against ranged attacks, feats of agility vs. the environment, and acrobatics.
Strength determines damage inflicted by hand-to-hand attacks, grappling, or lifting and breaking heavy objects.
Endurance determines resistance to physical damage (e.g., poison, disease, death). It also determines how long a character can fight and how fast a character can move at top speed.
Reason determines the success of tasks relating to knowledge, puzzle-solving, and advanced technology.
Intuition determines the success of tasks relating to awareness, perception, and instinct.
Psyche determines the success of tasks relating to willpower, psionics, and magic.
Players sometimes refer to this set of attributes and the game system as a whole by the acronym "FASERIP". Attribute scores for most characters range from 1 to 100, where normal human ability is Typical (6), and peak (non-superheroic) human ability is Excellent (20). The designers minimize use of the numerical figures, instead preferring adjectives like "Incredible" (36-45) and "Amazing" (46-62). A "Typical" (5-7) attribute has a 50% base chance for success at most tasks relating to that attribute. As an attribute increases, the chance of success increases about 5% per 10 points. Thus a character with an "Amazing" (50) attribute has a 75% chance of success at tasks relating to that attribute.
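The scaling just described can be sketched numerically. The function below is an illustrative approximation, not an official rule: it assumes the stated 50% baseline at a Typical (6) score and applies the quoted "about 5% per 10 points" growth linearly, which lands near (though not exactly at) the 75% figure cited for Amazing (50).

```python
# Hypothetical sketch of the approximate FASERIP success scaling
# described above; the linear formula is an assumption, not the
# published Universal Results Table.
def approx_success_chance(attribute: int) -> float:
    """Rough percent chance of a green (or better) result."""
    base = 50.0  # Typical (~6) baseline quoted in the text
    return min(99.0, base + 5.0 * ((attribute - 6) / 10.0))

print(approx_success_chance(6))   # 50.0
print(approx_success_chance(50))  # 72.0 (close to the ~75% cited)
```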
Superpowers and origins
Beyond the seven attributes, characters have superpowers that function on a mostly ad hoc basis, and each character's description gives considerable space to a description of how their powers work in the game.
Each character has an origin which puts ceilings on a character's abilities and superpowers. The origins include:
Altered Humans are normal people who acquire powers, such as Spider-Man or the Fantastic Four.
High-Tech Wonders are normal people whose powers come from devices, such as Iron Man.
Mutants are persons born with superpowers, such as the X-Men.
Robots are created beings, such as the Vision and Ultron.
Aliens are non-humans, including extra-dimensional beings such as Thor and Hercules.
Talents
The game also features a simple skill system referred to as Talents. Talents must be learned and cover areas of knowledge from Archery to Zoology. A Talent raises a character's ability by one rank when attempting actions related to that Talent. The GM is free to determine if a character would be unable to attempt an action without the appropriate Talent (such as a character with no medical background attempting to make a pill that can cure a rare disease).
Resources and Popularity
Characters also have two variable attributes: Resources and Popularity. These attributes use the same terms as the character's seven attributes ("Poor," "Amazing," "Unearthly," etc.). But unlike the seven physical and mental attributes, which change slowly, if at all, Resources and Popularity can change quickly.
Resources represent the character's wealth. Rather than have the player keep track of how much money the character has, the Advanced Game assumes the character has enough money to cover basic living expenses. The Resources ability is used when the character tries to buy something like a new car or house. The game books note that a character's Resources score can change after winning the lottery or having a major business transaction go bad, among other things.
Popularity reflects how much the character is liked or disliked. Popularity can influence non-player characters. A superhero with a high rating, like Captain America (whose Popularity is Unearthly, the highest most characters can achieve), might use his Popularity to gain entrance to a club. If he were to try the same thing as his secret identity Steve Rogers (whose Popularity is only Typical), he would probably be unable to do it. Villains also have a Popularity score, which is usually negative (a bouncer might let Doctor Doom or Magneto into the aforementioned club out of fear). Popularity can change, too.
Character creation
The game is intended to use existing Marvel characters as the heroes. The Basic Set and Advanced Set both contain simple systems for creating original superheroes, based on random ability rolls (as in Dungeons & Dragons). In addition, the Basic Set Campaign Book allows players to create original heroes by describing the desired kind of hero and working together with the GM to assign the appropriate abilities, powers, and talents.

The Ultimate Powers Book, by David Edward Martin, expands and organizes the game's list of powers. Players are given a variety of body types, secret origins, weaknesses, and powers to choose from. The UPB gives a greater range to characters one could create. The book suffers from editing problems and omissions; several errata and partial revisions were released in the pages of TSR's Dragon magazine in issue #122 "The Ultimate Addenda to the Ultimate Powers Book", issue #134 "The Ultimate Addenda's Addenda", issue #150 "Death Effects on Superheroes", and issue #151 "Son of the Ultimate Addenda".
Karma
The game's equivalent of experience points is Karma, a pool of points initially determined by the sum of a character's three mental attributes (Reason, Intuition, and Psyche).
The basic system allows players to increase their chances of success at most tasks by spending points of Karma. For example, a player who wants to make sure he hits a villain in a critical situation can spend however many Karma points are necessary to raise the dice roll to the desired result. The referee distributes additional Karma points at the end of game sessions, typically as rewards for accomplishing heroic goals such as defeating villains, saving innocents, and foiling crimes. Karma can also be lost for unheroic actions such as fleeing from a villain or failing to stop a crime. In fact, in a notable departure from many RPGs, but strongly in keeping with the genre, all Karma is lost if a hero kills someone or allows someone to die.
In the Advanced Game, Karma points can also be spent to permanently increase character attributes and powers.
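The Karma top-up mechanic described above can be sketched as follows; the function name and return shape are invented for illustration, and the rule modeled is simply "raising a roll to the needed result costs the difference in Karma points".

```python
# Hypothetical illustration of Karma spending: topping up a die roll
# to a required target costs the difference in Karma points.
def spend_karma_to_hit(roll: int, needed: int, karma_pool: int):
    """Return (success, remaining_karma) after topping up the roll."""
    cost = max(0, needed - roll)
    if cost > karma_pool:
        return False, karma_pool  # cannot afford the top-up
    return True, karma_pool - cost

# The initial pool is the sum of Reason, Intuition, and Psyche.
karma = 10 + 20 + 30  # example mental attribute scores
ok, karma = spend_karma_to_hit(roll=55, needed=70, karma_pool=karma)
print(ok, karma)  # True 45
```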
Game mechanics
Two primary game mechanics drive the game: column shifts and colored results. Both influence the difficulty of an action.
A column shift is used when a character is trying a hard or easy action. A column shift to the left indicates a penalty, while a shift to the right indicates a bonus.
The column for each ability is divided into four colors: white, green, yellow, and red. A white result is always a failure or unfavorable outcome. In most cases, getting a green result is all that is needed to succeed at a particular action. Yellow and red results usually indicate more favorable results that could knock back, stun, or even kill an opponent. However, the GM can determine that succeeding at a hard task might require a yellow or red result.
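As a sketch of how these two mechanics interact, the following Python fragment models an ability column as a set of color thresholds for a percentile roll, with column shifts moving those thresholds. The threshold numbers and the ten-point shift size are illustrative assumptions, not the actual values from the game's tables.

```python
# Illustrative thresholds for one ability column (hypothetical numbers,
# not the game's real table values): a roll at or above a threshold
# yields that color; anything below "green" is a white failure.
COLUMN = {"green": 45, "yellow": 80, "red": 98}

def shift_column(column, shifts):
    """Apply column shifts: positive shifts (to the right) make every
    color easier to reach; negative shifts (to the left) make them
    harder. Here one shift is modeled as moving thresholds by 10."""
    return {color: max(1, min(100, t - 10 * shifts)) for color, t in column.items()}

def resolve(roll, column):
    """Translate a percentile roll into a color result."""
    if roll >= column["red"]:
        return "red"
    if roll >= column["yellow"]:
        return "yellow"
    if roll >= column["green"]:
        return "green"
    return "white"   # white is always a failure

# A hard task (one column shift left) needs a higher roll for the same color.
hard = shift_column(COLUMN, -1)
print(resolve(50, COLUMN))  # green on the base column
print(resolve(50, hard))    # white after the penalty shift
```

The same comparison also shows why spending Karma to raise a roll matters: a few extra points can push a result across a color boundary.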
Additional rules in the "Campaign Book" of the Basic Set, and the subsequent Advanced Set, use the same game mechanic to resolve non-violent tasks.
Official game supplements
The original Marvel Super Heroes game received extensive support from TSR, covering a variety of Marvel Comics characters and settings, including a Gamer's Handbook of the Marvel Universe patterned after Marvel's Official Handbook of the Marvel Universe. MSH also got its own column in the TSR-published gaming magazine, Dragon, called "The Marvel-phile", which usually spotlighted a character or group of characters that hadn't yet appeared in a published game product.
Reception
In the July–August 1984 edition of Space Gamer (No. 70), Allen Varney wrote that the game was only suited to younger players and Marvel fanatics, saying, "this is a respectable effort, and an excellent introductory game for a devoted Marvel fan aged 10 to 12; older, more experienced, or less devoted buyers will probably be disappointed. 'Nuff said."
Pete Tamlyn reviewed Marvel Super Heroes for Imagine magazine and stated that "this game has been produced in collaboration with Marvel and that opportunity itself is probably worth a new game release. However, Marvel Superheroes is not just another Superhero game. In many ways it is substantially different from other SHrpgs."
In the January–February 1985 edition of Different Worlds (Issue #38), Troy Christensen gave it an average rating of 2.5 stars out of 4, saying, "The Marvel Super Heroes roleplaying game overall is a basic and simple system which I would recommend for beginning and novice players [...] People who enjoy a fast and uncomplicated game and like a system which is conservative and to the point will like this game."
Marcus L. Rowland reviewed Marvel Super Heroes for White Dwarf #62, giving it an overall rating of 8 out of 10, and stated that "All in all, a useful system which is suitable for beginning players and referees, but should still suit experienced gamers."
Seven years later, Varney revisited the game in the August 1991 edition of Dragon (Issue #172), reviewing the new Basic Set edition that had just been released. While Varney appreciated that the game was designed for younger players, he felt that it failed to recreate the excitement of the comics. "This is the gravest flaw of this system and support line: its apathy about recreating the spirit of Marvel stories. In this new Basic Set edition... you couldn’t find a miracle if you used microscopic vision. Look at this set’s few elementary mini-scenarios: all fight scenes. The four-color grandeur and narrative magic in the best Marvel stories are absent. Is this a good introduction to role-playing?" Varney instead suggested Toon by Steve Jackson Games or Ghostbusters by West End Games as better role-playing alternatives for new and beginning young players.
In the 2007 book Hobby Games: The 100 Best, Steve Kenson commented that "it's a testament to the game's longevity that it still has enthusiastic fan support on the Internet and an active play community more than a decade after its last product was published. Even more so that it continues to set a standard by which new superhero roleplaying games are measured. Like modern comic book writers and artists following the greats of the Silver Age, modern RPG designers have a tough act to follow."
Later Marvel RPGs
Before the Marvel license reverted to Marvel Comics, TSR published a different game using its SAGA System game engine, called the Marvel Super Heroes Adventure Game. This version, written by Mike Selinker, was published in the late 1990s as a card-based version of the Marvel role-playing game (though a method of converting characters from the prior format to the SAGA System was included in the core rules). Though critically praised in various reviews at the time, it never reached a large market and has since faded into obscurity.
In 2003, after the gaming license had reverted to Marvel Comics, the Marvel Universe Roleplaying Game was published by Marvel Comics. This edition uses mechanics totally different from any previous versions, using a diceless game mechanic that incorporated a Karma-based resolution system of "stones" (or tokens) to represent character effort. Since its initial publication, a few additional supplements were published by Marvel Comics. However, Marvel stopped supporting the game a little over a year after its initial release, despite going through several printings of the core rulebook.
In August 2011, Margaret Weis Productions acquired the licence to publish an RPG based on Marvel superheroes, and Marvel Heroic Roleplaying was released beginning in 2012. Margaret Weis Productions, however, found that although the game was critically acclaimed, winning two Origins Awards, Marvel Heroic Roleplaying: Civil War "didn't garner the level of sales necessary to sustain the rest of the line", so they brought the game to a close at the end of April 2013.
References
External links
Marvel Comics role-playing games
Marvel Super Heroes (role-playing game)
Role-playing games introduced in 1984
TSR, Inc. games
Measure (mathematics)
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (distance/length, area, volume) and other common notions, such as mass and probability of events. These seemingly distinct concepts have many similarities and can often be treated as mathematically indistinguishable. Measures are foundational in probability theory, integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations of measure are widely used in quantum physics and physics in general.
The intuition behind this concept dates back to Ancient Greece, when Archimedes tried to calculate the area of a circle. But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others.
Definition
Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties:
Non-negativity: For all E in Σ, we have μ(E) ≥ 0.
Null empty set: μ(∅) = 0.
Countable additivity (or σ-additivity): For all countable collections {E_k} of pairwise disjoint sets in Σ, μ(⋃_k E_k) = Σ_k μ(E_k).
If at least one set E has finite measure, then the requirement μ(∅) = 0 is met automatically. Indeed, by countable additivity, μ(E) = μ(E ∪ ∅) = μ(E) + μ(∅),
and therefore μ(∅) = 0.
If the condition of non-negativity is omitted but the second and third of these conditions are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure.
The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets.
A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one – i.e. μ(X) = 1. A probability space is a measure space with a probability measure.
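The axioms above can be checked mechanically on a toy example. The sketch below (purely illustrative) takes X finite, Σ the power set of X, and μ the counting measure, then verifies the null-empty-set and additivity conditions and normalizes μ into a probability measure.

```python
from itertools import combinations

# A toy measurable space: X finite, Sigma = the power set of X,
# mu = counting measure.
X = frozenset({1, 2, 3, 4})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Sigma = powerset(X)   # 2^4 = 16 measurable sets
mu = len              # counting measure: mu(S) = number of elements of S

assert mu(frozenset()) == 0           # null empty set
A, B = frozenset({1, 2}), frozenset({3})
assert mu(A | B) == mu(A) + mu(B)     # additivity on disjoint sets

# Dividing a nonzero finite measure by mu(X) yields a probability measure.
P = lambda S: mu(S) / mu(X)
print(P(X))   # 1.0: total measure one
```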
For measure spaces that are also topological spaces various compatibility conditions can be placed for the measure and the topology. Most measures met in practice in analysis (and in many cases also in probability theory) are Radon measures. Radon measures have an alternative definition in terms of linear functionals on the locally convex space of continuous functions with compact support. This approach is taken by Bourbaki (2004) and a number of other sources. For more details, see the article on Radon measures.
Instances
Some important measures are listed here.
The counting measure is defined by μ(S) = the number of elements in S.
The Lebesgue measure on ℝ is a complete translation-invariant measure on a σ-algebra containing the intervals in ℝ such that μ([0, 1]) = 1; and every other measure with these properties extends the Lebesgue measure.
Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping.
The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties.
The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets.
Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a probability measure. See probability axioms.
The Dirac measure δa (cf. Dirac delta function) is given by δa(S) = χS(a), where χS is the indicator function of S. The measure of a set is 1 if it contains the point a and 0 otherwise.
Other 'named' measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Gaussian measure, Baire measure, Radon measure, Young measure, and Loeb measure.
In physics an example of a measure is spatial distribution of mass (see e.g., gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see "generalizations" below.
Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics.
Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble.
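Two of the measures listed above are simple enough to sketch concretely. The Python fragment below (an illustrative sketch, not part of any measure-theory library) computes the Lebesgue measure of a finite union of intervals by merging overlaps, and implements the Dirac measure as an indicator evaluation.

```python
def lebesgue_measure(intervals):
    """Total length of a finite union of half-open intervals [a, b).
    Overlapping intervals are merged first, so each point is counted
    once -- mirroring additivity on disjoint pieces."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)  # overlap: extend last piece
        else:
            merged.append([a, b])
    return sum(b - a for a, b in merged)

def dirac(a):
    """Dirac measure: dirac(a)(S) = 1 if the set S contains the point a, else 0."""
    return lambda S: 1 if a in S else 0

print(lebesgue_measure([(0, 1)]))          # 1, the normalization mu([0, 1]) = 1
print(lebesgue_measure([(0, 2), (1, 3)]))  # 3: overlaps are not double-counted
print(dirac(0)({0, 1, 2}), dirac(0)({1, 2}))  # 1 0
```

Translation invariance also holds here: shifting every interval by the same amount leaves the computed length unchanged.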
Basic properties
Let μ be a measure.
Monotonicity
If E_1 and E_2 are measurable sets with E_1 ⊆ E_2, then μ(E_1) ≤ μ(E_2).
Measure of countable unions and intersections
Subadditivity
For any countable sequence E_1, E_2, E_3, ... of (not necessarily disjoint) measurable sets in Σ:
μ(E_1 ∪ E_2 ∪ E_3 ∪ ...) ≤ μ(E_1) + μ(E_2) + μ(E_3) + ...
Continuity from below
If E_1, E_2, E_3, ... are measurable sets and E_n ⊆ E_{n+1} for all n, then the union of the sets E_n is measurable, and
μ(⋃_n E_n) = lim_{n→∞} μ(E_n).
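As a numeric illustration of continuity from below, take the nested sets E_n = [0, 1 − 1/n): their union is [0, 1), and the measures 1 − 1/n increase to 1, the measure of the union.

```python
# Continuity from below on the nested intervals E_n = [0, 1 - 1/n),
# whose union is [0, 1) with Lebesgue measure 1.
def mu(n):
    """Lebesgue measure (length) of E_n = [0, 1 - 1/n)."""
    return 1 - 1 / n

print([mu(n) for n in (1, 2, 10, 100)])   # [0.0, 0.5, 0.9, 0.99] -> 1
```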
Continuity from above
If E_1, E_2, E_3, ... are measurable sets and E_{n+1} ⊆ E_n for all n, then the intersection of the sets E_n is measurable; furthermore, if at least one of the E_n has finite measure, then
μ(⋂_n E_n) = lim_{n→∞} μ(E_n).
This property is false without the assumption that at least one of the E_n has finite measure. For instance, for each n ∈ ℕ, let E_n = [n, ∞) ⊂ ℝ, which all have infinite Lebesgue measure, but the intersection is empty.
Other properties
Completeness
A measurable set S is called a null set if μ(S) = 0. A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure μ is called complete if every negligible set is measurable.
A measure μ can be extended to a complete one by considering the σ-algebra of subsets Y which differ by a negligible set from a measurable set X, that is, such that the symmetric difference of X and Y is contained in a null set. One defines μ(Y) to equal μ(X).
μ{x : f(x) ≥ t} = μ{x : f(x) > t} (a.e.)
If the Σ-measurable function f takes values in [0, ∞], then
μ{x : f(x) ≥ t} = μ{x : f(x) > t}
for almost all t with respect to the Lebesgue measure. This property is used in connection with the Lebesgue integral.
Additivity
Measures are required to be countably additive. However, the condition can be strengthened as follows.
For any set I and any set of nonnegative real numbers a_i, i ∈ I, define:
Σ_{i∈I} a_i = sup { Σ_{i∈F} a_i : F a finite subset of I }.
That is, we define the sum of the a_i to be the supremum of all the sums of finitely many of them.
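This supremum definition can be checked on a small finite index set: since every term is nonnegative, the supremum over all finite partial sums is attained by summing everything.

```python
from itertools import combinations

# The sum over an index set, defined as the supremum of finite partial
# sums, checked by brute force on a finite example.
a = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.0625}

def sup_of_finite_sums(a):
    keys = list(a)
    best = 0.0
    for r in range(len(keys) + 1):
        for F in combinations(keys, r):          # F ranges over finite subsets
            best = max(best, sum(a[i] for i in F))
    return best

print(sup_of_finite_sums(a) == sum(a.values()))   # True
```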
A measure μ on Σ is κ-additive if for any λ < κ and any family {X_α : α < λ} of disjoint sets in Σ, the following hold:
⋃_{α<λ} X_α ∈ Σ
μ(⋃_{α<λ} X_α) = Σ_{α<λ} μ(X_α)
Note that the second condition is equivalent to the statement that the ideal of null sets is κ-complete.
Sigma-finite measures
A measure space (X, Σ, μ) is called finite if μ(X) is a finite real number (rather than ∞). Nonzero finite measures are analogous to probability measures in the sense that any finite measure μ is proportional to the probability measure (1/μ(X))μ. A measure μ is called σ-finite if X can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have a σ-finite measure if it is a countable union of sets with finite measure.
For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [k, k+1] for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can be also thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'.
s-finite measures
A measure is said to be s-finite if it is a countable sum of bounded measures. S-finite measures are more general than sigma-finite ones and have applications in the theory of stochastic processes.
Non-measurable sets
If the axiom of choice is assumed to be true, it can be proved that not all subsets of Euclidean space are Lebesgue measurable; examples of such sets include the Vitali set, and the non-measurable sets postulated by the Hausdorff paradox and the Banach–Tarski paradox.
Generalizations
For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in the (signed) real numbers is called a signed measure, while such a function with values in the complex numbers is called a complex measure. Observe, however, that a complex measure is necessarily of finite variation; hence complex measures include finite signed measures but not, for example, the Lebesgue measure.
Measures that take values in Banach spaces have been studied extensively. A measure that takes values in the set of self-adjoint projections on a Hilbert space is called a projection-valued measure; these are used in functional analysis for the spectral theorem. When it is necessary to distinguish the usual measures which take non-negative values from generalizations, the term positive measure is used. Positive measures are closed under conical combination but not general linear combination, while signed measures are the linear closure of positive measures.
Another generalization is the finitely additive measure, also known as a content. This is the same as a measure except that instead of requiring countable additivity we require only finite additivity. Historically, this definition was used first. It turns out that in general, finitely additive measures are connected with notions such as Banach limits, the dual of L∞ and the Stone–Čech compactification. All these are linked in one way or another to the axiom of choice. Contents remain useful in certain technical problems in geometric measure theory; this is the theory of Banach measures.
A charge is a generalization in both directions: it is a finitely additive, signed measure.
See also
Abelian von Neumann algebra
Almost everywhere
Carathéodory's extension theorem
Content (measure theory)
Fubini's theorem
Fatou's lemma
Fuzzy measure theory
Geometric measure theory
Hausdorff measure
Inner measure
Lebesgue integration
Lebesgue measure
Lorentz space
Lifting theory
Measurable cardinal
Measurable function
Minkowski content
Outer measure
Product measure
Pushforward measure
Regular measure
Vector measure
Valuation (measure theory)
Volume form
References
Bibliography
Robert G. Bartle (1995) The Elements of Integration and Lebesgue Measure, Wiley Interscience.
Chapter III.
R. M. Dudley, 2002. Real Analysis and Probability. Cambridge University Press.
Second edition.
Federer, Herbert. Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153 Springer-Verlag New York Inc., New York 1969 xiv+676 pp.
D. H. Fremlin, 2000. Measure Theory. Torres Fremlin.
R. Duncan Luce and Louis Narens (1987). "measurement, theory of," The New Palgrave: A Dictionary of Economics, v. 3, pp. 428–32.
M. E. Munroe, 1953. Introduction to Measure and Integration. Addison Wesley.
Shilov, G. E., and Gurevich, B. L., 1978. Integral, Measure, and Derivative: A Unified Approach, Richard A. Silverman, trans. Dover Publications. . Emphasizes the Daniell integral.
External links
Tutorial: Measure Theory for Dummies
Motorcycle
A motorcycle, often called a motorbike, bike, cycle, or (if three-wheeled) trike, is a two- or three-wheeled motor vehicle. Motorcycle design varies greatly to suit a range of different purposes: long-distance travel, commuting, cruising, sport (including racing), and off-road riding. Motorcycling is riding a motorcycle and being involved in other related social activity such as joining a motorcycle club and attending motorcycle rallies.
The 1885 Daimler Reitwagen made by Gottlieb Daimler and Wilhelm Maybach in Germany was the first internal combustion, petroleum-fueled motorcycle. In 1894, Hildebrand & Wolfmüller became the first series production motorcycle.
In 2014, the three top motorcycle producers globally by volume were Honda (28%), Yamaha (17%) (both from Japan), and Hero MotoCorp (India). In developing countries, motorcycles are considered utilitarian due to lower prices and greater fuel economy. Of all the motorcycles in the world, 58% are in the Asia-Pacific and Southern and Eastern Asia regions, excluding car-centric Japan.
According to the US Department of Transportation, the number of fatalities per vehicle mile traveled was 37 times higher for motorcycles than for cars.
Types
The term motorcycle has different legal definitions depending on jurisdiction (see ).
There are three major types of motorcycle: street, off-road, and dual purpose. Within these types, there are many sub-types of motorcycles for different purposes. There is often a racing counterpart to each type, such as road racing and street bikes, or motocross including dirt bikes.
Street bikes include cruisers, sportbikes, scooters and mopeds, and many other types. Off-road motorcycles include many types designed for dirt-oriented racing classes such as motocross and are not street legal in most areas. Dual purpose machines like the dual-sport style are made to go off-road but include features to make them legal and comfortable on the street as well.
Each configuration offers either specialised advantage or broad capability, and each design creates a different riding posture.
In some countries the use of pillions (rear seats) is restricted.
History
Experimentation and invention
The first internal combustion, petroleum fueled motorcycle was the Daimler Reitwagen. It was designed and built by the German inventors Gottlieb Daimler and Wilhelm Maybach in Bad Cannstatt, Germany, in 1885. This vehicle was unlike either the safety bicycles or the boneshaker bicycles of the era in that it had zero degrees of steering axis angle and no fork offset, and thus did not use the principles of bicycle and motorcycle dynamics developed nearly 70 years earlier. Instead, it relied on two outrigger wheels to remain upright while turning.
The inventors called their invention the Reitwagen ("riding car"). It was designed as an expedient testbed for their new engine, rather than a true prototype vehicle.
The first commercial design for a self-propelled cycle was a three-wheel design called the Butler Petrol Cycle, conceived by Edward Butler in England in 1884. He exhibited his plans for the vehicle at the Stanley Cycle Show in London in 1884. The vehicle was built by the Merryweather Fire Engine company in Greenwich, in 1888.
The Butler Petrol Cycle was a three-wheeled vehicle, with the rear wheel directly driven by a , displacement, bore × stroke, flat twin four-stroke engine (with magneto ignition replaced by coil and battery) equipped with rotary valves and a float-fed carburettor (five years before Maybach) and Ackermann steering, all of which were state of the art at the time. Starting was by compressed air. The engine was liquid-cooled, with a radiator over the rear driving wheel. Speed was controlled by means of a throttle valve lever. No braking system was fitted; the vehicle was stopped by raising and lowering the rear driving wheel using a foot-operated lever; the weight of the machine was then borne by two small castor wheels. The driver was seated between the front wheels. It wasn't, however, a success, as Butler failed to find sufficient financial backing.
Many authorities have excluded steam powered, electric motorcycles or diesel-powered two-wheelers from the definition of a 'motorcycle', and credit the Daimler Reitwagen as the world's first motorcycle. Given the rapid rise in use of electric motorcycles worldwide, defining only internal-combustion powered two-wheelers as 'motorcycles' is increasingly problematic. The first (petroleum fueled) internal-combustion motorcycles, like the German Reitwagen, were, however, also the first practical motorcycles.
If a two-wheeled vehicle with steam propulsion is considered a motorcycle, then the first motorcycles built seem to be the French Michaux-Perreaux steam velocipede, whose patent application was filed in December 1868, constructed around the same time as the American Roper steam velocipede, built by Sylvester H. Roper of Roxbury, Massachusetts. Roper, who demonstrated his machine at fairs and circuses in the eastern U.S. in 1867, built about 10 steam cars and cycles from the 1860s until his death in 1896.
Summary of early inventions
First motorcycle companies
In 1894, Hildebrand & Wolfmüller became the first series production motorcycle, and the first to be called a motorcycle. Excelsior Motor Company, originally a bicycle manufacturing company based in Coventry, England, began production of their first motorcycle model in 1896.
The first production motorcycle in the US was the Orient-Aster, built by Charles Metz in 1898 at his factory in Waltham, Massachusetts.
In the early period of motorcycle history, many producers of bicycles adapted their designs to accommodate the new internal combustion engine. As the engines became more powerful and designs outgrew the bicycle origins, the number of motorcycle producers increased. Many of the nineteenth-century inventors who worked on early motorcycles often moved on to other inventions. Daimler and Roper, for example, both went on to develop automobiles.
At the end of the 19th century the first major mass-production firms were set up. In 1898, Triumph Motorcycles in England began producing motorbikes, and by 1903 it was producing over 500 bikes. Other British firms were Royal Enfield, Norton, Douglas Motorcycles and Birmingham Small Arms Company who began motorbike production in 1899, 1902, 1907 and 1910, respectively. Indian began production in 1901 and Harley-Davidson was established two years later. By the outbreak of World War I, the largest motorcycle manufacturer in the world was Indian,
producing over 20,000 bikes per year.
First World War
During the First World War, motorbike production was greatly ramped up for the war effort to supply effective communications with front line troops. Messengers on horses were replaced with despatch riders on motorcycles carrying messages, performing reconnaissance and acting as military police. The American company Harley-Davidson was devoting over 50% of its factory output to military contracts by the end of the war. The British company Triumph Motorcycles sold more than 30,000 of its Triumph Type H model to allied forces during the war. With the rear wheel driven by a belt, the Model H was fitted with a 550 cc air-cooled four-stroke single-cylinder engine. It was also the first Triumph without pedals.
The Model H in particular is regarded by many as having been the first "modern motorcycle". Introduced in 1915, it had a 550 cc side-valve four-stroke engine with a three-speed gearbox and belt transmission. It was so popular with its users that it was nicknamed the "Trusty Triumph".
Postwar
By 1920, Harley-Davidson was the largest manufacturer, with their motorcycles being sold by dealers in 67 countries.
Amongst the many British motorcycle manufacturers, Chater-Lea, with its twin-cylinder models followed by its large singles in the 1920s, stood out. Initially using a converted Woodmann-designed ohv Blackburne engine, it became the first 350 cc to exceed 100 mph (160 km/h), recording 100.81 mph (162.24 km/h) over the flying kilometre during April 1924.[7] Later, Chater-Lea set a world record for the flying kilometre for 350 cc and 500 cc motorcycles at 102.9 mph (165.6 km/h). Chater-Lea produced variants of these world-beating sports models and became popular among racers at the Isle of Man TT. Today, the firm is probably best remembered for its long-term contract to manufacture and supply AA Patrol motorcycles and sidecars.
By the late 1920s or early 1930s, DKW in Germany took over as the largest manufacturer.
In the 1950s, streamlining began to play an increasing part in the development of racing motorcycles and the "dustbin fairing" held out the possibility of radical changes to motorcycle design. NSU and Moto Guzzi were in the vanguard of this development, both producing very radical designs well ahead of their time.
NSU produced the most advanced design, but after the deaths of four NSU riders in the 1954–1956 seasons, they abandoned further development and quit Grand Prix motorcycle racing.
Moto Guzzi produced competitive race machines, and until the end of 1957 had a succession of victories. The following year, 1958, full enclosure fairings were banned from racing by the FIM in the light of the safety concerns.
From the 1960s through the 1990s, small two-stroke motorcycles were popular worldwide, partly as a result of the engine work done by Walter Kaaden at the East German manufacturer MZ in the 1950s.
Today
In the 21st century, the motorcycle industry is mainly dominated by Indian and Japanese motorcycle companies. In addition to the large capacity motorcycles, there is a large market in smaller capacity (less than 300 cc) motorcycles, mostly concentrated in Asian and African countries and produced in China and India. A Japanese example is the 1958 Honda Super Cub, which went on to become the biggest selling vehicle of all time, with its 60 millionth unit produced in April 2008.
Today, this area is dominated by mostly Indian companies with Hero MotoCorp emerging as the world's largest manufacturer of two wheelers. Its Splendor model has sold more than 8.5 million to date. Other major producers are Bajaj and TVS Motors.
Technical aspects
Construction
Motorcycle construction is the engineering, manufacturing, and assembly of components and systems for a motorcycle which results in the performance, cost, and aesthetics desired by the designer. With some exceptions, construction of modern mass-produced motorcycles has standardised on a steel or aluminium frame, telescopic forks holding the front wheel, and disc brakes. Some other body parts, designed for either aesthetic or performance reasons, may be added. A petrol-powered engine typically consisting of between one and four cylinders (and less commonly, up to eight cylinders) coupled to a manual five- or six-speed sequential transmission drives the swingarm-mounted rear wheel by a chain, driveshaft, or belt. Repairs and maintenance can be carried out with the machine raised on a motorcycle lift.
Fuel economy
Motorcycle fuel economy varies greatly with engine displacement and riding style. A streamlined, fully faired Matzu Matsuzawa Honda XL125 achieved in the Craig Vetter Fuel Economy Challenge "on real highways in real conditions".
Due to low engine displacements and high power-to-mass ratios, motorcycles offer good fuel economy. Under conditions of fuel scarcity, as in 1950s Britain and in modern developing nations, motorcycles claim large shares of the vehicle market. In the United States, the average motorcycle fuel economy is 44 miles per US gallon (19 km per liter).
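The quoted figure can be verified with a quick unit conversion (using 1 mile = 1.609344 km and 1 US gallon = 3.785411784 L):

```python
# Unit check for the figure quoted above: 44 miles per US gallon in km per liter.
KM_PER_MILE = 1.609344
LITERS_PER_US_GALLON = 3.785411784

def mpg_to_km_per_l(mpg):
    return mpg * KM_PER_MILE / LITERS_PER_US_GALLON

print(round(mpg_to_km_per_l(44), 1))   # 18.7, i.e. about 19 km per liter
```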
Electric motorcycles
Very high fuel economy equivalents are often derived by electric motorcycles. Electric motorcycles are nearly silent, zero-emission electric motor-driven vehicles. Operating range and top speed are limited by battery technology. Fuel cells and petroleum-electric hybrids are also under development to extend the range and improve performance of the electric drive system.
Reliability
A 2013 survey of 4,424 readers of the US Consumer Reports magazine collected reliability data on 4,680 motorcycles purchased new from 2009 to 2012. The most common problem areas were accessories, brakes, electrical (including starters, charging, ignition), and fuel systems, and the types of motorcycles with the greatest problems were touring, off-road/dual sport, sport-touring, and cruisers. There were not enough sport bikes in the survey for a statistically significant conclusion, though the data hinted at reliability as good as cruisers. These results may be partially explained by accessories including such equipment as fairings, luggage, and auxiliary lighting, which are frequently added to touring, adventure touring/dual sport and sport touring bikes. Trouble with fuel systems is often the result of improper winter storage, and brake problems may also be due to poor maintenance. Of the five brands with enough data to draw conclusions, Honda, Kawasaki and Yamaha were statistically tied, with 11 to 14% of those bikes in the survey experiencing major repairs. Harley-Davidsons had a rate of 24%, while BMWs did worse, with 30% of those needing major repairs. There were not enough Triumph and Suzuki motorcycles surveyed for a statistically sound conclusion, though it appeared Suzukis were as reliable as the other three Japanese brands while Triumphs were comparable to Harley-Davidson and BMW. Three-fourths of the repairs in the survey cost less than US$200 and two-thirds of the motorcycles were repaired in less than two days. In spite of their relatively worse reliability in this survey, Harley-Davidson and BMW owners showed the greatest owner satisfaction, and three-fourths of them said they would buy the same bike again, followed by 72% of Honda owners and 60 to 63% of Kawasaki and Yamaha owners.
Dynamics
Two-wheeled motorcycles stay upright while rolling partly because of the gyroscopic effect of the spinning wheels: angular momentum points along the axle and resists changes in its direction. Steering geometry, particularly trail, also contributes to this self-stability.
Different types of motorcycles have different dynamics and these play a role in how a motorcycle performs in given conditions. For example, one with a longer wheelbase provides the feeling of more stability by responding less to disturbances. Motorcycle tyres have a large influence over handling.
Motorcycles must be leaned in order to make turns. This lean is induced by the method known as countersteering, in which the rider momentarily steers the handlebars in the direction opposite of the desired turn. This practice is counterintuitive and therefore often confusing to novices and even many experienced motorcyclists.
With such a short wheelbase, motorcycles can generate enough torque at the rear wheel, and enough stopping force at the front wheel, to lift the opposite wheel off the road. These actions, if performed on purpose, are known as wheelies and stoppies (or endos) respectively.
Accessories
Various features and accessories may be attached to a motorcycle either as OEM (factory-fitted) or aftermarket. Such accessories are selected by the owner to enhance the motorcycle's appearance, safety, performance, or comfort, and may include anything from mobile electronics to sidecars and trailers.
Records
The world record for the longest motorcycle jump was set in 2008 by Robbie Maddison.
Since late 2010, the Ack Attack team has held the motorcycle land-speed record at 376.36 mph (605.69 km/h).
Safety
Motorcycles have a higher rate of fatal accidents than automobiles, trucks, and buses. United States Department of Transportation data for 2005 from the Fatality Analysis Reporting System show that for passenger cars, 18.62 fatal crashes occur per 100,000 registered vehicles. For motorcycles this figure is higher at 75.19 per 100,000 registered vehicles, about four times the rate for cars.
The same data show that 1.56 fatalities occur per 100 million vehicle miles travelled for passenger cars, whereas for motorcycles the figure is 43.47, roughly 28 times higher than for cars (37 times more deaths per mile travelled in 2007).
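The ratios quoted above can be checked directly from the cited 2005 figures; the constants below simply restate the numbers given in the text:

```python
# Rough check of the fatality-rate ratios quoted above, using the 2005
# FARS figures cited in the text (values taken from this article, not
# recomputed from the raw dataset).

CAR_PER_100K_VEHICLES = 18.62
MOTO_PER_100K_VEHICLES = 75.19
CAR_PER_100M_MILES = 1.56
MOTO_PER_100M_MILES = 43.47

per_vehicle_ratio = MOTO_PER_100K_VEHICLES / CAR_PER_100K_VEHICLES
per_mile_ratio = MOTO_PER_100M_MILES / CAR_PER_100M_MILES

print(f"per registered vehicle: {per_vehicle_ratio:.1f}x")  # ≈ 4.0x
print(f"per mile travelled:     {per_mile_ratio:.1f}x")     # ≈ 27.9x
```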
Furthermore, for motorcycles the accident rates have increased significantly since the end of the 1990s, while the rates have dropped for passenger cars.
The most common configuration of motorcycle accidents in the United States is when a motorist pulls out or turns in front of a motorcyclist, violating their right-of-way. This is sometimes called a SMIDSY, an acronym formed from the motorists' common response of "Sorry mate, I didn't see you".
Motorcyclists can anticipate and avoid some of these crashes with proper training, increasing their visibility to other traffic, keeping to the speed limits, and not consuming alcohol or other drugs before riding.
The United Kingdom has several organisations dedicated to improving motorcycle safety by providing advanced rider training beyond what is necessary to pass the basic motorcycle licence test. These include the Institute of Advanced Motorists (IAM) and the Royal Society for the Prevention of Accidents (RoSPA). Along with increased personal safety, riders with these advanced qualifications may benefit from reduced insurance costs.
In South Africa, the Think Bike campaign is dedicated to increasing both motorcycle safety and the awareness of motorcycles on the country's roads. The campaign, while strongest in the Gauteng province, has representation in Western Cape, KwaZulu Natal and the Free State. It has dozens of trained marshals available for various events such as cycle races and is deeply involved in numerous other projects such as the annual Motorcycle Toy Run.
Motorcycle safety education is offered throughout the United States by organisations ranging from state agencies to non-profit organisations to corporations. Most states use the courses designed by the Motorcycle Safety Foundation (MSF), while Oregon and Idaho developed their own. All of the training programs include a Basic Rider Course, an Intermediate Rider Course and an Advanced Rider Course.
In Ireland since 2010, in the UK, and in some Australian jurisdictions, such as Victoria, New South Wales, the Australian Capital Territory, Tasmania and the Northern Territory, it is compulsory to complete a basic rider training course before a learner licence is issued, after which the new rider may ride on public roads.
In Canada, motorcycle rider training is compulsory in Quebec and Manitoba only, but all provinces and territories have graduated licence programs which place restrictions on new drivers until they have gained experience. Eligibility for a full motorcycle licence or endorsement for completing a Motorcycle Safety course varies by province. Without the Motorcycle Safety Course the chance of getting insurance for the motorcycle is very low. The Canada Safety Council, a non-profit safety organisation, offers the Gearing Up program across Canada and is endorsed by the Motorcycle and Moped Industry Council. Training course graduates may qualify for reduced insurance premiums.
Motorcycle rider postures
The motorcyclist's riding position depends on rider body-geometry (anthropometry) combined with the geometry of the motorcycle itself. These factors create a set of three basic postures.
Sport: the rider leans forward into the wind and the weight of the upper torso is supported by the rider's core at low speed and by air pressure at high speed. The footpegs are below the rider or to the rear. The reduced frontal area cuts wind resistance and allows higher speeds. At low speed in this position the rider's arms may bear some of the weight of the rider's torso, which can be problematic.
Standard: the rider sits upright or leans forward slightly. The feet are below the rider. These are motorcycles that are not specialised to one task, so they do not excel in any particular area. The standard posture is used with touring and commuting as well as dirt and dual-sport bikes, and may offer advantages for beginners.
Cruiser: the rider sits at a lower seat height with the upper torso upright or leaning slightly rearward. Legs are extended forwards onto cruiser pegs, sometimes out of reach of the regular controls. The low seat height can be a consideration for new or short riders. Handlebars tend to be high and wide. The emphasis is on comfort at the expense of cornering ability, because low ground clearance makes it more likely that foot pegs, floor boards, or other parts will scrape if turns are taken at the speeds other motorcycles can more readily accomplish.
Factors of a motorcycle's ergonomic geometry that determine the seating posture include the height, angle and location of footpegs, seat and handlebars. Factors in a rider's physical geometry that contribute to seating posture include torso, arm, thigh and leg length, and overall rider height.
Legal definitions and restrictions
A motorcycle is broadly defined by law in most countries, for the purposes of registration, taxation and rider licensing, as a powered two-wheel motor vehicle. Most countries distinguish between mopeds of up to 49 cc and the more powerful, larger vehicles (scooters do not count as a separate category). Many jurisdictions include some forms of three-wheeled cars as motorcycles.
In Nigeria, motorcycles, popularly referred to as okada, have been the subject of many controversies regarding safety and security, followed by restrictions on their movement in many states. Recently they were banned in Lagos, Nigeria's most populous city.
Environmental impact
The low fuel consumption of motorcycles and scooters has attracted interest in the United States from environmentalists and those affected by increased fuel prices.
Piaggio Group Americas supported this interest with the launch of a "Vespanomics" website and platform, claiming per-mile carbon emissions 0.4 lb/mile (113 g/km) lower than the average car's, a 65% reduction, as well as better fuel economy.
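As a quick unit check of the figure above (the conversion factors are standard constants, not taken from the Piaggio source):

```python
# Converting the quoted 0.4 lb/mile difference into grams per kilometre
# to confirm it matches the parenthetical 113 g/km.

GRAMS_PER_POUND = 453.592
KM_PER_MILE = 1.60934

lb_per_mile = 0.4
g_per_km = lb_per_mile * GRAMS_PER_POUND / KM_PER_MILE
print(round(g_per_km))  # 113
```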
However, a motorcycle's exhaust emissions may contain 10–20 times more oxides of nitrogen (NOx), carbon monoxide, and unburned hydrocarbons than exhaust from a similar-year passenger car or SUV.
This is because many motorcycles lack a catalytic converter, and the emission standard is much more permissive for motorcycles than for other vehicles. While catalytic converters have been installed in most gasoline-powered cars and trucks since 1975 in the United States, they can present fitment and heat difficulties in motorcycle applications.
According to United States Environmental Protection Agency 2007 certification result reports comparing all vehicles with on-highway motorcycles (a category that also includes scooters), the average certified emissions level for 12,327 vehicles tested was 0.734. The average "NOx + CO end-of-useful-life" emissions for 3,863 motorcycles tested was 0.8531. 54% of the tested 2007-model motorcycles were equipped with a catalytic converter.
United States emissions limits
The following table shows maximum acceptable legal emissions of the combination of hydrocarbons, oxides of nitrogen, and carbon monoxide for new motorcycles sold in the United States with 280 cc or greater piston displacement.
The maximum acceptable legal emissions of hydrocarbon and carbon monoxide for new Class I and II motorcycles (50 cc–169 cc and 170 cc–279 cc respectively) sold in the United States are as follows:
Europe
European emission standards for motorcycles are similar to those for cars. New motorcycles must meet Euro 5 standards, while cars must meet Euro 6d-TEMP standards. Motorcycle emission controls continue to be updated, and an update to Euro 5+ has been proposed for 2024.
See also
Bicycle and motorcycle geometry
List of motorcycle manufacturers
List of motor scooter manufacturers and brands
Motorcycle industry in China
Streamlined motorcycle
Citations
General references
External links
Motorcycling
Wheeled vehicles
Map
A map is a symbolic depiction emphasizing relationships between elements of some space, such as objects, regions, or themes.
Many maps are static, fixed to paper or some other durable medium, while others are dynamic or interactive. Although most commonly used to depict geography, maps may represent any space, real or fictional, without regard to context or scale, such as in brain mapping, DNA mapping, or computer network topology mapping. The space being mapped may be two dimensional, such as the surface of the earth, three dimensional, such as the interior of the earth, or even more abstract spaces of any dimension, such as arise in modeling phenomena having many independent variables.
Although the earliest maps known are of the heavens, geographic maps of territory have a very long tradition and exist from ancient times. The word "map" comes from the medieval Latin Mappa mundi, wherein mappa meant napkin or cloth and mundi the world. Thus, "map" became a shortened term referring to a two-dimensional representation of the surface of the world.
History
Geography
Cartography or map-making is the study and practice of crafting representations of the Earth upon a flat surface (see History of cartography), and one who makes maps is called a cartographer.
Road maps are perhaps the most widely used maps today, and form a subset of navigational maps, which also include aeronautical and nautical charts, railroad network maps, and hiking and bicycling maps. In terms of quantity, the largest number of drawn map sheets is probably made up by local surveys, carried out by municipalities, utilities, tax assessors, emergency services providers, and other local agencies. Many national surveying projects have been carried out by the military, such as the British Ordnance Survey: a civilian government agency, internationally renowned for its comprehensively detailed work.
In addition to location information, maps may also be used to portray contour lines indicating constant values of elevation, temperature, rainfall, etc.
Orientation
The orientation of a map is the relationship between the directions on the map and the corresponding compass directions in reality. The word "orient" is derived from Latin oriens, meaning east. In the Middle Ages many maps, including the T and O maps, were drawn with east at the top (meaning that the direction "up" on the map corresponds to east on the compass). The most common cartographic convention is that north is at the top of a map.
Maps not oriented with north at the top:
Maps from non-Western traditions are oriented in a variety of ways. Old maps of Edo show the Japanese imperial palace as the "top", but also at the center, of the map. Labels on the map are oriented in such a way that they cannot be read properly unless the map is held with the imperial palace at the top.
Medieval European T and O maps such as the Hereford Mappa Mundi were centered on Jerusalem with East at the top. Indeed, before the reintroduction of Ptolemy's Geography to Europe around 1400, there was no single convention in the West. Portolan charts, for example, are oriented to the shores they describe.
Maps of cities bordering a sea are often conventionally oriented with the sea at the top.
Route and channel maps have traditionally been oriented to the road or waterway they describe.
Polar maps of the Arctic or Antarctic regions are conventionally centered on the pole; the direction North would be toward or away from the center of the map, respectively. Typical maps of the Arctic have 0° meridian toward the bottom of the page; maps of the Antarctic have the 0° meridian toward the top of the page.
Reversed maps, also known as upside-down maps or south-up maps, reverse the north-is-up convention and have south at the top. Ancient African cultures, including that of Ancient Egypt, used this orientation, as do some maps in Brazil today.
Buckminster Fuller's Dymaxion maps are based on a projection of the Earth's sphere onto an icosahedron. The resulting triangular pieces may be arranged in any order or orientation.
Scale and accuracy
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. The scale statement can be accurate when the region mapped is small enough for the curvature of the Earth to be neglected, such as a city map. Mapping larger regions, where the curvature cannot be ignored, requires projections to map from the curved surface of the Earth to the plane. The impossibility of flattening the sphere to the plane without distortion means that the map cannot have a constant scale. Rather, on most projections, the best that can be attained is an accurate scale along one or two paths on the projection. Because scale differs everywhere, it can only be measured meaningfully as point scale per location. Most maps strive to keep point scale variation within narrow bounds. Although the scale statement is nominal it is usually accurate enough for most purposes unless the map covers a large fraction of the earth. At the scope of a world map, scale as a single number is practically meaningless throughout most of the map. Instead, it usually refers to the scale along the equator.
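A representative-fraction scale such as 1:10,000 converts map measurements to ground distances by simple multiplication. A minimal sketch (the function name and units are illustrative choices, not from the article):

```python
# On a 1:SCALE map, 1 unit on the map corresponds to SCALE units on the
# ground. Here map distances are measured in centimetres and converted
# to metres on the ground.

def ground_distance_m(map_distance_cm: float, scale_denominator: int) -> float:
    """Convert a distance measured on the map (cm) to metres on the ground."""
    return map_distance_cm * scale_denominator / 100.0  # cm -> m

print(ground_distance_m(5.0, 10_000))  # 5 cm on a 1:10,000 map = 500.0 m
```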
Some maps, called cartograms, have the scale deliberately distorted to reflect information other than land area or distance. For example, this map (at the right) of Europe has been distorted to show population distribution, while the rough shape of the continent is still discernible.
Another example of distorted scale is the famous London Underground map. The basic geographical structure is respected but the tube lines (and the River Thames) are smoothed to clarify the relationships between stations. Near the center of the map, stations are spaced out more than near the edges of the map.
Further inaccuracies may be deliberate. For example, cartographers may simply omit military installations or remove features solely to enhance the clarity of the map. For example, a road map may not show railroads, smaller waterways, or other prominent non-road objects, and even if it does, it may show them less clearly (e.g. dashed or dotted lines/outlines) than the main roads. Known as decluttering, the practice makes the subject matter that the user is interested in easier to read, usually without sacrificing overall accuracy. Software-based maps often allow the user to toggle decluttering between ON, OFF, and AUTO as needed. In AUTO the degree of decluttering is adjusted as the user changes the scale being displayed.
Projection
Geographic maps use a projection to translate the three-dimensional real surface of the geoid to a two-dimensional picture. Projection always distorts the surface. There are many ways to apportion the distortion, and so there are many map projections. Which projection to use depends on the purpose of the map.
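As an illustration of how a projection maps the curved surface to the plane, here is a minimal sketch of the spherical Mercator formulas. This is only one of many projections; real cartographic software uses ellipsoidal models, and the radius value is an approximation:

```python
import math

# Spherical Mercator: one classic conformal projection from
# latitude/longitude (degrees) to plane coordinates (metres).

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, an approximation

def mercator(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = EARTH_RADIUS_M * lam
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# The equator maps to y ≈ 0; distortion grows toward the poles,
# which is why scale on a Mercator world map is only nominal.
x, y = mercator(0.0, 0.0)
print(round(x), round(y))  # 0 0
```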
Symbology
The various features shown on a map are represented by conventional signs or symbols. For example, colors can be used to indicate a classification of roads. Those signs are usually explained in the margin of the map, or on a separately published characteristic sheet.
Some cartographers prefer to make the map cover practically the entire screen or sheet of paper, leaving no room "outside" the map for information about the map as a whole.
These cartographers typically place such information in an otherwise "blank" region "inside" the map: cartouche, map legend, title, compass rose, bar scale, etc.
In particular, some maps contain smaller "sub-maps" in otherwise blank regions—often one at a much smaller scale showing the whole globe and where the whole map fits on that globe, and a few showing "regions of interest" at a larger scale to show details that wouldn't otherwise fit.
Occasionally sub-maps use the same scale as the large map—a few maps of the contiguous United States include a sub-map to the same scale for each of the two non-contiguous states.
Design
The design and production of maps is a craft that has developed over thousands of years, from clay tablets to Geographic information systems. As a form of Design, particularly closely related to Graphic design, map making incorporates scientific knowledge about how maps are used, integrated with principles of artistic expression, to create an aesthetically attractive product that carries an aura of authority and functionally serves a particular purpose for an intended audience.
Designing a map involves bringing together a number of elements and making a large number of decisions. The elements of design fall into several broad topics, each of which has its own theory, its own research agenda, and its own best practices. That said, there are synergistic effects between these elements, meaning that the overall design process is not just working on each element one at a time, but an iterative feedback process of adjusting each to achieve the desired gestalt.
Map projections: The foundation of the map is the plane on which it rests (whether paper or screen), but projections are required to flatten the surface of the earth. All projections distort this surface, but the cartographer can be strategic about how and where distortion occurs.
Generalization: All maps must be drawn at a smaller scale than reality, requiring that the information included on a map be a very small sample of the wealth of information about a place. Generalization is the process of adjusting the level of detail in geographic information to be appropriate for the scale and purpose of a map, through procedures such as selection, simplification, and classification.
Symbology: Any map visually represents the location and properties of geographic phenomena using map symbols, graphical depictions composed of several visual variables, such as size, shape, color, and pattern.
Composition: As all of the symbols are brought together, their interactions have major effects on map reading, such as grouping and Visual hierarchy.
Typography or Labeling: Text serves a number of purposes on the map, especially aiding the recognition of features, but labels must be designed and positioned well to be effective.
Layout: The map image must be placed on the page (whether paper, web, or other media), along with related elements, such as the title, legend, additional maps, text, images, and so on. Each of these elements have their own design considerations, as does their integration, which largely follows the principles of Graphic design.
Map type-specific design: Different kinds of maps, especially thematic maps, have their own design needs and best practices.
Types
Maps of the world or large areas are often either 'political' or 'physical'. The most important purpose of the political map is to show territorial borders; the purpose of the physical is to show features of geography such as mountains, soil type, or land use including infrastructures such as roads, railroads, and buildings. Topographic maps show elevations and relief with contour lines or shading. Geological maps show not only the physical surface, but characteristics of the underlying rock, fault lines, and subsurface structures.
Electronic
From the last quarter of the 20th century, the indispensable tool of the cartographer has been the computer. Much of cartography, especially at the data-gathering survey level, has been subsumed by Geographic Information Systems (GIS). The functionality of maps has been greatly advanced by technology simplifying the superimposition of spatially located variables onto existing geographical maps. Having local information such as rainfall level, distribution of wildlife, or demographic data integrated within the map allows more efficient analysis and better decision making. In the pre-electronic age, such superimposition of data led Dr. John Snow to identify the location of an outbreak of cholera. Today it is used by agencies as diverse as wildlife conservationists and militaries around the world.
Even when GIS is not involved, most cartographers now use a variety of computer graphics programs to generate new maps.
Interactive, computerized maps are commercially available, allowing users to zoom in or zoom out (respectively meaning to increase or decrease the scale), sometimes by replacing one map with another of different scale, centered where possible on the same point. In-car global navigation satellite systems are computerized maps with route planning and advice facilities that monitor the user's position with the help of satellites. From the computer scientist's point of view, zooming in entails one or a combination of:
replacing the map by a more detailed one
enlarging the same map without enlarging the pixels, hence showing more detail by removing less information compared to the less detailed version
enlarging the same map with the pixels enlarged (replaced by rectangles of pixels); no additional detail is shown, but, depending on the quality of one's vision, more detail may become discernible. If a computer display does not show adjacent pixels as truly separate, but overlapping instead (this does not apply to an LCD, but may apply to a cathode ray tube), then replacing a pixel by a rectangle of pixels does show more detail. A variation of this method is interpolation.
For example:
Typically (2) applies to a Portable Document Format (PDF) file or other format based on vector graphics. The increase in detail is limited to the information contained in the file: enlargement of a curve may eventually result in a series of standard geometric figures such as straight lines, arcs of circles, or splines.
(2) may apply to text and (3) to the outline of a map feature such as a forest or building.
(1) may apply to the text as needed (displaying labels for more features), while (2) applies to the rest of the image. Text is not necessarily enlarged when zooming in. Similarly, a road represented by a double line may or may not become wider when one zooms in.
The map may also have layers that are partly raster graphics and partly vector graphics. For a single raster graphics image (2) applies until the pixels in the image file correspond to the pixels of the display, thereafter (3) applies.
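Method (3) above, enlarging each pixel into a rectangle of identical pixels, can be sketched as nearest-neighbour upscaling. This is a toy illustration, not code from any mapping software:

```python
# Enlarge a raster by replacing each pixel with an f-by-f block of
# identical pixels. No new detail is created, matching the text's
# description of zooming method (3).

def enlarge(raster: list[list[int]], f: int) -> list[list[int]]:
    return [
        [pixel for pixel in row for _ in range(f)]  # repeat each pixel f times
        for row in raster
        for _ in range(f)                           # repeat each row f times
    ]

tiny = [[1, 2],
        [3, 4]]
print(enlarge(tiny, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```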
Climatic
The maps that reflect the territorial distribution of climatic conditions based on the results of long-term observations are called climatic maps. These maps can be compiled both for individual climatic features (temperature, precipitation, humidity) and for combinations of them at the earth's surface and in the upper layers of the atmosphere. Climatic maps show climatic features across a large region and permit values of climatic features to be compared in different parts of the region. When generating the map, spatial interpolation can be used to synthesize values where there are no measurements, under the assumption that conditions change smoothly.
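One simple form of the spatial interpolation mentioned above is inverse-distance weighting. The sketch below uses hypothetical station values, and IDW is only one of several methods an atlas might use (kriging is a common alternative):

```python
# Inverse-distance weighting: estimate a climatic value at (x, y) from
# nearby station observations, weighting each station by 1/distance^power.

def idw(stations, x, y, power=2.0):
    """Estimate a value at (x, y) from (xi, yi, value) station tuples."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return value  # query point coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Mean temperatures at three hypothetical stations:
obs = [(0.0, 0.0, 10.0), (10.0, 0.0, 14.0), (0.0, 10.0, 12.0)]
print(round(idw(obs, 5.0, 0.0), 1))  # 12.0
```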
Climatic maps generally apply to individual months and the year as a whole, sometimes to the four seasons, to the growing period, and so forth. On maps compiled from the observations of ground meteorological stations, atmospheric pressure is converted to sea level. Air temperature maps are compiled both from the actual values observed on the surface of the earth and from values converted to sea level. The pressure field in the free atmosphere is represented either by maps of the distribution of pressure at different standard altitudes—for example, at every kilometer above sea level—or by maps of baric topography on which altitudes (more precisely geopotentials) of the main isobaric surfaces (for example, 900, 800, and 700 millibars) counted off from sea level are plotted. The temperature, humidity, and wind on aeroclimatic maps may apply either to standard altitudes or to the main isobaric surfaces.
Isolines are drawn on maps of such climatic features as the long-term mean values (of atmospheric pressure, temperature, humidity, total precipitation, and so forth) to connect points with equal values of the feature in question—for example, isobars for pressure, isotherms for temperature, and isohyets for precipitation. Isoamplitudes are drawn on maps of amplitudes (for example, annual amplitudes of air temperature—that is, the differences between the mean temperatures of the warmest and coldest month). Isanomals are drawn on maps of anomalies (for example, deviations of the mean temperature of each place from the mean temperature of the entire latitudinal zone). Isolines of frequency are drawn on maps showing the frequency of a particular phenomenon (for example, the annual number of days with a thunderstorm or snow cover). Isochrones are drawn on maps showing the dates of onset of a given phenomenon (for example, the first frost and appearance or disappearance of the snow cover) or the date of a particular value of a meteorological element in the course of a year (for example, passing of the mean daily air temperature through zero). Isolines of the mean numerical value of wind velocity or isotachs are drawn on wind maps (charts); the wind resultants and directions of prevailing winds are indicated by arrows of different lengths or arrows with different plumes; lines of flow are often drawn. Maps of the zonal and meridional components of wind are frequently compiled for the free atmosphere. Atmospheric pressure and wind are usually combined on climatic maps. Wind roses, curves showing the distribution of other meteorological elements, diagrams of the annual course of elements at individual stations, and the like are also plotted on climatic maps.
Maps of climatic regionalization, that is, division of the earth's surface into climatic zones and regions according to some classification of climates, are a special kind of climatic map.
Climatic maps are often incorporated into climatic atlases of varying geographic ranges (globe, hemispheres, continents, countries, oceans) or included in comprehensive atlases. Besides general climatic maps, applied climatic maps and atlases have great practical value. Aeroclimatic maps, aeroclimatic atlases, and agroclimatic maps are the most numerous.
Extraterrestrial
Maps exist of the Solar System and of other cosmological features such as star maps. In addition, maps of other bodies such as the Moon and other planets are technically not geographical maps.
Floor maps are also spatial but not necessarily geospatial.
Topological
Diagrams such as schematic diagrams, Gantt charts, and treemaps display logical relationships between items, rather than geographical relationships. These maps are topological in nature: only the connectivity is significant. The London Underground map and similar subway maps around the world are a common example.
General
General-purpose maps provide many types of information on one map. Most atlas maps, wall maps, and road maps fall into this category. The following are some features that might be shown on general-purpose maps: bodies of water, roads, railway lines, parks, elevations, towns and cities, political boundaries, latitude and longitude, national and provincial parks. These maps give a broad understanding of the location and features of an area. The reader may gain an understanding of the type of landscape, the location of urban places, and the location of major transportation routes all at once.
List
Aeronautical chart
Atlas
Cadastral map
Climatic map
Geologic map
Historical map
Linguistic map
Nautical map
Physical map
Political map
Relief map
Resource map
Road map
Star map
Street map
Thematic map
Topographic map
Train track map
Transit map
Weather map
World map
Legal regulation
Some countries require that all published maps represent their national claims regarding border disputes. For example:
Within Russia, Google Maps shows Crimea as part of Russia.
Both the Republic of India and the People's Republic of China require that all maps show areas subject to the Sino-Indian border dispute in their own favor.
In 2010, the People's Republic of China began requiring that all online maps served from within China be hosted there, making them subject to Chinese laws.
See also
General
Counter-mapping
Map–territory relation
Censorship of maps
List of online map services
Map collection
Map designing and types
Automatic label placement
City map
Compass rose
Contour map
Estate map
Fantasy map
Floor plan
Geologic map
Hypsometric tints
Map design
Orthophotomap—A map created from orthophotography
Pictorial maps
Plat
Road atlas
Transit map
Page layout (cartography)
Map history
Early world maps
History of cartography
List of cartographers
Related topics
Aerial landscape art
Digital geologic mapping
Economic geography
Geographic coordinate system
Index map
Global Map
List of online map services
Map database management
References
Citations
Bibliography
David Buisseret, ed., Monarchs, Ministers and Maps: The Emergence of Cartography as a Tool of Government in Early Modern Europe. Chicago: University of Chicago Press, 1992.
Denis E. Cosgrove (ed.) Mappings. Reaktion Books, 1999
Freeman, Herbert, Automated Cartographic Text Placement. White paper.
Ahn, J. and Freeman, H., “A program for automatic name placement,” Proc. AUTO-CARTO 6, Ottawa, 1983. 444–455.
Freeman, H., “Computer Name Placement,” ch. 29, in Geographical Information Systems, 1, D.J. Maguire, M.F. Goodchild, and D.W. Rhind, John Wiley, New York, 1991, 449–460.
Mark Monmonier, How to Lie with Maps.
O'Connor, J.J. and E.F. Robertson, The History of Cartography. Scotland : St. Andrews University, 2002.
External links
International Cartographic Association (ICA), the world body for mapping and GIScience professionals
Geography and Maps, an Illustrated Guide, by the staff of the U.S. Library of Congress.
The History of Cartography Project at the University of Wisconsin, a comprehensive research project in the history of maps and mapping
Cartography
Geodesy
Geography
Management
Management (or managing) is the administration of an organization, whether it is a business, a non-profit organization, or a government body. It is the art and science of managing resources.
Management includes the activities of setting the strategy of an organization and coordinating the efforts of its employees (or of volunteers) to accomplish its objectives through the application of available resources, such as financial, natural, technological, and human resources. "Run the business" and "Change the business" are two concepts used in management to differentiate between the continued delivery of goods or services and the adaptation of goods or services to meet the changing needs of customers (see trend). The term "management" may also refer to those people who manage an organization: managers.
Some people study management at colleges or universities; major degrees in management include the Bachelor of Commerce (B.Com.), Bachelor of Business Administration (BBA), Master of Business Administration (MBA), Master in Management (MSM or MIM) and, for the public sector, the Master of Public Administration (MPA) degree. Individuals who aim to become management specialists or experts, management researchers, or professors may complete the Doctor of Management (DM), the Doctor of Business Administration (DBA), or the Ph.D. in Business Administration or Management. There has recently been a movement for evidence-based management.
Larger organizations generally have three hierarchical levels of managers, in a pyramid structure:
Senior managers, such as members of a board of directors and a chief executive officer (CEO) or a president of an organization. They set the strategic goals of the organization and make decisions on how the overall organization will operate. Senior managers are generally executive-level professionals and provide direction to middle management, who directly or indirectly report to them.
Middle managers: examples of these would include branch managers, regional managers, department managers, and section managers, who provide direction to front-line managers. Middle managers communicate the strategic goals of senior management to the front-line managers.
Lower managers, such as supervisors and front-line team leaders, oversee the work of regular employees (or volunteers, in some voluntary organizations) and provide direction on their work.
In smaller organizations, a manager may have a much wider scope and may perform several roles or even all of the roles commonly observed in a large organization.
Social scientists study management as an academic discipline, investigating areas such as social organization, organizational adaptation, and organizational leadership.
Etymology
The English verb "manage" has its roots in the fifteenth-century French verb mesnager, which in equestrian language often referred to holding in hand the reins of a horse. The Italian term maneggiare (to handle, especially tools or a horse) is another possible source. In Spanish, manejar can also mean to handle horses. These terms all derive from the two Latin words manus (hand) and agere (to act).
The French word for housekeeping, ménagerie, derived from ménager ("to keep house"; compare ménage for "household"), also encompasses taking care of domestic animals. Ménagerie is the French translation of Xenophon's famous book Oeconomicus on household matters and husbandry. The French word mesnagement (or ménagement) influenced the semantic development of the English word "management" in the 17th and 18th centuries.
Definitions
Views on the definition and scope of management include:
Henri Fayol (1841–1925) stated: "to manage is to forecast and to plan, to organise, to command, to co-ordinate and to control."
Fredmund Malik (1944– ) defines management as "the transformation of resources into utility".
Management is included as one of the factors of production – along with machines, materials and money.
Ghislain Deslandes defines management as "a vulnerable force, under pressure to achieve results and endowed with the triple power of constraint, imitation and imagination, operating on subjective, interpersonal, institutional and environmental levels".
Peter Drucker (1909–2005) saw the basic task of management as twofold: marketing and innovation. Nevertheless, innovation is also linked to marketing (product innovation is a central strategic marketing issue). Drucker identifies marketing as a key essence for business success, but management and marketing are generally understood as two different branches of business administration knowledge.
Theoretical scope
Management involves identifying the mission, objectives, procedures, rules and the manipulation of the human capital of an enterprise to contribute to the success of the enterprise. Scholars have focused on the management of individual, organizational, and inter-organizational relationships. This implies effective communication: an enterprise environment (as opposed to a physical or mechanical mechanism) implies human motivation and implies some sort of successful progress or system outcome. As such, management is not the manipulation of a mechanism (machine or automated program), nor the herding of animals, and it can occur in either a legal or an illegal enterprise or environment. From an individual's perspective, management does not need to be seen solely from an enterprise point of view, because management is an essential function in improving one's life and relationships. Management is therefore everywhere, and it has a wide range of application. Communication and a positive endeavor are two main aspects of it, whether through enterprise or through independent pursuit. Plans, measurements, motivational psychological tools, goals, and economic measures (profit, etc.) may or may not be necessary components for there to be management. At first, one views management functionally, such as measuring quantity, adjusting plans, and meeting goals, but this applies even in situations where planning does not take place. From this perspective, Henri Fayol (1841–1925)
considers management to consist of five functions:
planning (forecasting)
organizing
commanding
coordinating
controlling
In another way of thinking, Mary Parker Follett (1868–1933) allegedly defined management as "the art of getting things done through people".
She described management as a philosophy.
Critics, however, find this definition useful but far too narrow. The phrase "management is what managers do" occurs widely, suggesting the difficulty of defining management without circularity, the shifting nature of definitions, and the connection of managerial practices with the existence of a managerial cadre or of a class.
One habit of thought regards management as equivalent to "business administration" and thus excludes management in places outside commerce, as for example in charities and in the public sector. More broadly, every organization must "manage" its work, people, processes, technology, etc. to maximize effectiveness. Nonetheless, many people refer to university departments that teach management as "business schools". Some such institutions (such as the Harvard Business School) use that name, while others (such as the Yale School of Management) employ the broader term "management".
English-speakers may also use the term "management" or "the management" as a collective word describing the managers of an organization, for example of a corporation.
Historically this use of the term often contrasted with the term "labor" – referring to those being managed.
In the present era, however, the concept of management has been identified with wider areas, and its frontiers have been pushed to a broader range. Apart from for-profit organizations, even non-profit organizations apply management concepts. The concept and its uses are not constrained to business. Management as a whole is the process of planning, organizing, directing, leading and controlling.
Levels
Most organizations have three management levels: first-level, middle-level, and top-level managers. First-line managers are the lowest level of management and manage the work of non-managerial individuals who are directly involved with the production or creation of the organization's products. First-line managers are often called supervisors, but may also be called line managers, office managers, or even foremen. Middle managers include all levels of management between the first-line level and the top level of the organization. These managers manage the work of first-line managers and may have titles such as department head, project leader, plant manager, or division manager. Top managers are responsible for making organization-wide decisions and establishing the plans and goals that affect the entire organization. These individuals typically have titles such as executive vice president, president, managing director, chief operating officer, chief executive officer, or chairman of the board.
These managers are classified in a hierarchy of authority and perform different tasks. In many organizations, the number of managers at each level resembles a pyramid. Each level is explained below, with its typical responsibilities and likely job titles.
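The three-level pyramid described above can be modeled as a simple reporting tree. The sketch below is purely illustrative; the titles and reporting lines are hypothetical examples, not a prescription for any real organization.

```python
# Illustrative sketch: the three management levels as a reporting tree.
# Titles and reporting lines are hypothetical examples.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Manager:
    title: str
    level: str                              # "top", "middle", or "first-line"
    reports: List["Manager"] = field(default_factory=list)


def layers(m: Manager) -> int:
    """Count the management layers from this manager down."""
    if not m.reports:
        return 1
    return 1 + max(layers(r) for r in m.reports)


supervisor = Manager("Supervisor", "first-line")
dept_head = Manager("Department Head", "middle", [supervisor])
ceo = Manager("Chief Executive Officer", "top", [dept_head])

print(layers(ceo))  # 3 — top, middle, and first-line
```

In a smaller organization, the same structure might collapse to one or two layers, matching the observation that a single manager may perform several or all of these roles.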
Top
The top or senior layer of management consists of the board of directors (including non-executive directors, executive directors and independent directors), president, vice-president, CEOs and other members of the C-level executives. Different organizations have various members in their C-suite, which may include a chief financial officer, chief technology officer, and so on. They are responsible for controlling and overseeing the operations of the entire organization. They set a "tone at the top" and develop strategic plans, company policies, and make decisions on the overall direction of the organization. In addition, top-level managers play a significant role in the mobilization of outside resources. Senior managers are accountable to the shareholders, the general public and to public bodies that oversee corporations and similar organizations. Some members of the senior management may serve as the public face of the organization, and they may make speeches to introduce new strategies or appear in marketing.
The board of directors is typically primarily composed of non-executives who owe a fiduciary duty to shareholders and are not closely involved in the day-to-day activities of the organization, although this varies depending on the type (e.g., public versus private), size and culture of the organization. These directors are theoretically liable for breaches of that duty and typically insured under directors and officers liability insurance. Fortune 500 directors are estimated to spend 4.4 hours per week on board duties, and median compensation was $212,512 in 2010. The board sets corporate strategy, makes major decisions such as major acquisitions, and hires, evaluates, and fires the top-level manager (chief executive officer or CEO). The CEO typically hires other positions. However, board involvement in the hiring of other positions such as the chief financial officer (CFO) has increased. In 2013, a survey of over 160 CEOs and directors of public and private companies found that the top weaknesses of CEOs were "mentoring skills" and "board engagement", and 10% of companies never evaluated the CEO. The board may also have certain employees (e.g., internal auditors) report to them or directly hire independent contractors; for example, the board (through the audit committee) typically selects the auditor.
Helpful skills of top management vary by the type of organization but typically include a broad understanding of competition, world economies, and politics. In addition, the CEO is responsible for implementing and determining (within the board's framework) the broad policies of the organization. Executive management accomplishes the day-to-day details, including: instructions for preparation of department budgets, procedures, schedules; appointment of middle level executives such as department managers; coordination of departments; media and governmental relations; and shareholder communication.
Middle
Middle management consists of general managers, branch managers and department managers. They are accountable to the top management for their department's function, and they devote more time to organizational and directional functions. Their roles include executing organizational plans in conformance with the company's policies and the objectives of the top management; defining, discussing and relaying information and policies from top management to lower management; and, most importantly, inspiring and providing guidance to lower-level managers towards better performance.
Middle management is the midway level of a categorized organization, secondary to the senior management but above the deepest levels of operational members. An operational manager may be considered part of middle management or may be categorized as non-management, depending on the policy of the specific organization. The efficiency of the middle level is vital in any organization, since it bridges the gap between top-level and bottom-level staff.
Their functions include:
Design and implement effective group and inter-group work and information systems.
Define and monitor group-level performance indicators.
Diagnose and resolve problems within and among workgroups.
Design and implement reward systems that support cooperative behavior. They also make decisions and share ideas with top managers.
Lower
Lower managers include supervisors, section leaders, forepersons and team leaders. They focus on controlling and directing regular employees. They are usually responsible for assigning employees' tasks, guiding and supervising employees on day-to-day activities, ensuring the quality and quantity of production and/or service, making recommendations and suggestions to employees on their work, and channeling employee concerns that they cannot resolve to mid-level managers or other administrators. First-level or "front line" managers also act as role models for their employees. In some types of work, front line managers may also do some of the same tasks that employees do, at least some of the time. For example, in some restaurants, the front line managers will also serve customers during a very busy period of the day.
Front-line managers typically provide:
Training for new employees
Basic supervision
Motivation
Performance feedback and guidance
Some front-line managers may also provide career planning for employees who aim to rise within the organization.
Training
Colleges and universities around the world offer bachelor's degrees, graduate degrees, diplomas and certificates in management, generally within their colleges of business, business schools or faculty of management but also in other related departments. In the 2010s, there has been an increase in online management education and training in the form of electronic educational technology (also called e-learning). Online education has increased the accessibility of management training to people who do not live near a college or university, or who cannot afford to travel to a city where such training is available.
Requirement
While some professions require academic credentials in order to work in the profession (e.g., law, medicine, engineering, which require, respectively, the Bachelor of Law, Doctor of Medicine and Bachelor of Engineering degrees), management and administration positions do not necessarily require the completion of academic degrees. Some well-known senior executives in the US who did not complete a degree include Steve Jobs, Bill Gates and Mark Zuckerberg. However, many managers and executives have completed some type of business or management training, such as a Bachelor of Commerce or a Master of Business Administration degree. Some major organizations, including companies, non-profit organizations and governments, require applicants to managerial or executive positions to hold at minimum a bachelor's degree in a field related to administration or management, or, in the case of business jobs, a Bachelor of Commerce or a similar degree.
Undergraduate
At the undergraduate level, the most common business programs are the Bachelor of Business Administration (BBA) and Bachelor of Commerce (B.Com.).
These typically comprise a four-year program designed to give students an overview of the role of managers in planning and directing within an organization.
Course topics include accounting, financial management, statistics, marketing, strategy, and other related areas.
There are many other undergraduate degrees that include the study of management, such as Bachelor of Arts degrees with a major in business administration or management and the Bachelor of Public Administration (B.P.A.), a degree designed for individuals aiming to work as bureaucrats in government jobs.
Many colleges and universities also offer certificates and diplomas in business administration or management, which typically require one to two years of full-time study.
Note that to manage technological areas, one often needs an undergraduate degree in a STEM area.
Graduate
At the graduate level students aiming at careers as managers or executives may choose to specialize in major subareas of management or business administration such as entrepreneurship, human resources, international business, organizational behavior, organizational theory, strategic management, accounting, corporate finance, entertainment, global management, healthcare management, investment management, sustainability and real estate.
A Master of Business Administration (MBA) is the most popular professional degree at the master's level and can be obtained from many universities in the United States. MBA programs provide further education in management and leadership for graduate students. Other master's degrees in business and management include Master of Management (MM) and the Master of Science (M.Sc.) in business administration or management, which is typically taken by students aiming to become researchers or professors.
There are also specialized master's degrees in administration for individuals aiming at careers outside of business, such as the Master of Public Administration (MPA) degree (also offered as a Master of Arts in Public Administration in some universities), for students aiming to become managers or executives in the public service and the Master of Health Administration, for students aiming to become managers or executives in the health care and hospital sector.
Management doctorates are the most advanced terminal degrees in the field of business and management. Most individuals obtaining management doctorates take the programs to obtain the training in research methods, statistical analysis and writing academic papers that they will need to seek careers as researchers, senior consultants and/or professors in business administration or management. There are three main types of management doctorates: the Doctor of Management (D.M.), the Doctor of Business Administration (D.B.A.), and the Ph.D. in Business Administration or Management. In the 2010s, doctorates in business administration and management are available with many specializations.
Good practices
While management trends can change quickly, the long-term trend in management has been defined by a market embracing diversity and a rising service industry. Managers are currently being trained to encourage greater equality for minorities and women in the workplace by offering increased flexibility in working hours, better retraining, and innovative (and usually industry-specific) performance markers. Managers destined for the service sector are being trained to use unique measurement techniques, better worker support and more charismatic leadership styles. Human resources finds itself increasingly working with management in a training capacity to help collect management data on the success (or failure) of management actions with employees.
Evidence-based management
Evidence-based management is an emerging movement to use the current, best evidence in management and decision-making. It is part of the larger movement towards evidence-based practices. Evidence-based management entails managerial decisions and organizational practices informed by the best available evidence. As with other evidence-based practice, this is based on the three principles of: 1) published peer-reviewed (often in management or social science journals) research evidence that bears on whether and why a particular management practice works; 2) judgement and experience from contextual management practice, to understand the organization and interpersonal dynamics in a situation and determine the risks and benefits of available actions; and 3) the preferences and values of those affected.
History
Some see management as a late-modern (in the sense of late modernity) conceptualization. On those terms it cannot have a pre-modern history – only harbingers (such as stewards). Others, however, detect management-like thought among ancient Sumerian traders and the builders of the pyramids of ancient Egypt. Slave-owners through the centuries faced the problems of exploiting/motivating a dependent but sometimes unenthusiastic or recalcitrant workforce, but many pre-industrial enterprises, given their small scale, did not feel compelled to face the issues of management systematically. However, innovations such as the spread of Arabic numerals (5th to 15th centuries) and the codification of double-entry book-keeping (1494) provided tools for management assessment, planning and control.
An organisation is more stable if members have the right to express their differences and solve their conflicts within it.
While one person can begin an organisation, "it is lasting when it is left in the care of many and when many desire to maintain it".
A weak manager can follow a strong one, but not another weak one, and maintain authority.
A manager seeking to change an established organization "should retain at least a shadow of the ancient customs".
With the changing workplaces of industrial revolutions in the 18th and 19th centuries, military theory and practice contributed approaches to managing the newly popular factories.
Given the scale of most commercial operations and the lack of mechanized record-keeping and recording before the industrial revolution, it made sense for most owners of enterprises in those times to carry out management functions by and for themselves. But with growing size and complexity of organizations, a distinction between owners (individuals, industrial dynasties or groups of shareholders) and day-to-day managers (independent specialists in planning and control) gradually became more common.
Early writing
The field of management originated in ancient China, including possibly the first highly centralized bureaucratic state and the earliest (by the second century BC) example of an administration based on merit through testing. Some theorists have cited ancient military texts as providing lessons for civilian managers. For example, the Chinese general Sun Tzu in his 6th-century BC work The Art of War recommends (when re-phrased in modern terminology) being aware of and acting on the strengths and weaknesses of both a manager's organization and a foe's. The writings of the influential Chinese Legalist philosopher Shen Buhai may be considered to embody a rare premodern example of an abstract theory of administration. The American philosopher Herrlee G. Creel and other scholars find the influence of Chinese administration in Europe by the 12th century. Thomas Taylor Meadows, Britain's consul in Guangzhou, argued in his Desultory Notes on the Government and People of China (1847) that "the long duration of the Chinese empire is solely and altogether owing to the good government which consists in the advancement of men of talent and merit only," and that the British must reform their civil service by making the institution meritocratic. Influenced by the ancient Chinese imperial examination, the Northcote–Trevelyan Report of 1854 recommended that recruitment should be on the basis of merit determined through competitive examination, that candidates should have a solid general education to enable inter-departmental transfers, and that promotion should be through achievement rather than "preferment, patronage, or purchase". This led to the implementation of Her Majesty's Civil Service as a systematic, meritocratic civil service bureaucracy. Like the British civil service, the French bureaucracy was influenced by the Chinese system. Voltaire claimed that the Chinese had "perfected moral science" and François Quesnay advocated an economic and political system modeled after that of the Chinese.
French civil service examinations adopted in the late 19th century were also heavily based on general cultural studies. These features have been likened to the earlier Chinese model.
Various ancient and medieval civilizations produced "mirrors for princes" books, which aimed to advise new monarchs on how to govern. Plato described job specialization in 350 BC, and Alfarabi listed several leadership traits in AD 900. Other examples include the Indian Arthashastra by Chanakya (written around 300 BC) and The Prince by the Italian author Niccolò Machiavelli (c. 1515).
Written in 1776 by Adam Smith, a Scottish moral philosopher, The Wealth of Nations discussed efficient organization of work through division of labour.
Smith described how changes in processes could boost productivity in the manufacture of pins. While individuals could produce 200 pins per day, Smith analyzed the steps involved in manufacture and, with 10 specialists, enabled production of 48,000 pins per day.
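Smith's figures can be checked with a quick back-of-the-envelope calculation, using the numbers given above:

```python
# Productivity gain from division of labour, using Smith's pin-factory numbers.
pins_per_generalist = 200          # pins per day, one worker doing every step
specialists = 10                   # workers, each specializing in one step
pins_with_division = 48_000        # pins per day for the specialized team

per_worker_specialized = pins_with_division / specialists       # 4,800 pins/day
productivity_gain = per_worker_specialized / pins_per_generalist

print(per_worker_specialized)  # 4800.0
print(productivity_gain)       # 24.0 — each specialist is 24x as productive
```

In other words, dividing the work among ten specialists raised output per worker from 200 to 4,800 pins per day, a twenty-four-fold increase.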
19th century
Classical economists such as Adam Smith (1723–1790) and John Stuart Mill (1806–1873) provided a theoretical background to resource allocation, production (economics), and pricing issues. About the same time, innovators like Eli Whitney (1765–1825), James Watt (1736–1819), and Matthew Boulton (1728–1809) developed elements of technical production such as standardization, quality-control procedures, cost-accounting, interchangeability of parts, and work-planning. Many of these aspects of management existed in the pre-1861 slave-based sector of the US economy. That environment saw 4 million people, as the contemporary usages had it, "managed" in profitable quasi-mass production before wage slavery eclipsed chattel slavery.
Salaried managers as an identifiable group first became prominent in the late 19th century. As large corporations began to overshadow small family businesses, the need for personnel management positions grew. Businesses grew into large corporations and the need for clerks, bookkeepers, secretaries and managers expanded. The demand for trained managers led college and university administrators to consider and move forward with plans to create the first schools of business on their campuses.
20th century
At the turn of the twentieth century the need for skilled and trained managers had become increasingly apparent. The demand occurred as personnel departments began to expand rapidly. In 1915, less than one in twenty manufacturing firms had a dedicated personnel department. By 1929 that number had grown to over one-third. Formal management education became standardized at colleges and universities. Colleges and universities capitalized on the needs of corporations by forming business schools and corporate placement departments. This shift toward formal business education marked the creation of a corporate elite in the US.
By about 1900 one finds managers trying to place their theories on what they regarded as a thoroughly scientific basis (see scientism for perceived limitations of this belief). Examples include Henry R. Towne's Science of management in the 1890s, Frederick Winslow Taylor's The Principles of Scientific Management (1911), Lillian Gilbreth's Psychology of Management (1914), Frank and Lillian Gilbreth's Applied motion study (1917), and Henry L. Gantt's charts (1910s). J. Duncan wrote the first college management textbook in 1911. In 1912 Yoichi Ueno introduced Taylorism to Japan and became the first management consultant of the "Japanese management style". His son Ichiro Ueno pioneered Japanese quality assurance.
The first comprehensive theories of management appeared around 1920. The Harvard Business School offered the first Master of Business Administration degree (MBA) in 1921. People like Henri Fayol (1841–1925) and Alexander Church (1866–1936) described the various branches of management and their inter-relationships. In the early 20th century, people like Ordway Tead (1891–1973), Walter Scott (1869–1955) and J. Mooney applied the principles of psychology to management. Other writers, such as Elton Mayo (1880–1949), Mary Parker Follett (1868–1933), Chester Barnard (1886–1961), Max Weber (1864–1920), who saw what he called the "administrator" as bureaucrat, Rensis Likert (1903–1981), and Chris Argyris (born 1923) approached the phenomenon of management from a sociological perspective.
The 1930s and 1940s saw the development of a militarization trend in management in parts of Eurasia – both the NKVD (in the Soviet Union) and the SS (in the Greater Germanic Reich), for example, managed labor camps as industrial enterprises using slave labor supervised by uniformed cadres.
Military habits persisted in some management circles.
Peter Drucker (1909–2005) wrote one of the earliest books on applied management: Concept of the Corporation (published in 1946). It resulted from Alfred Sloan (chairman of General Motors until 1956) commissioning a study of the organisation. Drucker went on to write 39 books, many in the same vein.
H. Dodge, Ronald Fisher (1890–1962), and Thornton C. Fry introduced statistical techniques into management-studies. In the 1940s, Patrick Blackett worked in the development of the applied-mathematics science of operations research, initially for military operations. Operations research, sometimes known as "management science" (but distinct from Taylor's scientific management), attempts to take a scientific approach to solving decision-problems, and can apply directly to multiple management problems, particularly in the areas of logistics and operations.
Some of the later 20th-century developments include the theory of constraints (introduced in 1984), management by objectives (systematised in 1954), re-engineering (early 1990s), Six Sigma (1986), management by walking around (1970s), the Viable system model (1972), and various information-technology-driven theories such as agile software development (so-named from 2001), as well as group-management theories such as Cog's Ladder (1972) and the notion of "thriving on chaos" (1987).
As the general recognition of managers as a class solidified during the 20th century and gave perceived practitioners of the art/science of management a certain amount of prestige, so the way opened for popularised systems of management ideas to peddle their wares. In this context many management fads may have had more to do with pop psychology than with scientific theories of management.
Business management includes the following branches:
financial management
human resource management
Management cybernetics
information technology management (responsible for management information systems)
marketing management
operations management and production management
strategic management
21st century
In the 21st century observers find it increasingly difficult to subdivide management into functional categories in this way. More and more processes simultaneously involve several categories. Instead, one tends to think in terms of the various processes, tasks, and objects subject to management.
Branches of management theory also exist relating to nonprofits and to government: such as public administration, public management, and educational management. Further, management programs related to civil-society organizations have also spawned programs in nonprofit management and social entrepreneurship.
Note that many of the assumptions made by management have come under attack from business-ethics viewpoints, critical management studies, and anti-corporate activism.
As one consequence, workplace democracy (sometimes referred to as Workers' self-management) has become both more common and more advocated, in some places distributing all management functions among workers, each of whom takes on a portion of the work. However, these models predate any current political issue, and may occur more naturally than does a command hierarchy. All management embraces to some degree a democratic principle—in that in the long term, the majority of workers must support management. Otherwise, they leave to find other work or go on strike. Despite the move toward workplace democracy, command-and-control organization structures remain commonplace as de facto organization structures. Indeed, the entrenched nature of command-and-control is evident in the way that recent layoffs have been conducted with management ranks affected far less than employees at the lower levels. In some cases, management has even rewarded itself with bonuses after laying off lower-level workers.
According to leadership academic Manfred F. R. Kets de Vries, a contemporary senior-management team will almost inevitably include some members with personality disorders.
Nature of work
In profitable organizations, management's primary function is the satisfaction of a range of stakeholders. This typically involves making a profit (for the shareholders), creating valued products at a reasonable cost (for customers), and providing great employment opportunities for employees. In nonprofit management, one of the main functions is keeping the faith of donors. In most models of management and governance, shareholders vote for the board of directors, and the board then hires senior management. Some organizations have experimented with other methods (such as employee-voting models) of selecting or reviewing managers, but this is rare.
Topics
Basics
According to Fayol, management operates through five basic functions: planning, organizing, coordinating, commanding, and controlling.
Planning: Deciding what needs to happen in the future and generating plans for action (deciding in advance).
Organizing (or staffing): Making sure the human and nonhuman resources are put into place.
Commanding (or leading): Determining what must be done in a situation and getting people to do it.
Coordinating: Creating a structure through which an organization's goals can be accomplished.
Controlling: Checking progress against plans.
Basic roles
Interpersonal: roles that involve coordination and interaction with employees.
Figurehead, leader
Informational: roles that involve handling, sharing, and analyzing information.
Nerve centre, disseminator
Decision: roles that require decision-making.
Entrepreneur, negotiator, allocator, disturbance handler
Skills
Management skills include:
political: used to build a power base and to establish connections.
conceptual: used to analyze complex situations.
interpersonal: used to communicate, motivate, mentor and delegate.
diagnostic: ability to visualize appropriate responses to a situation.
leadership: ability to communicate a vision and inspire people to embrace that vision.
cross-cultural leadership: ability to understand the effects of culture on leadership style.
technical: expertise in one's particular functional area.
behavioral: perception towards others, conflict resolution, time-management, self-improvement, stress management and resilience, patience, clear communication.
Implementation of policies and strategies
All policies and strategies must be discussed with all managerial personnel and staff.
Managers must understand where and how they can implement their policies and strategies.
An action plan must be devised for each department.
Policies and strategies must be reviewed regularly.
Contingency plans must be devised in case the environment changes.
Top-level managers should carry out regular progress assessments.
The business requires team spirit and a good environment.
The missions, objectives, strengths and weaknesses of each department must be analyzed to determine their roles in achieving the business's mission.
Forecasting is used to develop a reliable picture of the business's future environment.
A planning unit must be created to ensure that all plans are consistent and that policies and strategies are aimed at achieving the same mission and objectives.
Policies and strategies in the planning process
They give mid- and lower-level managers a good idea of the future plans for each department in an organization.
A framework is created whereby plans and decisions are made.
Mid- and lower-level management may add their own plans to the business's strategies.
See also
Engineering management
Outline of business management
References
External links
Mineralogy
Mineralogy is a subject of geology specializing in the scientific study of the chemistry, crystal structure, and physical (including optical) properties of minerals and mineralized artifacts. Specific studies within mineralogy include the processes of mineral origin and formation, classification of minerals, their geographical distribution, as well as their utilization.
History
Early writing on mineralogy, especially on gemstones, comes from ancient Babylonia, the ancient Greco-Roman world, ancient and medieval China, Sanskrit texts from ancient India, and writings from the ancient Islamic world. Books on the subject included the Naturalis Historia of Pliny the Elder, which not only described many different minerals but also explained many of their properties, and Kitab al Jawahir (Book of Precious Stones) by the Persian scientist Al-Biruni. The German Renaissance specialist Georgius Agricola wrote works such as De re metallica (On Metals, 1556) and De Natura Fossilium (On the Nature of Rocks, 1546) which began the scientific approach to the subject. Systematic scientific studies of minerals and rocks developed in post-Renaissance Europe. The modern study of mineralogy was founded on the principles of crystallography (the origins of geometric crystallography, itself, can be traced back to the mineralogy practiced in the eighteenth and nineteenth centuries) and on the microscopic study of rock sections, made possible by the invention of the microscope in the 17th century.
Nicolas Steno first observed the law of constancy of interfacial angles (also known as the first law of crystallography) in quartz crystals in 1669. This was later generalized and established experimentally by Jean-Baptiste L. Romé de l'Isle in 1783. René Just Haüy, the "father of modern crystallography", showed that crystals are periodic and established that the orientations of crystal faces can be expressed in terms of rational numbers, as later encoded in the Miller indices. In 1814, Jöns Jacob Berzelius introduced a classification of minerals based on their chemistry rather than their crystal structure. William Nicol developed the Nicol prism, which polarizes light, in 1827–1828 while studying fossilized wood; Henry Clifton Sorby showed that thin sections of minerals could be identified by their optical properties using a polarizing microscope. James D. Dana published his first edition of A System of Mineralogy in 1837, and in a later edition introduced a chemical classification that is still the standard. X-ray diffraction was demonstrated by Max von Laue in 1912, and developed into a tool for analyzing the crystal structure of minerals by the father-and-son team of William Henry Bragg and William Lawrence Bragg.
More recently, driven by advances in experimental technique (such as neutron diffraction) and available computational power, the latter of which has enabled extremely accurate atomic-scale simulations of the behaviour of crystals, the science has branched out to consider more general problems in the fields of inorganic chemistry and solid-state physics. It, however, retains a focus on the crystal structures commonly encountered in rock-forming minerals (such as the perovskites, clay minerals and framework silicates). In particular, the field has made great advances in the understanding of the relationship between the atomic-scale structure of minerals and their function; in nature, prominent examples would be accurate measurement and prediction of the elastic properties of minerals, which has led to new insight into seismological behaviour of rocks and depth-related discontinuities in seismograms of the Earth's mantle. To this end, in their focus on the connection between atomic-scale phenomena and macroscopic properties, the mineral sciences (as they are now commonly known) display perhaps more of an overlap with materials science than any other discipline.
Physical properties
An initial step in identifying a mineral is to examine its physical properties, many of which can be measured on a hand sample. These can be classified into density (often given as specific gravity); measures of mechanical cohesion (hardness, tenacity, cleavage, fracture, parting); macroscopic visual properties (luster, color, streak, luminescence, diaphaneity); magnetic and electric properties; radioactivity; and solubility in hydrogen chloride (HCl).
Hardness is determined by comparison with other minerals. In the Mohs scale, a standard set of minerals is numbered in order of increasing hardness from 1 (talc) to 10 (diamond). A harder mineral will scratch a softer one, so an unknown mineral can be placed on this scale by determining which minerals it scratches and which scratch it. A few minerals such as calcite and kyanite have a hardness that depends significantly on direction. Hardness can also be measured on an absolute scale using a sclerometer; compared to the absolute scale, the Mohs scale is nonlinear.
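The bracketing logic described above—placing an unknown mineral between the hardest reference it scratches and the softest reference that scratches it—can be sketched in a few lines. This is an illustrative sketch only; the function name and the example test results are hypothetical, not measurements.

```python
# Reference minerals of the Mohs scale, in order of increasing hardness (1-10).
MOHS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
        "orthoclase", "quartz", "topaz", "corundum", "diamond"]

def mohs_bracket(scratches):
    """scratches[name] = True if the unknown scratches that reference mineral.
    Returns (lower, upper) bounds on the unknown's Mohs hardness."""
    lower, upper = 1, 10
    for i, name in enumerate(MOHS, start=1):
        if scratches[name]:
            lower = max(lower, i)   # unknown is at least as hard as this reference
        else:
            upper = min(upper, i)   # this reference is at least as hard as the unknown
    return lower, upper

# Hypothetical test results: the unknown scratches talc through fluorite only,
# so its hardness lies between 4 (fluorite) and 5 (apatite).
tests = {m: (i < 4) for i, m in enumerate(MOHS)}
print(mohs_bracket(tests))  # (4, 5)
```

Note that the sketch ignores the directional hardness of minerals like calcite and kyanite, and the nonlinearity of the scale, both mentioned above.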
Tenacity refers to the way a mineral behaves when it is broken, crushed, bent, or torn. A mineral can be brittle, malleable, sectile, ductile, flexible or elastic. An important influence on tenacity is the type of chemical bond (e.g., ionic or metallic).
Of the other measures of mechanical cohesion, cleavage is the tendency to break along certain crystallographic planes. It is described by the quality (e.g., perfect or fair) and the orientation of the plane in crystallographic nomenclature.
Parting is the tendency to break along planes of weakness due to pressure, twinning or exsolution. Where these two kinds of break do not occur, fracture is a less orderly form that may be conchoidal (having smooth curves resembling the interior of a shell), fibrous, splintery, hackly (jagged with sharp edges), or uneven.
If the mineral is well crystallized, it will also have a distinctive crystal habit (for example, hexagonal, columnar, botryoidal) that reflects the crystal structure or internal arrangement of atoms. It is also affected by crystal defects and twinning. Many crystals are polymorphic, having more than one possible crystal structure depending on factors such as pressure and temperature.
Crystal structure
The crystal structure is the arrangement of atoms in a crystal. It is represented by a lattice of points which repeats a basic pattern, called a unit cell, in three dimensions. The lattice can be characterized by its symmetries and by the dimensions of the unit cell. These dimensions are represented by three lattice parameters (the cell edge lengths) and the angles between them. The lattice remains unchanged by certain symmetry operations about any given point in the lattice: reflection, rotation, inversion, and rotary inversion, a combination of rotation and reflection. Together, they make up a mathematical object called a crystallographic point group or crystal class. There are 32 possible crystal classes. In addition, there are operations that displace all the points: translation, screw axis, and glide plane. In combination with the point symmetries, they form 230 possible space groups.
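One reason the list of crystal classes is finite is that only certain rotations can map a lattice onto itself (the crystallographic restriction). A minimal sketch, assuming a simple cubic lattice of integer points, checks numerically that a fourfold (90°) rotation is a lattice symmetry while a fivefold (72°) rotation is not:

```python
import math

def preserves_lattice(angle_deg, points, tol=1e-9):
    """Check whether rotating each lattice point about the z-axis by the given
    angle lands it back on an integer lattice point, i.e. whether the rotation
    is a point symmetry of the simple cubic lattice."""
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    for x, y, z in points:
        rx, ry = c * x - s * y, s * x + c * y  # z is unchanged by the rotation
        if abs(rx - round(rx)) > tol or abs(ry - round(ry)) > tol:
            return False
    return True

# A small block of integer lattice points around the origin.
pts = [(x, y, z) for x in range(-2, 3) for y in range(-2, 3) for z in range(-2, 3)]
print(preserves_lattice(90, pts))   # True: fourfold rotation is a cubic symmetry
print(preserves_lattice(72, pts))   # False: fivefold symmetry is forbidden in lattices
```

The same finite-count argument, extended to all combinations of rotations, reflections, and inversions, yields the 32 crystal classes.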
Most geology departments have X-ray powder diffraction equipment to analyze the crystal structures of minerals. X-rays have wavelengths that are the same order of magnitude as the distances between atoms. Diffraction, the constructive and destructive interference between waves scattered at different atoms, leads to distinctive patterns of high and low intensity that depend on the geometry of the crystal. In a sample that is ground to a powder, the X-rays sample a random distribution of all crystal orientations. Powder diffraction can distinguish between minerals that may appear the same in a hand sample, for example quartz and its polymorphs tridymite and cristobalite.
Isomorphous minerals of different compositions have similar powder diffraction patterns, the main difference being in spacing and intensity of lines. For example, the NaCl (halite) crystal structure is space group Fm3m; this structure is shared by sylvite (KCl), periclase (MgO), bunsenite (NiO), galena (PbS), alabandite (MnS), chlorargyrite (AgCl), and osbornite (TiN).
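Bragg's law, n·λ = 2d·sin θ, is the relation underlying these powder patterns: each family of lattice planes produces a line at an angle fixed by its spacing. A sketch (the cell edges are approximate literature values, and a Cu Kα X-ray wavelength is assumed) shows why two isostructural minerals with different cell sizes give shifted line positions:

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha X-rays, in angstroms (assumed source)

def two_theta(a, hkl, lam=WAVELENGTH):
    """Diffraction angle 2-theta (degrees) for the (h,k,l) planes of a cubic
    crystal with cell edge a (angstroms), via Bragg's law n*lam = 2*d*sin(theta)."""
    h, k, l = hkl
    d = a / math.sqrt(h * h + k * k + l * l)   # d-spacing for a cubic lattice
    return 2 * math.degrees(math.asin(lam / (2 * d)))

# Halite (NaCl, a ~ 5.64 A) and periclase (MgO, a ~ 4.21 A) share the same
# structure, but the smaller cell shifts every line to higher angle.
for name, a in [("halite", 5.64), ("periclase", 4.21)]:
    lines = [round(two_theta(a, hkl), 1) for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]]
    print(name, lines)
```

In a real pattern the line intensities also differ, reflecting the different scattering power of the atoms, which is how the minerals are told apart in practice.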
Chemical elements
A few minerals are chemical elements, including sulfur, copper, silver, and gold, but the vast majority are compounds. The classical method for identifying composition is wet chemical analysis, which involves dissolving a mineral in an acid such as hydrochloric acid (HCl). The elements in solution are then identified using colorimetry, volumetric analysis or gravimetric analysis.
Since 1960, most chemical analysis is done using instruments. One of these, atomic absorption spectroscopy, is similar to wet chemistry in that the sample must still be dissolved, but it is much faster and cheaper. The solution is vaporized and its absorption spectrum is measured in the visible and ultraviolet range. Other techniques are X-ray fluorescence, electron microprobe analysis, atom probe tomography, and optical emission spectrography.
Optical
In addition to macroscopic properties such as colour or lustre, minerals have properties that require a polarizing microscope to observe.
Transmitted light
When light passes from air or a vacuum into a transparent crystal, some of it is reflected at the surface and some refracted. The latter is a bending of the light path that occurs because the speed of light changes as it goes into the crystal; Snell's law relates the bending angle to the refractive index, the ratio of speed in a vacuum to speed in the crystal. Crystals whose point symmetry group falls in the cubic system are isotropic: the index does not depend on direction. All other crystals are anisotropic: light passing through them is broken up into two plane polarized rays that travel at different speeds and refract at different angles.
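Snell's law can be applied directly. A minimal sketch, using approximate literature values for the two refractive indices of quartz (an anisotropic mineral), shows how the two polarized rays refract at slightly different angles:

```python
import math

def refraction_angle(theta_in_deg, n_crystal, n_outside=1.0):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the refraction angle
    (degrees) inside a crystal of refractive index n_crystal for light
    arriving at theta_in_deg from a medium of index n_outside (vacuum ~ 1.0)."""
    s = n_outside * math.sin(math.radians(theta_in_deg)) / n_crystal
    return math.degrees(math.asin(s))

# Quartz is weakly anisotropic: its two rays see slightly different indices
# (approximate values), so a 45-degree beam splits into two refracted rays.
for label, n in [("ordinary ray, n ~ 1.544", 1.544),
                 ("extraordinary ray, n ~ 1.553", 1.553)]:
    print(label, "->", round(refraction_angle(45.0, n), 2), "degrees")
```

The small angular difference between the two rays is what the polarizing microscope, described next, exploits to identify anisotropic minerals.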
A polarizing microscope is similar to an ordinary microscope, but it has two plane-polarized filters: a polarizer below the sample and an analyzer above it, polarized perpendicular to each other. Light passes successively through the polarizer, the sample and the analyzer. If there is no sample, the analyzer blocks all the light from the polarizer. However, an anisotropic sample will generally change the polarization so some of the light can pass through. Thin sections and powders can be used as samples.
When an isotropic crystal is viewed, it appears dark because it does not change the polarization of the light. However, when it is immersed in a calibrated liquid with a lower index of refraction and the microscope is thrown out of focus, a bright line called a Becke line appears around the perimeter of the crystal. By observing the presence or absence of such lines in liquids with different indices, the index of the crystal can be estimated, usually to within .
Systematic
Systematic mineralogy is the identification and classification of minerals by their properties. Historically, mineralogy was heavily concerned with taxonomy of the rock-forming minerals. In 1959, the International Mineralogical Association formed the Commission on New Minerals and Mineral Names to rationalize the nomenclature and regulate the introduction of new names. In July 2006, it was merged with the Commission on Classification of Minerals to form the Commission on New Minerals, Nomenclature, and Classification. There are over 6,000 named and unnamed minerals, and about 100 are discovered each year. The Manual of Mineralogy places minerals in the following classes: native elements, sulfides, sulfosalts, oxides and hydroxides, halides, carbonates, nitrates and borates, sulfates, chromates, molybdates and tungstates, phosphates, arsenates and vanadates, and silicates.
Formation environments
The environments of mineral formation and growth are highly varied, ranging from slow crystallization at the high temperatures and pressures of igneous melts deep within the Earth's crust to low-temperature precipitation from a saline brine at the Earth's surface.
Various possible methods of formation include:
sublimation from volcanic gases
deposition from aqueous solutions and hydrothermal brines
crystallization from an igneous magma or lava
recrystallization due to metamorphic processes and metasomatism
crystallization during diagenesis of sediments
formation by oxidation and weathering of rocks exposed to the atmosphere or within the soil environment.
Biomineralogy
Biomineralogy is a cross-over field between mineralogy, paleontology and biology. It is the study of how plants and animals stabilize minerals under biological control, and of the sequence of mineral replacement after deposition. It uses techniques from chemical mineralogy, especially isotopic studies, to determine such things as growth forms in living plants and animals as well as things like the original mineral content of fossils.
A new approach to mineralogy called mineral evolution explores the co-evolution of the geosphere and biosphere, including the role of minerals in the origin of life and processes such as mineral-catalyzed organic synthesis and the selective adsorption of organic molecules on mineral surfaces.
Mineral ecology
In 2011, several researchers began to develop a Mineral Evolution Database. This database integrates the crowd-sourced site Mindat.org, which has over 690,000 mineral-locality pairs, with the official IMA list of approved minerals and age data from geological publications.
This database makes it possible to apply statistics to answer new questions, an approach that has been called mineral ecology. One such question is how much of mineral evolution is deterministic and how much the result of chance. Some factors are deterministic, such as the chemical nature of a mineral and conditions for its stability; but mineralogy can also be affected by the processes that determine a planet's composition. In a 2015 paper, Robert Hazen and others analyzed the number of minerals involving each element as a function of its abundance. They found that Earth, with over 4800 known minerals and 72 elements, has a power law relationship. The Moon, with only 63 minerals and 24 elements (based on a much smaller sample) has essentially the same relationship. This implies that, given the chemical composition of the planet, one could predict the more common minerals. However, the distribution has a long tail, with 34% of the minerals having been found at only one or two locations. The model predicts that thousands more mineral species may await discovery or have formed and then been lost to erosion, burial or other processes. This implies that chance plays a significant role in the formation of rare minerals.
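The power-law fitting used in such studies can be illustrated with a standard technique: ordinary least squares on log-transformed data. Every number below is synthetic, generated purely to demonstrate the method; none are real mineral counts or element abundances.

```python
import math, random

# Generate synthetic "abundance vs. mineral count" data following a known
# power law y = 3.0 * x^0.7 with multiplicative noise, then recover (c, b).
random.seed(0)
xs = [10 ** (i / 4) for i in range(1, 17)]                          # abundances
ys = [3.0 * x ** 0.7 * math.exp(random.gauss(0, 0.1)) for x in xs]  # noisy counts

def fit_power_law(xs, ys):
    """Return (c, b) such that y ~ c * x^b, via least squares in log-log space,
    where a power law appears as a straight line of slope b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    c = math.exp(my - b * mx)
    return c, b

c, b = fit_power_law(xs, ys)
print(round(c, 2), round(b, 2))  # close to the generating values (3.0, 0.7)
```

The same log-log fit, applied to real locality counts, is what reveals the long tail of rare minerals described above.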
In another use of big data sets, network theory was applied to a dataset of carbon minerals, revealing new patterns in their diversity and distribution. The analysis can show which minerals tend to coexist and what conditions (geological, physical, chemical and biological) are associated with them. This information can be used to predict where to look for new deposits and even new mineral species.
Uses
Minerals are essential to various needs within human society: they are used as ores for the metals in commodities and machinery, as components of building materials such as limestone, marble, granite, gravel, glass, plaster, and cement, and in fertilizers to enrich the growth of agricultural crops.
Collecting
Mineral collecting is also a recreational study and collection hobby, with clubs and societies representing the field. Museums, such as the Smithsonian National Museum of Natural History Hall of Geology, Gems, and Minerals, the Natural History Museum of Los Angeles County, the Carnegie Museum of Natural History, the Natural History Museum, London, and the private Mim Mineral Museum in Beirut, Lebanon, have popular collections of mineral specimens on permanent display.
See also
List of minerals
List of minerals recognized by the International Mineralogical Association
List of mineralogists
List of publications in mineralogy
Mineral collecting
Mineral physics
Metallurgy
Petrology
Notes
References
Further reading
External links
The Virtual Museum of the History of Mineralogy
Associations
American Federation of Mineral Societies
French Society of Mineralogy and Crystallography
Geological Society of America
German Mineralogical Society
International Mineralogical Association
Italian Mineralogical and Petrological Society
Mineralogical Association of Canada
Mineralogical Society of Great Britain and Ireland
Mineralogical Society of America
Maple syrup
Maple syrup is a syrup usually made from the xylem sap of sugar maple, red maple, or black maple trees, although it can also be made from other maple species. In cold climates, these trees store starch in their trunks and roots before winter; the starch is then converted to sugar that rises in the sap in late winter and early spring. Maple trees are tapped by drilling holes into their trunks and collecting the sap, which is processed by heating to evaporate much of the water, leaving the concentrated syrup.
Maple syrup was first made and used by the Indigenous peoples of North America. The practice was adopted by European settlers, who gradually changed production methods. Technological improvements in the 1970s further refined syrup processing. Virtually all of the world's maple syrup is produced in Canada and the United States. The Canadian province of Quebec is the largest producer, responsible for 70 percent of the world's output; Canadian exports of maple syrup in 2016 were C$487 million (about US$360 million), with Quebec accounting for some 90 percent of this total.
Maple syrup is graded according to the Canada, United States, or Vermont scales based on its density and translucency. Sucrose is the most prevalent sugar in maple syrup. In Canada, syrups must be made exclusively from maple sap to qualify as maple syrup and must also be at least 66 percent sugar. In the United States, a syrup must be made almost entirely from maple sap to be labelled as "maple", though states such as Vermont and New York have more restrictive definitions.
Maple syrup is often used as a condiment for pancakes, waffles, French toast, oatmeal, or porridge. It is also used as an ingredient in baking and as a sweetener or flavouring agent. Culinary experts have praised its unique flavour, although the chemistry responsible is not fully understood.
Sources
Three species of maple trees are predominantly used to produce maple syrup: the sugar maple (Acer saccharum), the black maple (A. nigrum), and the red maple (A. rubrum), because of the high sugar content (roughly two to five percent) in the sap of these species. The black maple is included as a subspecies or variety in a more broadly viewed concept of A. saccharum, the sugar maple, by some botanists. Of these, the red maple has a shorter season because it buds earlier than sugar and black maples, which alters the flavour of the sap.
A few other species of maple (Acer) are also sometimes used as sources of sap for producing maple syrup, including the box elder or Manitoba maple (Acer negundo), the silver maple (A. saccharinum), and the bigleaf maple (A. macrophyllum). In the Southeastern United States, Florida sugar maple (Acer floridanum) is occasionally used for maple syrup production.
Similar syrups may also be produced from walnut, birch, or palm trees, among other sources.
History
Indigenous peoples
Indigenous peoples living in northeastern North America were the first groups known to have produced maple syrup and maple sugar. According to Indigenous oral traditions, as well as archaeological evidence, maple tree sap was being processed into syrup long before Europeans arrived in the region. There are no authenticated accounts of how maple syrup production and consumption began, but various legends exist; one of the most popular involves maple sap being used in place of water to cook venison served to a chief. Indigenous tribes developed rituals around sugar-making, celebrating the Sugar Moon (the first full moon of spring) with a Maple Dance. Many aboriginal dishes replaced the salt traditional in European cuisine with maple sugar or syrup.
The Algonquians recognized maple sap as a source of energy and nutrition. At the beginning of the spring thaw, they made V-shaped incisions in tree trunks; they then inserted reeds or concave pieces of bark to run the sap into clay buckets or tightly woven birch-bark baskets. The maple sap was concentrated first by leaving it exposed to the cold temperatures overnight and disposing of the layer of ice that formed on top. Following that, the sap was transported by sled to large fires where it was boiled in clay pots to produce maple syrup. Often, multiple pots were used in conjunction, with the liquid being transferred between them as it grew more concentrated. Contrary to popular belief, syrup was not produced by dropping heated stones into wooden bowls.
European colonists
In the early stages of European colonization in northeastern North America, local Indigenous peoples showed the arriving colonists how to tap the trunks of certain types of maples during the spring thaw to harvest the sap. André Thevet, the "Royal Cosmographer of France", wrote about Jacques Cartier drinking maple sap during his Canadian voyages. By 1680, European settlers and fur traders were involved in harvesting maple products. However, rather than making incisions in the bark, the Europeans used the method of drilling tapholes in the trunks with augers. Prior to the 19th century, processed maple sap was used primarily as a source of concentrated sugar, in both liquid and crystallized-solid form, as cane sugar had to be imported from the West Indies.
Maple sugaring parties typically began to operate at the start of the spring thaw in regions of woodland with sufficiently large numbers of maples. Syrup makers first bored holes in the trunks, usually more than one hole per large tree; they then inserted wooden spouts into the holes and hung a wooden bucket from the protruding end of each spout to collect the sap. The buckets were commonly made by cutting cylindrical segments from a large tree trunk and then hollowing out each segment's core from one end of the cylinder, creating a seamless, watertight container. Sap filled the buckets, and was then either transferred to larger holding vessels (barrels, large pots, or hollowed-out wooden logs), often mounted on sledges or wagons pulled by draft animals, or carried in buckets or other convenient containers. The sap-collection buckets were returned to the spouts mounted on the trees, and the process was repeated for as long as the flow of sap remained "sweet". The specific weather conditions of the thaw period were, and still are, critical in determining the length of the sugaring season. As the weather continues to warm, a maple tree's normal early spring biological process eventually alters the taste of the sap, making it unpalatable, perhaps due to an increase in amino acids.
The boiling process was very time-consuming. The harvested sap was transported back to the party's base camp, where it was then poured into large vessels (usually made from metal) and boiled to achieve the desired consistency. The sap was usually transported using large barrels pulled by horses or oxen to a central collection point, where it was processed either over a fire built out in the open or inside a shelter built for that purpose (the "sugar shack").
Since 1850
Around the time of the American Civil War (1861–1865), syrup makers started using large, flat sheet metal pans as they were more efficient for boiling than heavy, rounded iron kettles, because of a greater surface area for evaporation. Around this time, cane sugar replaced maple sugar as the dominant sweetener in the US; as a result, producers focused marketing efforts on maple syrup. The first evaporator, used to heat and concentrate sap, was patented in 1858. In 1872, an evaporator was developed that featured two pans and a metal arch or firebox, which greatly decreased boiling time. Around 1900, producers bent the tin that formed the bottom of a pan into a series of flues, which increased the heated surface area of the pan and again decreased boiling time. Some producers also added a finishing pan, a separate batch evaporator, as a final stage in the evaporation process.
Buckets began to be replaced with plastic bags, which allowed people to see at a distance how much sap had been collected. Syrup producers also began using tractors to haul vats of sap from the trees being tapped (the sugarbush) to the evaporator. Some producers adopted motor-powered tappers and metal tubing systems to convey sap from the tree to a central collection container, but these techniques were not widely used. Heating methods also diversified: modern producers use wood, oil, natural gas, propane, or steam to evaporate sap. Modern filtration methods were perfected to prevent contamination of the syrup.
A large number of technological changes took place during the 1970s. Plastic tubing systems that had been experimental since the early part of the century were perfected, and the sap came directly from the tree to the evaporator house. Vacuum pumps were added to the tubing systems, and preheaters were developed to recycle heat lost in the steam. Producers developed reverse-osmosis machines to take a portion of water out of the sap before it was boiled, increasing processing efficiency.
Improvements in tubing and vacuum pumps, new filtering techniques, "supercharged" preheaters, and better storage containers have since been developed. Research continues on pest control and improved woodlot management. In 2009, researchers at the University of Vermont unveiled a new type of tap that prevents backflow of sap into the tree, reducing bacterial contamination and preventing the tree from attempting to heal the bore hole. Experiments show that it may be possible to use saplings in a plantation instead of mature trees, dramatically boosting productivity per acre. As a result of the smaller tree diameter, milder diurnal temperature swings are needed for the tree to freeze and thaw, which enables sap production in milder climatic conditions outside of northeastern North America.
Processing
Open pan evaporation methods have been streamlined since colonial days, but remain basically unchanged. Sap must first be collected and boiled down to obtain syrup. Maple syrup is made by boiling between 20 and 50 volumes of sap (depending on its concentration) over an open fire until 1 volume of syrup is obtained, usually at a temperature over the boiling point of water. As the boiling point of water varies with changes in air pressure, the correct value for pure water is determined at the place where the syrup is being produced, each time evaporation is begun and periodically throughout the day. Syrup can be boiled entirely over one heat source or can be drawn off into smaller batches and boiled at a more controlled temperature. Defoamers are often added during boiling.
Boiling the syrup is a tightly controlled process, which ensures appropriate sugar content. Syrup boiled too long will eventually crystallize, whereas under-boiled syrup will be watery, and will quickly spoil. The finished syrup has a density of 66° on the Brix scale (a hydrometric scale used to measure sugar solutions). The syrup is then filtered to remove precipitated "sugar sand", crystals made up largely of sugar and calcium malate. These crystals are not toxic, but create a "gritty" texture in the syrup if not filtered out.
In addition to open pan evaporation methods, many large producers use the more fuel-efficient reverse osmosis procedure to separate the water from the sap. Smaller producers can also use batchwise recirculating reverse osmosis, with the most energy-efficient operation taking the sugar concentration to 25% prior to boiling.
The higher the sugar content of the sap, the less sap is needed to obtain the same amount of syrup: 57 units of sap with 1.5 percent sugar content will yield 1 unit of syrup, but only 25 units of sap with a 3.5 percent sugar content are needed to obtain one unit of syrup. The sap's sugar content is highly variable and will fluctuate even within the same tree.
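These figures follow a rule of thumb used by producers, often called the "Rule of 86": the units of sap needed per unit of syrup are roughly 86 divided by the sap's sugar percentage. A minimal sketch, with the constant treated as approximate:

```python
def sap_per_syrup(sugar_percent):
    """Approximate units of sap needed per unit of finished syrup, using the
    producers' "Rule of 86" (86 divided by the sap's sugar percentage).
    The constant 86 is an approximation, not an exact physical value."""
    return 86.0 / sugar_percent

# Matches the figures in the text: ~57 units at 1.5% sugar, ~25 units at 3.5%.
for pct in (1.5, 2.0, 3.5):
    print(f"{pct}% sap -> {sap_per_syrup(pct):.0f} units of sap per unit of syrup")
```

The rule encodes the fact that finished syrup has a roughly fixed sugar density (about 66° Brix), so the concentration factor scales inversely with the sap's starting sugar content.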
The filtered syrup is graded and packaged while still hot, usually at a temperature of or greater. The containers are turned over after being sealed to sterilize the cap with the hot syrup. Packages can be made of metal, glass, or coated plastic, depending on volume and target market. The syrup can also be heated longer and further processed to create a variety of other maple products, including maple sugar, maple butter or cream, and maple candy or taffy.
Off-flavours
Off-flavours can sometimes develop during the production of maple syrup, resulting from contaminants in the boiling apparatus (such as disinfectants), microorganisms, fermentation products, metallic can flavours, and "buddy sap", an off-flavour occurring late in the syrup season when tree budding has begun. In some circumstances, it is possible to remove off-flavours through processing.
Production
Maple syrup production is centred in northeastern North America; however, given the correct weather conditions, it can be made wherever suitable species of maple trees grow, such as New Zealand, where there are efforts to establish commercial production.
A maple syrup production farm is called a "sugarbush" or "sugarwood". Sap is often boiled in a "sugar house" (also known as a "sugar shack", "sugar shanty", or cabane à sucre), a building louvered at the top to vent the steam from the boiling sap.
Maples are usually tapped beginning at 30 to 40 years of age. Each tree can support between one and three taps, depending on its trunk diameter. The average maple tree will produce of sap per season, up to per day. This is roughly equal to seven percent of its total sap. Tap seasons typically happen during late winter and spring and usually last for four to eight weeks, though the exact dates depend on the weather, location, and climate. The timing of the season and the region of maximum sap flow are both expected to be significantly altered by climate change by 2100.
During the day, starch stored in the roots for the winter rises through the trunk as sugary sap, allowing it to be tapped. Sap is not tapped at night because the temperature drop inhibits sap flow, although taps are typically left in place overnight. Some producers also tap in autumn, though this practice is less common than spring tapping. Maples can continue to be tapped for sap until they are over 100 years old.
Commerce
Until the 1930s, the United States produced most of the world's maple syrup. Today, after rapid growth in the 1990s, Canada produces more than 80 percent of the world's maple syrup, producing about in 2016. The vast majority of this comes from the province of Quebec, which is the world's largest producer, with about 70 percent of global production. Canada exported more than C$362 million of maple syrup in 2016. In 2015, 64 percent of Canadian maple syrup exports went to the United States (a value of C$229 million), 8 percent to Germany (C$31 million), 6 percent to Japan (C$26 million), and 5 percent to the United Kingdom (C$16 million).
In 2015, Quebec accounted for 90.83 percent of maple syrup produced in Canada, followed by New Brunswick at 4.83 percent, Ontario at 4.14 percent, and Nova Scotia at 0.2 percent. However, 94.28 percent of exported Canadian maple syrup originated from Quebec, whereas 4.91 percent of exported syrup originated from New Brunswick, and the remaining 0.81 percent from all other provinces. Ontario had the most maple syrup farms in Canada outside of Quebec, with 2,240 maple syrup producers in 2011. It was followed by New Brunswick, with 191 maple syrup producers, and Nova Scotia, with 152 maple syrup producers.
As of 2016, Quebec had some 7,300 producers working with 13,500 farmers, collectively making over of syrup. Production in Quebec is controlled through a supply management system, with producers receiving quota allotments from the government sanctioned Federation of Quebec Maple Syrup Producers (Fédération des producteurs acéricoles du Québec, FPAQ), which also maintains reserves of syrup, although there is a black-market trade in Quebec product. In 2017, the FPAQ mandated increased output of maple syrup production, attempting to establish Quebec's dominance in the world market.
The Canadian provinces of Manitoba and Saskatchewan produce maple syrup using the sap of the box elder or Manitoba maple (Acer negundo). In 2011, there were 67 maple syrup producers in Manitoba, and 24 in Saskatchewan. A Manitoba maple tree's yield is usually less than half that of a similar sugar maple tree. Manitoba maple syrup has a slightly different flavour from sugar-maple syrup, because it contains less sugar and the tree's sap flows more slowly. British Columbia is home to a growing maple sugar industry using sap from the bigleaf maple, which is native to the West Coast of the United States and Canada. In 2011, there were 82 maple syrup producers in British Columbia.
Vermont is the biggest US producer, with over during the 2019 season, followed by New York with and Maine with . Wisconsin, Ohio, New Hampshire, Michigan, Pennsylvania, Massachusetts and Connecticut all produced marketable quantities of maple syrup.
Maple syrup has been produced on a small scale in some other countries, notably Japan and South Korea. However, in South Korea in particular, it is traditional to consume maple sap, called gorosoe, instead of processing it into syrup.
Markings
Under Canadian Maple Product Regulations, containers of maple syrup must include the words "maple syrup", its grade name and net quantity in litres or millilitres, on the main display panel with a minimum font size of 1.6 mm. If the maple syrup is of Canada Grade A level, the name of the colour class must appear on the label in both English and French. Also, the lot number or production code, and either: (1) the name and address of the sugar bush establishment, packing or shipper establishment, or (2) the first dealer and the registration number of the packing establishment, must be labeled on any display panel other than the bottom.
Grades
Following an effort from the International Maple Syrup Institute (IMSI) and many maple syrup producer associations, both Canada and the United States have altered their laws regarding the classification of maple syrup to be uniform. Whereas in the past each state or province had its own laws on the classification of maple syrup, now those laws define a unified grading system. This had been a work in progress for several years, and most of the finalization of the new grading system was made in 2014. The Canadian Food Inspection Agency (CFIA) announced in the Canada Gazette on 28 June 2014 that rules for the sale of maple syrup would be amended to include new descriptors, at the request of the IMSI.
As of 31 December 2014, the CFIA and as of 2 March 2015, the United States Department of Agriculture (USDA) Agricultural Marketing Service issued revised standards intended to harmonize Canada-United States regulations on the classification of maple syrup as follows:
Grade A
Golden Colour and Delicate Taste
Amber Colour and Rich Taste
Dark Colour and Robust Taste
Very Dark Colour and Strong Taste
Processing Grade
Substandard
As long as maple syrup does not have an off-flavour, is of a uniform colour, and is free from turbidity and sediment, it can be labelled as one of the A grades. If it exhibits any problems, it does not meet Grade A requirements, and then must be labelled as Processing Grade maple syrup and may not be sold in containers smaller than . If maple syrup does not meet the requirements of Processing Grade maple syrup (including a fairly characteristic maple taste), it is classified as Substandard.
This grading system was accepted and made law by most maple-producing states and provinces, and became compulsory in Canada as of 13 December 2016. Vermont, in an effort to "jump-start" the new grading regulations, adopted the new grading system as of 1 January 2014, after the grade changes passed the Senate and House in 2013. Maine passed a bill to take effect as soon as both Canada and the United States adopted the new grades. In New York, the new grade changes became law on 1 January 2015. New Hampshire did not require legislative approval and so the new grade laws became effective as of 16 December 2014, and producer compliance was required as of 1 January 2016.
Golden and Amber grades typically have a milder flavour than Dark and Very Dark, which have an intense maple flavour. The darker grades of syrup are used primarily for cooking and baking, although some specialty dark syrups are produced for table use. Syrup harvested earlier in the season tends to yield a lighter colour. With the new grading system, the classification of maple syrup depends ultimately on its internal transmittance at a 560 nm wavelength through a 10 mm sample. Golden must have 75 percent or more transmittance, Amber must have 50.0 to 74.9 percent transmittance, Dark must have 25.0 to 49.9 percent transmittance, and Very Dark is any product having less than 25.0 percent transmittance.
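The transmittance thresholds above amount to a simple classifier; a minimal sketch (the function name is hypothetical):

```python
def grade_a_colour_class(transmittance_pct: float) -> str:
    """Classify Grade A maple syrup by its internal transmittance at
    560 nm through a 10 mm sample, using the harmonized thresholds."""
    if not 0.0 <= transmittance_pct <= 100.0:
        raise ValueError("transmittance must be a percentage")
    if transmittance_pct >= 75.0:
        return "Golden Colour and Delicate Taste"
    if transmittance_pct >= 50.0:
        return "Amber Colour and Rich Taste"
    if transmittance_pct >= 25.0:
        return "Dark Colour and Robust Taste"
    return "Very Dark Colour and Strong Taste"

print(grade_a_colour_class(80.0))  # Golden Colour and Delicate Taste
print(grade_a_colour_class(30.0))  # Dark Colour and Robust Taste
```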
Old grading system
In Canada, maple syrup was classified prior to 31 December 2014, by the Canadian Food Inspection Agency (CFIA) as one of three grades, each with several colour classes:
Canada No. 1, including
Extra Light,
Light, and
Medium;
No. 2 Amber; and
No. 3 Dark or any other ungraded category.
Producers in Ontario or Quebec may have followed either federal or provincial grading guidelines. Quebec's and Ontario's guidelines differed slightly from the federal:
there were two "number" categories in Quebec
Number 1, with four colour classes, and
Number 2, with five colour classes.
As in Quebec, Ontario's producers had two "number" grades:
Number 1, with three colour classes; and
Number 2, with one colour class, which was typically referred to as "Ontario Amber" when produced and sold in that province only.
A typical year's yield for a maple syrup producer will be about 25 to 30 percent of each of the #1 colours, 10 percent #2 Amber, and 2 percent #3 Dark.
The United States used different grading standards — some states still do as they await state regulation. Maple syrup was divided into two major grades:
Grade A:
Light Amber (sometimes known as Fancy),
Medium Amber, and
Dark Amber; and
Grade B.
In Massachusetts, the Grade B was renamed as Grade A Very Dark, Strong Taste.
The Vermont Agency of Agriculture Food and Markets used a similar colour-based grading system, roughly equivalent to the US one, especially for lighter syrups, but using letters: "AA", "A", etc. The Vermont grading system differed from the US system in maintaining a slightly higher standard of product density (measured on the Baumé scale). New Hampshire maintained a similar standard, but not a separate state grading scale. The Vermont-graded product had 0.9 percent more sugar and less water in its composition than the US-graded product. One grade of syrup not for table use, called commercial or Grade C, was also produced under the Vermont system.
Packing regulations
In Canada, the packing of maple syrup must follow the "Packing" conditions stated in the Maple Products Regulations, or utilize the equivalent Canadian or imported grading system.
As stated in the Maple Products Regulations, Canadian maple syrup can be classified as "Canadian Grade A" and "Canadian Processing Grade". Any maple syrup container under these classifications should be filled to at least 90% of the bottle size while still containing the net quantity of syrup product as stated on the label. Every container of maple syrup must be new if it has a capacity of 5 litres or less or is marked with a grade name. Every container of maple sugar must also be new if it has a capacity of less than 5 kg or is either exported out of Canada or conveyed from one province to another.
Each maple syrup product must be verified clean if it follows a grade name or if it is exported out of the province in which it was originally manufactured.
Nutrition
The basic ingredient in maple syrup is the sap from the xylem of sugar maple or various other species of maple trees. It consists primarily of sucrose and water, with small amounts of the monosaccharides glucose and fructose from the invert sugar created in the boiling process.
In a 100g amount, maple syrup provides 260 calories and is composed of 32 percent water by weight, 67 percent carbohydrates (90 percent of which are sugars), and no appreciable protein or fat (table). Maple syrup is generally low in overall micronutrient content, although manganese and riboflavin are at high levels along with moderate amounts of zinc and calcium (right table). It also contains trace amounts of amino acids which increase in content as sap flow occurs.
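The stated figures can be cross-checked with the standard Atwater factor of 4 kcal per gram of carbohydrate (the factor is a general nutrition convention, not taken from the table):

```python
# Per 100 g of maple syrup, as stated above:
carbs_g = 67.0             # 67 percent carbohydrates by weight
sugars_g = carbs_g * 0.90  # 90 percent of the carbohydrates are sugars

# Atwater general factor: ~4 kcal per gram of carbohydrate; protein
# and fat are negligible here, so carbohydrates supply nearly all energy.
estimated_kcal = carbs_g * 4.0

print(round(sugars_g, 1))  # 60.3 g of sugars per 100 g
print(estimated_kcal)      # 268.0 -- close to the stated 260 kcal
```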
Maple syrup contains a wide variety of polyphenols and volatile organic compounds, including vanillin, hydroxybutanone, lignans, propionaldehyde, and numerous organic acids. Not all of the compounds responsible for the distinctive flavour of maple syrup are known, although the primary flavour-contributing compounds are maple furanone (5-ethyl-3-hydroxy-4-methyl-2(5H)-furanone), strawberry furanone, and maltol. New compounds have been identified in maple syrup, one of which is quebecol, a natural phenolic compound created when the maple sap is boiled to create syrup. Its sweetness derives from a high content of sucrose (99% of total sugars). Its brown colour – a significant factor in the appeal and quality grading of maple syrup – develops during thermal evaporation.
One author described maple syrup as "a unique ingredient, smooth- and silky-textured, with a sweet, distinctive flavour – hints of caramel with overtones of toffee will not do – and a rare colour, amber set alight. Maple flavour is, well, maple flavour, uniquely different from any other." Agriculture Canada has developed a "flavour wheel" that details 91 unique flavours that can be present in maple syrup. These flavours are divided into 13 families: vanilla, burnt, milky, fruity, floral, spicy, foreign (deterioration or fermentation), foreign (environment), maple, confectionery, plant (herbaceous), plant (forest, humus or cereals), and plant (ligneous). These flavours are evaluated using a procedure similar to wine tasting. Other culinary experts praise its unique flavour.
Maple syrup and its various artificial imitations are widely used as toppings for pancakes, waffles, and French toast in North America. They can also be used to flavour a variety of foods, including fritters, ice cream, hot cereal, fresh fruit, bacon, and sausages. It is also used as sweetener for granola, applesauce, baked beans, candied sweet potatoes, winter squash, cakes, pies, breads, tea, coffee, and hot toddies.
Imitations
In Canada, maple syrup must be made entirely from maple sap, and syrup must have a density of 66° on the Brix scale to be marketed as maple syrup. In the United States, maple syrup must be made almost entirely from maple sap, although small amounts of substances such as salt may be added. Labeling laws prohibit imitation syrups from having "maple" in their names unless the finished product contains 10 percent or more of natural maple syrup.
"Maple-flavoured" syrups include maple syrup, but may contain additional ingredients. "Pancake syrup", "waffle syrup", "table syrup", and similarly named syrups are substitutes which are less expensive than maple syrup. In these syrups, the primary ingredient is most often high-fructose corn syrup flavoured with sotolon; they have little genuine maple content, and are usually thickened above the viscosity of maple syrup.
Imitation syrups are generally cheaper than maple syrup, with less natural flavour. In the United States, consumers generally prefer imitation syrups, likely because of the significantly lower cost and sweeter flavour; they typically cost about , whereas authentic maple syrup costs as of 2015.
In 2016, maple syrup producers from nine US states petitioned the Food and Drug Administration (FDA) to regulate labeling of products containing maple syrup or using the word "maple" in manufactured products, indicating that imitation maple products contained insignificant amounts of natural maple syrup. In September 2016, the FDA published a consumer advisory to carefully inspect the ingredient list of products labeled as "maple".
Cultural significance
Maple products are considered emblematic of Canada, and are frequently sold in tourist shops and airports as souvenirs from Canada. The sugar maple's leaf has come to symbolize Canada, and is depicted on the country's flag. Several US states, including West Virginia, New York, Vermont, and Wisconsin, have the sugar maple as their state tree. A scene of sap collection is depicted on the Vermont state quarter, issued in 2001.
Maple syrup and maple sugar were used during the American Civil War and by abolitionists in the years before the war because most cane sugar and molasses were produced by Southern slaves. Because of food rationing during the Second World War, people in the northeastern United States were encouraged to stretch their sugar rations by sweetening foods with maple syrup and maple sugar, and recipe books were printed to help housewives employ this alternative source.
See also
Canadian cuisine
List of foods made from maple
List of syrups
Mapleine
Great Canadian Maple Syrup Heist
Treacle
External links
Maple Syrup Quality Control Manual, University of Maine
UVM Center for Digital Initiatives: The Maple Research Collection by the Vermont Agricultural Experiment Station
US Food and Drug Administration description of table syrup
Matthew

Matthew may refer to:
Matthew (given name)
Matthew (surname)
Matthew (ship), the replica of the ship sailed by John Cabot in 1497
Matthew (album), a 2000 album by rapper Kool Keith
Matthew (elm cultivar), a cultivar of the Chinese Elm Ulmus parvifolia
Hurricane Matthew, a former hurricane in the Atlantic Ocean.
Christianity
Matthew the Apostle, one of the apostles of Jesus
Gospel of Matthew, a book of the Bible
See also
Matt (given name), the diminutive form of Matthew
Mathew, alternative spelling of Matthew
Matthews (disambiguation)
Matthew effect
Matthew 18 process
Tropical Storm Matthew (disambiguation) |
Male (disambiguation)

Male, in biology, is the half of a reproductive system that produces sperm cells.
Male may also refer to:
Gender
Male plant, a plant that gives rise to male gametophytes
Male pregnancy, the incubation of embryos or fetuses by male members of some species
Man, an adult male human being
Boy, a young male human, usually a child or adolescent
Gentleman, any man of good, courteous conduct
Male connector, in hardware and electronics
Masculine gender, in languages with grammatical gender
Male as norm, perception the corresponding female category is a derivation
Art and entertainment
Male (film), a 2015 Indian film
Male (Foetus album), a 1992 live album by Foetus
Male (Natalie Imbruglia album), a 2015 studio album by Natalie Imbruglia
, a German band
Il Male, an Italian satirical magazine published in Italy between 1978 and 1982
Places
Malé, the capital of the Maldives
Malé Island, the island the city is on
Malé Atoll, the atoll the island is in
Malé, Italy, a municipality in the province of Trento, Italy
Małe, Łódź Voivodeship, a village in central Poland
Małe, Pomeranian Voivodeship, a village in northern Poland
Mâle, Orne, a village in France
Male, Belgium, a quarter in Bruges
Male, Vikramgad, a village in Maharashtra, India
Male (woreda), a woreda in Ethiopia
Males, Crete, a village in Greece
Maleš (mountain), a mountain in Bulgaria and Northern Macedonia
Male, Mauritania, a town in Mauritania
Other uses
Male language, several languages
Maale people, an ethnic group of Ethiopia
Male (surname)
Medium-altitude long-endurance unmanned aerial vehicle, an unmanned aerial vehicle
malE, a bacterial gene encoding maltose-binding protein
People with the name
Male (surname) (including a list of people with the name)
Male Rao Holkar (1745–1767), Maharaja of Indore
The Malês, as in the Malê revolt
Male Sa'u (born 1987), Japanese professional rugby union footballer
See also
Female (disambiguation)
Male and Female (disambiguation)
Masculine (disambiguation)
Feminine (disambiguation)
Mail (disambiguation)
Mele (disambiguation) |
Macron (diacritic)

A macron () is a diacritical mark: it is a straight bar placed above a letter, usually a vowel. Its name derives from Ancient Greek (makrón) "long", since it was originally used to mark long or heavy syllables in Greco-Roman metrics. It now more often marks a long vowel. In the International Phonetic Alphabet, the macron is used to indicate a mid-tone; the sign for a long vowel is instead a modified triangular colon .
The opposite is the breve , which marks a short or light syllable or a short vowel.
Uses
Syllable weight
In Greco-Roman metrics and in the description of the metrics of other literatures, the macron was introduced and is still widely used to mark a long (heavy) syllable. Even relatively recent classical Greek and Latin dictionaries are still concerned with indicating only the length (weight) of syllables; that is why most still do not indicate the length of vowels in syllables that are otherwise metrically determined. Many textbooks about Ancient Rome and Greece use the macron, even if it was not actually used at that time (an apex was used if vowel length was marked in Latin).
Vowel length
The following languages or transliteration systems use the macron to mark long vowels:
Slavicists use the macron to indicate a non-tonic long vowel, or a non-tonic syllabic liquid, such as on l, lj, m, n, nj, and r. Languages with this feature include standard and dialect varieties of Serbo-Croatian, Slovene, and Bulgarian.
Transcriptions of Arabic typically use macrons to indicate long vowels – (alif when pronounced ), (waw, when pronounced or ), and (ya', when pronounced or ). Thus the Arabic word (three) is transliterated thalāthah.
Transcriptions of Sanskrit typically use a macron over ā, ī, ū, ṝ, and ḹ in order to mark a long vowel (e and o are always long and consequently do not need any macron).
In Latin, many of the more recent dictionaries and learning materials use the macron as the modern equivalent of the ancient Roman apex to mark long vowels. Any of the six vowel letters (ā, ē, ī, ō, ū, ӯ) can bear it. It is sometimes used in conjunction with the breve, especially to distinguish the short vowels and from their semi-vowel counterparts and , originally, and often to this day, spelt with the same letters. However, the older of these editions are not always explicit on whether they mark long vowels or heavy syllables – a confusion that is even found in some modern learning materials. In addition, most of the newest academic publications use both the macron and the breve sparingly, mainly when vowel length is relevant to the discussion.
In romanization of classical Greek, the letters η (eta) and ω (omega) are transliterated, respectively, as ē and ō, representing the long vowels of classical Greek, whereas the short vowels ε (epsilon) and ο (omicron) are always transliterated as plain e and o. The other long vowel phonemes don't have dedicated letters in the Greek alphabet, being indicated by digraphs (transliterated likewise as digraphs) or by the letters α, ι, υ – represented as ā, ī, ū. The same three letters are transliterated as plain a, i, u when representing short vowels.
The Hepburn romanization system of Japanese, for example, kōtsū (, ) "traffic" as opposed to kotsu (, ) "bone" or "knack".
The Syriac language uses macrons to indicate long vowels in its romanized transliteration: ā for , ē for , ū for and ō for .
Baltic languages and Baltic-Finnic languages:
Latvian. ā, ē, ī, ū are separate letters but are given the same position in collation as a, e, i, u respectively. Ō was also used in Latvian, but it was discarded as of 1946. Some usage remains in Latgalian.
Lithuanian. ū is a separate letter but is given the same position in collation as the unaccented u. It marks a long vowel; other long vowels are indicated with an ogonek (which used to indicate nasalization, but it no longer does): ą, ę, į, ų and o being always long in Lithuanian except for some recent loanwords. For the long counterpart of i, y is used.
Livonian. ā, ǟ, ē, ī, ō, ȱ, ȭ and ū are separate letters that sort in alphabetical order immediately after a, ä, e, i, o, ȯ, õ, and u, respectively.
Samogitian. ā, ē, ė̄, ī, ū and ō are separate letters that sort in alphabetical order immediately after a, e, ė, i, u and o respectively.
Transcriptions of Nahuatl, the Aztecs' language, spoken in Mexico. When the Spanish conquistadors arrived, they wrote the language in their own alphabet without distinguishing long vowels. Over a century later, in 1645, Horacio Carochi defined macrons to mark long vowels ā, ē, ī and ō, and short vowels with grave (`) accents. This is rare nowadays since many people write Nahuatl without any orthographic sign and with the letters k, s and w, not present in the original alphabet.
Modern transcriptions of Old English, for long vowels.
Latin transliteration of Pali and Sanskrit, and in the IAST and ISO 15919 transcriptions of Indo-Aryan and Dravidian languages.
Polynesian languages:
Cook Islands Māori. In Cook Islands Māori, the macron or mākarōna is not commonly used in writing, but is used in references and teaching materials for those learning the language.
Hawaiian. The macron is called kahakō, and it indicates vowel length, which changes meaning and the placement of stress.
Māori. In modern written Māori, the macron is used to designate long vowels, with the trema mark sometimes used if the macron is unavailable (e.g. "Mäori"). The Māori word for macron is tohutō. The term pōtae ("hat") is also used. In the past, writing in Māori either did not distinguish vowel length, or doubled long vowels (e.g. "Maaori"), as some iwi dialects still do.
Niuean. In Niuean, "popular spelling" does not worry too much about vowel quantity (length), so the macron is primarily used in scholarly study of the language.
Tahitian. The use of the macron is comparatively recent in Tahitian. The Fare Vānaa or Académie Tahitienne (Tahitian Academy) recommends using the macron, called the tārava, to represent long vowels in written text, especially for scientific or teaching texts and it has widespread acceptance. (In the past, written Tahitian either did not distinguish vowel length, or used multiple other ways).
Tongan and Samoan. The macron is called the toloi/fakamamafa or fa'amamafa, respectively. Its usage is similar to that in Māori, including its substitution by a trema. Its usage is not universal in Samoan, but recent academic publications and advanced study textbooks promote its use.
The macron is used in Fijian language dictionaries, in instructional materials for non-Fijian speakers, and in books and papers on Fijian linguistics. It is not typically used in Fijian publications intended for fluent speakers, where context is usually sufficient for a reader to distinguish between heteronyms.
Both Cyrillic and Latin transcriptions of Udege.
The Latin and Cyrillic alphabet transcriptions of the Tsebari dialect of Tsez.
In western Cree, Sauk, and Saulteaux, the Algonquianist Standard Roman Orthography (SRO) indicates long vowels either with a circumflex ⟨â ê î ô⟩ or with a macron ⟨ā ē ī ō⟩.
Tone
The following languages or alphabets use the macron to mark tones:
In the International Phonetic Alphabet, a macron over a vowel indicates a mid-level tone.
In Pinyin, the official Romanization of Mandarin Chinese, macrons over a, e, i, o, u, ü (ā, ē, ī, ō, ū, ǖ) indicate the high level tone of Mandarin Chinese. The alternative to the macron is the number 1 after the syllable (for example, tā = ta1).
Similarly in the Yale romanization of Cantonese, macrons over a, e, i, o, u, m, n (ā, ē, ī, ō, ū, m̄, n̄) indicate the high level tone of Cantonese. Like Mandarin, the alternative to the macron is the number 1 after the syllable (for example, tā = ta1).
In Pe̍h-ōe-jī romanization of Hokkien, macrons over a, e, i, m, n, o, o͘, u, (ā, ē, ī, m̄, n̄, ō, ō͘, ū) indicate the mid level tone ("light departing" or 7th tone) of Hokkien.
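The macron-versus-tone-number equivalence noted for Pinyin and Yale can be sketched with Unicode: NFD normalization decomposes a macron-bearing vowel such as ā into its base letter plus the combining macron, U+0304. A minimal first-tone-only sketch (the helper name is hypothetical):

```python
import unicodedata

def macron_to_number(syllable: str) -> str:
    """Convert a first-tone syllable written with a macron (e.g. 'tā')
    to tone-number notation ('ta1'). NFD decomposition separates each
    macron-bearing vowel into its base letter plus U+0304."""
    decomposed = unicodedata.normalize("NFD", syllable)
    if "\u0304" in decomposed:
        return decomposed.replace("\u0304", "") + "1"
    return syllable  # no macron: return unchanged

print(macron_to_number("tā"))  # ta1
print(macron_to_number("ma"))  # ma
```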
Omission
Sometimes the macron marks an omitted n or m, like the tilde:
In Old English texts a macron above a letter indicates the omission of an m or n that would normally follow that letter.
In older handwriting such as the German Kurrentschrift, the macron over an a-e-i-o-u or ä-ö-ü stood for an n, or over an m or an n meant that the letter was doubled. This continued into print in English in the sixteenth century, and to some extent in German. Over a u at the end of a word, the macron indicated um as a form of scribal abbreviation.
Letter extension
In romanizations of Hebrew, the macron below is typically used to mark the begadkefat consonant lenition. However, for typographical reasons a regular macron is used on p and g instead: p̄, ḡ.
The macron is used in the orthography of a number of vernacular languages of the Solomon Islands and Vanuatu, particularly those first transcribed by Anglican missionaries. The macron has no unique value, and is simply used to distinguish between two different phonemes.
Thus, in several languages of the Banks Islands, including Mwotlap, the simple m stands for , but an m with a macron (m̄) is a rounded labial-velar nasal ; while the simple n stands for the common alveolar nasal , an n with macron (n̄) represents the velar nasal ; the vowel ē stands for a (short) higher by contrast with plain e ; likewise ō contrasts with plain o .
In Hiw orthography, the consonant r̄ stands for the prestopped velar lateral approximant .
In Araki, the same symbol r̄ encodes the alveolar trill – by contrast with r, which encodes the alveolar flap .
In Bislama (orthography before 1995), Lamenu and Lewo, a macron is used on two letters . m̄ represents , and p̄ represents . The orthography after 1995 (which has no diacritics) has these written as mw and pw.
In Kokota, ḡ is used for the velar stop , but g without macron is the voiced velar fricative .
In Marshallese, a macron is used on four letters – – whose pronunciations differ from the unmarked . Marshallese uses a vertical vowel system with three to four vowel phonemes, but traditionally their allophones have been written out, so vowel letters with macron are used for some of these allophones. Though the standard diacritic involved is a macron, there are no other diacritics used above letters, so in practice other diacritics can and have been used in less polished writing or print, yielding nonstandard letters like , depending on displayability of letters in computer fonts.
The letter is pronounced , the palatalized allophone of the phoneme .
The letter represents the velar nasal phoneme and the labialized velar nasal phoneme , depending on context. The standard letter does not exist as a precombined glyph in Unicode, so the nonstandard variant is often used in its place.
The letter is pronounced or , which are the unrounded velarized allophones of the phonemes and respectively.
The letter is pronounced , the unrounded velarized allophone of the phoneme .
In Obolo, the simple n stands for the common alveolar nasal , while an n with macron (n̄) represents the velar nasal .
Other uses
In older German and in the German Kurrent handwriting, as well as older Danish, a macron is used on some consonants, especially n and m, as a short form for a double consonant (for example, n̄ instead of nn).
In Russian cursive, as well as in some others based on the Cyrillic script (for example, Bulgarian), a lowercase Т looks like a lowercase m, and a macron is often used to distinguish it from Ш, which looks like a lowercase w (see Т). Some writers also underline the letter ш to reduce ambiguity further.
Also, in some instances, a diacritic will be written like a macron, although it represents another diacritic whose standard form is different:
In some Finnish, Estonian and Swedish comic books that are hand-lettered, or in handwriting, a macron-style umlaut is used for ä or ö (also õ and ü in Estonian), sometimes known colloquially as a "lazy man's umlaut". This can also be seen in some modern handwritten German.
In Norwegian, ū, ā, ī, ē and ō can be used for decorative purposes both in handwritten and typed Bokmål and Nynorsk, or to denote vowel length, as in dū (you), lā (infinitive form of "to let"), lēser (present form of "to read") and lūft (air). The diacritic is entirely optional, carries no IPA value and is seldom used in modern Norwegian outside of handwriting.
In informal Hungarian handwriting, a macron is often a substitute for either a double acute accent or an umlaut (e.g., ö or ő). Because of this ambiguity, using it is often regarded as bad practice.
In informal handwriting, the Spanish ñ is sometimes written with a macron-shaped tilde: (n̄).
Medicine
In medical prescriptions and other handwritten notes, macrons mean:
ā, before, abbreviating Latin ante
c̄, with, abbreviating Latin cum
p̄, after, abbreviating Latin post
q̄, every, abbreviating Latin quaque (an inflected form of quisque)
s̄, without, abbreviating Latin sine
x̄, except
Mathematics and science
The overline is a typographical symbol similar to the macron, used in a number of ways in mathematics and science. For example, it is used to represent complex conjugation:
and to represent a line segment in geometry (e.g., ), sample means in statistics (e.g., ) and negations in logic. It is also used in Hermann–Mauguin notation.
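Both of these uses can be made concrete; a small illustrative Python sketch (the sample values are arbitrary):

```python
import statistics

# Complex conjugation, conventionally written with an overline (z̄).
z = 3 + 4j
z_conj = z.conjugate()
print(z_conj)  # (3-4j)

# The sample mean, conventionally written x̄.
xs = [2.0, 4.0, 6.0]
x_bar = statistics.mean(xs)
print(x_bar)  # 4.0
```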
Music
In music, the tenuto marking resembles the macron.
The macron is also used in German lute tablature to distinguish repeating alphabetic characters.
Letters with macron
Technical notes
The Unicode Standard encodes combining and precomposed macron characters:
Macron-related Unicode characters not included in the table above:
CJK fullwidth variety:
Kazakhstani tenge
Overlines
Characters using a macron below instead of above
Tone contour transcription characters incorporating a macron:
Two intonation marks historically used by Antanas Baranauskas for Lithuanian dialectology:
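For reference, the core macron code points can be inspected programmatically; a short Python sketch:

```python
import unicodedata

# The combining mark and one precomposed macron letter.
print(hex(ord("\u0304")), unicodedata.name("\u0304"))  # 0x304 COMBINING MACRON
print(hex(ord("\u0101")), unicodedata.name("\u0101"))  # 0x101 LATIN SMALL LETTER A WITH MACRON

# NFD splits the precomposed letter back into base letter + combining macron.
print(unicodedata.normalize("NFD", "\u0101") == "a\u0304")  # True
```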
In LaTeX a macron is created with the command "\=", for example: M\=aori for Māori.
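In text mode, \= places a macron over the following letter; in math mode, \bar and \overline produce the overline forms used in mathematics. A minimal sketch:

```latex
\documentclass{article}
\begin{document}
M\=aori               % macron accent in text mode
$\bar{x}$             % short bar over a single symbol
$\overline{AB}$       % overline spanning several symbols
\end{document}
```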
In OpenOffice, if the extension Compose Special Characters is installed, a macron may be added by following the letter with a hyphen and pressing the user's predefined shortcut key for composing special characters. A macron may also be added by following the letter with the character's four-digit hex code, and pressing the user's predefined shortcut key for adding Unicode characters.
See also
Macron below
Vinculum (symbol)
References
External links
Diacritics Project – All you need to design a font with correct accents
Kupu o te Rā: How to set up the keyboard to type macrons in various operating systems.
Latin-script diacritics
Greek-script diacritics
Cyrillic-script diacritics
Poetic rhythm
Mosque

A mosque (literally "place of ritual prostration"), also called masjid, is a place of worship for Muslims. Any act of worship that follows the Islamic rules of prayer can be said to create a mosque, whether or not it takes place in a special building. Informal and open-air places of worship are called musalla, while mosques used for communal prayer on Friday are known as jāmiʿ. Mosque buildings typically contain an ornamental niche (mihrab) set into the wall that indicates the direction of Mecca (qiblah), ablution facilities and minarets from which calls to prayer are issued. The pulpit (minbar), from which the Friday (jumu'ah) sermon (khutba) is delivered, was in earlier times characteristic of the central city mosque, but has since become common in smaller mosques. Mosques typically have segregated spaces for men and women. This basic pattern of organization has assumed different forms depending on the region, period and denomination.
Mosques commonly serve as locations for prayer, Ramadan vigils, funeral services, marriage and business agreements, alms collection and distribution, as well as homeless shelters. Historically, mosques have served as a community center, a court of law, and a religious school. In modern times, they have also preserved their role as places of religious instruction and debate. Special importance is accorded to the Great Mosque of Mecca (centre of the hajj), the Prophet's Mosque in Medina (burial place of Muhammad) and Al-Aqsa Mosque in Jerusalem (believed to be the site of Muhammad's ascent to heaven).
With the spread of Islam, mosques multiplied across the Islamic world. Sometimes churches and temples were converted into mosques, which influenced Islamic architectural styles. While most pre-modern mosques were funded by charitable endowments, increasing government regulation of large mosques has been countered by a rise of privately funded mosques, many of which serve as bases for different Islamic revivalist currents and social activism. Mosques have played a number of political roles. The rates of mosque attendance vary widely depending on the region.
Etymology
The word 'mosque' entered the English language from the French word mosquée, probably derived from Italian moschea (a variant of Italian moscheta), from either Middle Armenian մզկիթ (mzkit‘), Medieval Greek (masgídion), or Spanish mezquita, from Arabic masjid (meaning "site of prostration (in prayer)" and hence a place of worship), either from Nabataean masgĕdhā́ or from the Arabic verb sajada (meaning "to bow down in prayer"), probably ultimately from Nabataean Arabic masgĕdhā́ or Aramaic sĕghēdh.
History
Origins
According to non-Muslim scholars, Islam started during the lifetime of Muhammad in the 7th century CE, and so did architectural components such as the mosque. In this case, either the Mosque of the Companions in the Eritrean city of Massawa, or the Quba Mosque in the Hejazi city of Medina (the first structure built by Muhammad upon his emigration from Mecca in 622 CE), would be the first mosque that was built in the history of Islam.
Other scholars reference Islamic tradition and passages of the Quran, according to which Islam as a religion precedes Muhammad and includes previous prophets such as Abraham. Abraham in Islam is credited by Muslims with having built the Ka'bah ('Cube') in Mecca, and consequently its sanctuary, Al-Masjid Al-Haram (The Sacred Mosque), which is seen by Muslims as the first mosque that existed. A Hadith in Sahih al-Bukhari states that the sanctuary of the Kaaba was the first mosque on Earth, with the second mosque being Al-Aqsa Mosque in Jerusalem, which is also associated with Abraham. Since as early as 638 AD, the Sacred Mosque of Mecca has been expanded on several occasions to accommodate the increasing number of Muslims who either live in the area or make the annual pilgrimage known as Hajj to the city.
Either way, after the Quba Mosque, Muhammad went on to establish another mosque in Medina, which is now known as Al-Masjid an-Nabawi (The Prophet's Mosque). Built on the site of his home, Muhammad participated in the construction of the mosque himself and helped pioneer the concept of the mosque as the focal point of the Islamic city. The Prophet's mosque introduced some of the features still common in today's mosques, including the niche at the front of the prayer space known as the mihrab and the tiered pulpit called the minbar. The mosque was also constructed with a large courtyard, a motif common among mosques built since then.
Diffusion and evolution
The Great Mosque of Kairouan in present-day Tunisia was reportedly the first mosque built in northwest Africa, with its present form (dating from the 9th century) serving as a model for other Islamic places of worship in the Maghreb. It was the first to incorporate a square minaret (as opposed to the more common circular minaret) and includes naves akin to a basilica. Those features can also be found in Andalusian mosques, including the Grand Mosque of Cordoba, as they tended to reflect the architecture of the Moors instead of their Visigoth predecessors. Still, some elements of Visigothic architecture, like horseshoe arches, were infused into the mosque architecture of Spain and the Maghreb.
The first mosque in East Asia was reportedly established in the 8th century in Xi'an. However, the Great Mosque of Xi'an, whose current building dates from the 18th century, does not replicate the features often associated with mosques elsewhere. Minarets were initially prohibited by the state. Following traditional Chinese architecture, the Great Mosque of Xi'an, like many other mosques in eastern China, resembles a pagoda, with a green roof instead of the yellow roof common on imperial structures in China. Mosques in western China were more likely to incorporate elements, like domes and minarets, traditionally seen in mosques elsewhere.
A similar integration of foreign and local influences could be seen on the Indonesian islands of Sumatra and Java, where mosques, including the Demak Great Mosque, were first established in the 15th century. Early Javanese mosques took design cues from Hindu, Buddhist, and Chinese architectural influences, with tall timber, multi-level roofs similar to the pagodas of Balinese Hindu temples; the ubiquitous Islamic dome did not appear in Indonesia until the 19th century. In turn, the Javanese style influenced the styles of mosques in Indonesia's Austronesian neighbors—Malaysia, Brunei, and the Philippines.
Muslim empires were instrumental in the evolution and spread of mosques. Although mosques were first established in India during the 7th century, they were not commonplace across the subcontinent until the arrival of the Mughals in the 16th and 17th centuries. Reflecting their Timurid origins, Mughal-style mosques included onion domes, pointed arches, and elaborate circular minarets, features common in the Persian and Central Asian styles. The Jama Masjid in Delhi and the Badshahi Mosque in Lahore, built in a similar manner in the mid-17th century, remain two of the largest mosques on the Indian subcontinent.
The Umayyad Caliphate was particularly instrumental in spreading Islam and establishing mosques within the Levant, as the Umayyads constructed among the most revered mosques in the region — Al-Aqsa Mosque and Dome of the Rock in Jerusalem, and the Umayyad Mosque in Damascus. The designs of the Dome of the Rock and the Umayyad Mosque were influenced by Byzantine architecture, a trend that continued with the rise of the Ottoman Empire.
Several of the early mosques in the Ottoman Empire were originally churches or cathedrals from the Byzantine Empire, with the Hagia Sophia (one of those converted cathedrals) informing the architecture of mosques from after the Ottoman conquest of Constantinople. Still, the Ottomans developed their own architectural style characterized by large central rotundas (sometimes surrounded by multiple smaller domes), pencil-shaped minarets, and open facades.
Mosques from the Ottoman period are still scattered across Eastern Europe, but the most rapid growth in the number of mosques in Europe has occurred within the past century as more Muslims have migrated to the continent. Many major European cities are home to mosques, like the Grand Mosque of Paris, that incorporate domes, minarets, and other features often found with mosques in Muslim-majority countries. The first mosque in North America was founded by Albanian Americans in 1915, but the continent's oldest surviving mosque, the Mother Mosque of America, was built in 1934. As in Europe, the number of American mosques has rapidly increased in recent decades as Muslim immigrants, particularly from South Asia, have come to the United States. More than forty percent of mosques in the United States were constructed after 2000.
Inter-religious conversion
According to early Muslim historians, towns that surrendered without resistance and made treaties with the Muslims were allowed to retain their churches, and the towns captured by Muslims had many of their churches converted to mosques. One of the earliest examples of these kinds of conversions was in Damascus, Syria, where in 705 the Umayyad caliph Al-Walid I bought the church of St. John from the Christians and had it rebuilt as a mosque in exchange for building a number of new churches for the Christians in Damascus. Overall, Abd al-Malik ibn Marwan (Al-Walid's father) is said to have transformed 10 churches in Damascus into mosques.
The process of turning churches into mosques was especially intensive in the villages where most of the inhabitants converted to Islam. The Abbasid caliph al-Ma'mun turned many churches into mosques. Ottoman Turks converted nearly all churches, monasteries, and chapels in Constantinople, including the famous Hagia Sophia, into mosques immediately after capturing the city in 1453. In some instances mosques have been established on the places of Jewish or Christian sanctuaries associated with Biblical personalities who were also recognized by Islam.
Mosques have also been converted for use by other religions, notably in southern Spain, following the conquest of the Moors in 1492. The most prominent of them is the Great Mosque of Cordoba, itself constructed on the site of a church demolished during the period of Muslim rule. Outside of the Iberian Peninsula, such instances also occurred in southeastern Europe once regions were no longer under Muslim rule.
Religious functions
The masjid jāmiʿ, a central mosque, can play a role in religious activities such as teaching the Quran and educating future imams.
Prayers
There are two holidays (Eids) in the Islamic calendar: ʿĪd al-Fiṭr and ʿĪd al-Aḍḥā, during which there are special prayers held at mosques in the morning. These Eid prayers are supposed to be offered in large groups, and so, in the absence of an outdoor Eidgah, a large mosque will normally host them for their congregants as well as the congregants of smaller local mosques. Some mosques will even rent convention centers or other large public buildings to hold the large number of Muslims who attend. Mosques, especially those in countries where Muslims are the majority, will also host Eid prayers outside in courtyards, town squares or on the outskirts of town in an Eidgah.
Ramadan
Islam's holiest month, Ramaḍān, is observed through many events. As Muslims must fast during the day during Ramadan, mosques will host Ifṭār dinners after sunset and the fourth required prayer of the day, Maghrib. Food is provided, at least in part, by members of the community, thereby creating daily potluck dinners. Because of the community contribution necessary to serve iftar dinners, mosques with smaller congregations may not be able to host the iftar dinners daily. Some mosques will also hold Suḥūr meals before dawn for congregants attending the first required prayer of the day, Fajr. As with iftar dinners, congregants usually provide the food for suhoor, although able mosques may provide food instead. Mosques will often invite poorer members of the Muslim community to share in beginning and breaking the fasts, as providing charity during Ramadan is regarded in Islam as especially honorable.
Following the last obligatory daily prayer (ʿIshāʾ), special optional Tarāwīḥ prayers are offered in larger mosques. During each night of prayers, which can last for up to two hours, usually one member of the community who has memorized the entire Quran (a Hafiz) will recite a segment of the book. Sometimes, several such people (not necessarily of the local community) take turns to do this. During the last ten days of Ramadan, larger mosques will host all-night programs to observe Laylat al-Qadr, the night Muslims believe that Muhammad first received Quranic revelations. On that night, between sunset and sunrise, mosques employ speakers to educate congregants in attendance about Islam. Mosques or the community usually provide meals periodically throughout the night.
During the last ten days of Ramadan, larger mosques within the Muslim community will host Iʿtikāf, a practice in which at least one Muslim man from the community must participate. Muslims performing itikaf are required to stay within the mosque for ten consecutive days, often in worship or learning about Islam. As a result, the rest of the Muslim community is responsible for providing the participants with food, drinks, and whatever else they need during their stay.
Charity
The third of the Five Pillars of Islam states that Muslims are required to give approximately one-fortieth of their wealth to charity as Zakat. Since mosques form the center of Muslim communities, they are where Muslims go to both give zakat and, if necessary, collect it. Before the holiday of Eid ul-Fitr, mosques also collect a special zakat that is supposed to assist in helping poor Muslims attend the prayers and celebrations associated with the holiday.
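The one-fortieth figure corresponds to a rate of 2.5%. As a toy illustration only (the nisab threshold used here is a placeholder value, not from the source, and real zakat rules are more detailed):

```python
def zakat_due(wealth: float, nisab: float) -> float:
    """Return one-fortieth (2.5%) of wealth if it meets the nisab threshold, else 0."""
    return wealth * 0.025 if wealth >= nisab else 0.0

print(zakat_due(10_000, 5_000))  # 250.0
print(zakat_due(3_000, 5_000))   # 0.0 (below the threshold)
```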
Frequency of attendance
The frequency with which Muslims attend mosque services varies greatly around the world. In some countries, weekly attendance at religious services is common among Muslims, while in others attendance is rare. A study of American Muslims did not find differences in mosque attendance by gender or age.
Architecture
Styles
Arab-plan or hypostyle mosques are the earliest type of mosques, pioneered under the Umayyad Dynasty. These mosques have square or rectangular plans with an enclosed courtyard (sahn) and covered prayer hall. Historically, in the warm Middle Eastern and Mediterranean climates, the courtyard served to accommodate the large number of worshippers during Friday prayers. Most early hypostyle mosques had flat roofs on prayer halls, which required the use of numerous columns and supports. One of the most notable hypostyle mosques is the Great Mosque of Cordoba in Spain, the building being supported by over 850 columns. Frequently, hypostyle mosques have outer arcades (riwaq) so that visitors can enjoy the shade. Arab-plan mosques were constructed mostly under the Umayyad and Abbasid dynasties; subsequently, however, the simplicity of the Arab plan limited the opportunities for further development, the mosques consequently losing popularity.
The first departure within mosque design started in Persia (Iran). The Persians had inherited a rich architectural legacy from the earlier Persian dynasties, and they began incorporating elements from earlier Parthian and Sassanid designs into their mosques, influenced by buildings such as the Palace of Ardashir and the Sarvestan Palace. Thus, Islamic architecture witnessed the introduction of such structures as domes and large, arched entrances, referred to as iwans. During Seljuq rule, as Islamic mysticism was on the rise, the four-iwan arrangement took form. The four-iwan format, finalized by the Seljuqs, and later inherited by the Safavids, firmly established the courtyard façade of such mosques, with the towering gateways at every side, as more important than the actual buildings themselves. They typically took the form of a square-shaped central courtyard with large entrances at each side, giving the impression of gateways to the spiritual world. The Persians also introduced Persian gardens into mosque designs. Soon, a distinctly Persian style of mosques started appearing that would significantly influence the designs of later Timurid, and also Mughal, mosque designs.
The Ottomans introduced central dome mosques in the 15th century. These mosques have a large dome centered over the prayer hall. In addition to having a large central dome, a common feature is smaller domes that exist off-center over the prayer hall or throughout the rest of the mosque, where prayer is not performed. This style was heavily influenced by Byzantine architecture with its use of large central domes.
Mosques built in Southeast Asia often represent the Indonesian-Javanese style architecture, which are different from the ones found throughout the Greater Middle East. The ones found in Europe and North America appear to have various styles but most are built on Western architectural designs, some are former churches or other buildings that were used by non-Muslims. In Africa, most mosques are old but the new ones are built in imitation of those of the Middle East. This can be seen in the Abuja National Mosque in Nigeria and others.
Islam forbids figurative art, on the grounds that the artist must not imitate God's creation. Mosques are, therefore, decorated with abstract patterns and beautiful inscriptions. Decoration is often concentrated around doorways and the miḥrāb. Tiles are used widely in mosques. They lend themselves to pattern-making, can be made with beautiful subtle colors, and can create a cool atmosphere, an advantage in the hot Arab countries. Quotations from the Quran often adorn mosque interiors. These texts are meant to inspire people by their beauty, while also reminding them of the words of Allah.
Prayer hall
The prayer hall, also known as the muṣallá, rarely has furniture; chairs and pews are generally absent from the prayer hall so as to allow as many worshipers as possible to line the room. Some mosques have Islamic calligraphy and Quranic verses on the walls to assist worshippers in focusing on the beauty of Islam and its holiest book, the Quran, as well as for decoration.
Often, a limited part of the prayer hall is sanctified formally as a masjid in the sharia sense (although the term masjid is also used for the larger mosque complex as well). Once designated, there are onerous limitations on the use of this formally designated masjid, and it may not be used for any purpose other than worship; restrictions that do not necessarily apply to the rest of the prayer area, and to the rest of the mosque complex (although such uses may be restricted by the conditions of the waqf that owns the mosque).
In many mosques, especially the early congregational mosques, the prayer hall is in the hypostyle form (the roof held up by a multitude of columns). One of the finest examples of the hypostyle-plan mosques is the Great Mosque of Kairouan (also known as the Mosque of Uqba) in Tunisia.
Usually opposite the entrance to the prayer hall is the qiblah wall, the visually emphasized area inside the prayer hall. The qiblah wall should, in a properly oriented mosque, be set perpendicular to a line leading to Mecca, the location of the Kaaba. Congregants pray in rows parallel to the qiblah wall and thus arrange themselves so they face Mecca. In the qiblah wall, usually at its center, is the mihrab, a niche or depression indicating the direction of Mecca. Usually the mihrab is not occupied by furniture either. A raised minbar or pulpit is located to the right side of the mihrab for a Khaṭīb, or some other speaker, to offer a Khuṭbah (Sermon) during Friday prayers. The mihrab serves as the location where the imam leads the five daily prayers on a regular basis.
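The orientation described above can be computed as the initial great-circle bearing from a given location to the Kaaba (approximately 21.4225° N, 39.8262° E). A minimal Python sketch; the function name and example coordinates are illustrative, not from the source:

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # approximate coordinates of the Kaaba

def qibla_bearing(lat: float, lon: float) -> float:
    """Initial great-circle bearing (degrees from true north) toward the Kaaba."""
    phi1, phi2 = math.radians(lat), math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon)
    x = math.sin(dlon)
    y = math.cos(phi1) * math.tan(phi2) - math.sin(phi1) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# From Istanbul (about 41.01 N, 28.98 E) the qibla points roughly south-southeast.
print(round(qibla_bearing(41.01, 28.98)))
```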
To the left of the mihrab, in the front left corner of the mosque, there is sometimes a kursu, a small elevated platform (rarely with a chair or other type of seat) used for less formal preaching and speeches.
Makhphil
Women who pray in mosques are separated from men there. Their part for prayer is called the makhphil or maqfil. It is located above the main prayer hall, as a gallery or platform elevated at the back, reached by stairs and set back relative to the main hall below. It usually has a perforated fence at the front, through which the imam (and the male worshippers in the main hall) can be partially seen. The makhphil is used entirely by men when Jumu'ah is practised, due to lack of space.
Mihrab
A miḥrāb, also spelled mehrab, is a semicircular niche in the wall of a mosque that faces the qiblah (i.e., the "front" of the mosque); the imam stands in this niche and leads prayer. Given that the imam typically stands alone in the frontmost row, the niche's practical effect is to save otherwise unused space. The minbar is a pulpit from which the Friday sermon is delivered. While the minbar of Muhammad was a simple chair, later it became larger and attracted artistic attention. Some remained made of wood, albeit exquisitely carved, while others were made of marble and featured friezes.
Minarets
A common feature in mosques is the minaret, the tall, slender tower that usually is situated at one of the corners of the mosque structure. The top of the minaret is always the highest point in mosques that have one, and often the highest point in the immediate area. The tallest minaret in the world is located at the Hassan II Mosque in Casablanca, Morocco. Completed in 1993, it was designed by Michel Pinseau. The first minaret was constructed in 665 in Basra during the reign of the Umayyad caliph Muawiyah I. Muawiyah encouraged the construction of minarets, as they were supposed to bring mosques on par with Christian churches with their bell towers. Consequently, mosque architects borrowed the shape of the bell tower for their minarets, which were used for essentially the same purpose: calling the faithful to prayer. The oldest standing minaret in the world is the minaret of the Great Mosque of Kairouan in Tunisia. Built between the 8th and the 9th century, it is a massive square tower consisting of three superimposed tiers of gradual size and decor.
Before the five required daily prayers, a Mu’adhdhin calls the worshippers to prayer from the minaret. In many countries, like Singapore, where Muslims are not the majority, mosques are prohibited from loudly broadcasting the Adhān (Call to Prayer), although it is supposed to be said loudly to the surrounding community. The adhan is required before every prayer. However, nearly every mosque assigns a muezzin for each prayer to say the adhan as it is a recommended practice or Sunnah of the Islamic prophet Muhammad. At mosques that do not have minarets, the adhan is called instead from inside the mosque or somewhere else on the ground. The Iqâmah, which is similar to the adhan and proclaimed right before the commencement of prayers, is usually not proclaimed from the minaret even if a mosque has one.
Domes
The domes, often placed directly above the main prayer hall, may signify the vaults of the heaven and sky. As time progressed, domes grew, from occupying a small part of the roof near the mihrab to encompassing the whole roof above the prayer hall. Although domes normally took on the shape of a hemisphere, the Mughals in India popularized onion-shaped domes in South Asia which has gone on to become characteristic of the Arabic architectural style of dome. Some mosques have multiple, often smaller, domes in addition to the main large dome that resides at the center.
Ablution facilities
As ritual purification precedes all prayers, mosques often have ablution fountains or other facilities for washing in their entryways or courtyards. However, worshippers at much smaller mosques often have to use restrooms to perform their ablutions. In traditional mosques, this function is often elaborated into a freestanding building in the center of a courtyard. This desire for cleanliness extends to the prayer halls where shoes are disallowed to be worn anywhere other than the cloakroom. Thus, foyers with shelves to put shoes and racks to hold coats are commonplace among mosques.
Contemporary features
Modern mosques have a variety of amenities available to their congregants. As mosques are supposed to appeal to the community, they may also have additional facilities, from health clinics and libraries to gymnasiums, to serve the community.
Symbols
Certain symbols are represented in a mosque's architecture to allude to different aspects of the Islamic religion. One of these symbols is the spiral. The "cosmic spiral" found in designs and on minarets is a reference to heaven, as it has "no beginning and no end". Mosques also often have floral patterns or images of fruit and vegetables. These are allusions to the paradise after death.
Rules and etiquette
Mosques, in accordance with Islamic practices, institute a number of rules intended to keep Muslims focused on worshiping God. While there are several rules, such as those regarding not allowing shoes in the prayer hall, that are universal, there are many other rules that are dealt with and enforced in a variety of ways from mosque to mosque.
Prayer leader (Imam)
Appointment of a prayer leader is considered desirable, but not always obligatory. The permanent prayer leader (imam) must be a free honest individual and is authoritative in religious matters. In mosques constructed and maintained by the government, the prayer leader is appointed by the ruler; in private mosques, however, appointment is made by members of the congregation through majority voting. According to the Hanafi school of Islamic jurisprudence, the individual who built the mosque has a stronger claim to the title of imam, but this view is not shared by the other schools.
Leadership at prayer falls into three categories, depending on the type of prayer: five daily prayers, Friday prayer, or optional prayers. According to the Hanafi and Maliki schools of Islamic jurisprudence, appointment of a prayer leader for Friday service is mandatory because otherwise the prayer is invalid. The Shafi'i and Hanbali schools, however, argue that the appointment is not necessary and the prayer is valid as long as it is performed in a congregation. A slave may lead a Friday prayer, but Muslim authorities disagree over whether the job can be done by a minor. An imam appointed to lead Friday prayers may also lead at the five daily prayers; Muslim scholars agree that the leader appointed for the five daily services may lead the Friday service as well.
All Muslim authorities hold the consensus opinion that only men may lead prayer for men. Nevertheless, women prayer leaders are allowed to lead prayer in front of all-female congregations.
Cleanliness
All mosques have rules regarding cleanliness, as it is an essential part of the worshippers' experience. Muslims before prayer are required to cleanse themselves in an ablution process known as wudu. However, even for those who enter the prayer hall of a mosque without the intention of praying, there are still rules that apply. Shoes must not be worn inside the carpeted prayer hall. Some mosques will also extend that rule to include other parts of the facility even if those other locations are not devoted to prayer. Congregants and visitors to mosques are themselves supposed to be clean. It is also undesirable to come to the mosque after eating something that smells, such as garlic.
Dress
Islam requires that its adherents wear clothes that portray modesty. Men are supposed to come to the mosque wearing loose and clean clothes that do not reveal the shape of the body. Likewise, it is recommended that women at a mosque wear loose clothing that covers to the wrists and ankles, and cover their heads with a Ḥijāb or other covering. Many Muslims, regardless of their ethnic background, wear Middle Eastern clothing associated with Arabic Islam on special occasions and at prayers at mosques.
Concentration
As mosques are places of worship, those within the mosque are required to remain respectful to those in prayer. Loud talking within the mosque, as well as discussion of topics deemed disrespectful, is forbidden in areas where people are praying. In addition, it is disrespectful to walk in front of or otherwise disturb Muslims in prayer. The walls within the mosque have few items, except for possibly Islamic calligraphy, so Muslims in prayer are not distracted. Muslims are also discouraged from wearing clothing with distracting images and symbols so as not to divert the attention of those standing behind them during prayer. In many mosques, even the carpeted prayer area has no designs, its plainness helping worshippers to focus.
Gender separation
There is nothing written in the Qur'an about the issue of space in mosques and gender separation. However, traditional rules have segregated women and men. By traditional rules, women are most often told to occupy the rows behind the men. In part, this was a practical matter, as the traditional posture for prayer (kneeling on the floor, head to the ground) made mixed-gender prayer uncomfortably revealing for many women and distracting for some men. Traditionalists argue that Muhammad preferred women to pray at home rather than at a mosque, citing a ḥadīth in which Muhammad supposedly said: "The best mosques for women are the inner parts of their houses," although women were active participants in the mosque started by Muhammad. Muhammad told Muslims not to forbid women from entering mosques. The second Sunni caliph 'Umar at one time prohibited women from attending mosques, especially at night, because he feared they might be sexually harassed or assaulted by men, so he required them to pray at home. Sometimes a special part of the mosque was railed off for women; for example, the governor of Mecca in 870 had ropes tied between the columns to make a separate place for women.
Many mosques today will put the women behind a barrier or partition or in another room. Mosques in South and Southeast Asia put men and women in separate rooms, as the divisions were built into them centuries ago. In nearly two-thirds of American mosques, women pray behind partitions or in separate areas, not in the main prayer hall; some mosques do not admit women at all due to the lack of space and the fact that some prayers, such as the Friday Jumuʻah, are mandatory for men but optional for women. Although there are sections exclusively for women and children, the Grand Mosque in Mecca is desegregated.
Non-Muslims
Under most interpretations of sharia, non-Muslims are permitted to enter mosques provided that they respect the place and the people inside it. A dissenting opinion and minority view is presented by followers of the Maliki school of Islamic jurisprudence, who argue that non-Muslims may not be allowed into mosques under any circumstances.
The Quran addresses the subject of non-Muslims, and particularly polytheists, in mosques in two verses in its ninth chapter, Sura At-Tawba. The seventeenth verse of the chapter prohibits those who join gods with Allah (polytheists) from maintaining mosques.
The twenty-eighth verse of the same chapter is more specific, as it only considers polytheists in the Masjid al-Haram in Mecca.
According to Ahmad ibn Hanbal, these verses were followed to the letter in the time of Muhammad, when Jews and Christians, considered monotheists, were still allowed into Al-Masjid Al-Haram. However, the Umayyad caliph Umar II later forbade non-Muslims from entering mosques, and his ruling remains in practice in present-day Saudi Arabia. Today, the decision on whether non-Muslims should be allowed to enter mosques varies. With few exceptions, mosques in the Arabian Peninsula as well as Morocco do not allow entry to non-Muslims. For example, the Hassan II Mosque in Casablanca is one of only two mosques in Morocco currently open to non-Muslims.
However, there are also many other places in the West as well as the Islamic world where non-Muslims are welcome to enter mosques. Most mosques in the United States, for example, report receiving non-Muslim visitors every month. Many mosques throughout the United States welcome non-Muslims as a sign of openness to the rest of the community as well as to encourage conversions to Islam.
In modern-day Saudi Arabia, the Grand Mosque and all of Mecca are open only to Muslims. Likewise, Al-Masjid Al-Nabawi and the city of Medina that surrounds it are also off-limits to those who do not practice Islam. For mosques in other areas, it has most commonly been taken that non-Muslims may only enter mosques if granted permission to do so by Muslims, and if they have a legitimate reason. All entrants regardless of religious affiliation are expected to respect the rules and decorum for mosques.
In modern Turkey, non-Muslim tourists are allowed to enter any mosque, but there are some strict rules. Visiting a mosque is allowed only between prayers; visitors are required to wear long trousers and not to wear shoes; women must cover their heads; visitors are not allowed to interrupt praying Muslims, especially by taking photos of them; no loud talk is allowed; and no references to other religions are allowed (no crosses on necklaces, no cross gestures, etc.). Similar rules apply to mosques in Malaysia, where larger mosques that are also tourist attractions (such as the Masjid Negara) provide robes and headscarves for visitors who are deemed inappropriately attired.
In certain times and places, non-Muslims were expected to behave a certain way in the vicinity of a mosque: in some Moroccan cities, Jews were required to remove their shoes when passing by a mosque; in 18th-century Egypt, Jews and Christians had to dismount before several mosques in veneration of their sanctity.
The association of the mosque with education remained one of its main characteristics throughout history, and the school became an indispensable appendage to the mosque. From the earliest days of Islam, the mosque was the center of the Muslim community, a place for prayer, meditation, religious instruction, political discussion, and a school. Anywhere Islam took hold, mosques were established; and basic religious and educational instruction began.
Role in contemporary society
Political mobilization
The late 20th century saw an increase in the number of mosques used for political purposes. While some governments in the Muslim world have attempted to limit the content of Friday sermons to strictly religious topics, there are also independent preachers who deliver khutbas that address social and political issues, often in emotionally charged terms. Common themes include social inequalities, necessity of jihad in the face of injustice, and the universal struggle between good and evil. In Islamic countries like Bangladesh, Pakistan, Iran, and Saudi Arabia, political subjects are preached by imams at Friday congregations on a regular basis. Mosques often serve as meeting points for political opposition in times of crisis.
Countries with a minority Muslim population are more likely than Muslim-majority countries of the Greater Middle East to use mosques as a way to promote civic participation. Studies of US Muslims have consistently shown a positive correlation between mosque attendance and political involvement. Some of the research connects civic engagement specifically with mosque attendance for social and religious activities other than prayer. American mosques host voter registration and civic participation drives that promote involving Muslims, who are often first- or second-generation immigrants, in the political process. As a result of these efforts as well as attempts at mosques to keep Muslims informed about the issues facing the Muslim community, regular mosque attendants are more likely to participate in protests, sign petitions, and otherwise be involved in politics. Research on Muslim civic engagement in other Western countries "is less conclusive but seems to indicate similar trends".
Role in violent conflicts
As they are considered important to the Muslim community, mosques, like other places of worship, can be at the heart of social conflicts. The Babri Mosque in India was the subject of such a conflict up until the early 1990s, when it was demolished. On December 6, 1992, before a mutual solution could be devised, the mosque was destroyed, as it had allegedly been built by Babur on the site of a previous Hindu temple marking the birthplace of Rama. The controversy surrounding the mosque was directly linked to rioting in Bombay (present-day Mumbai) as well as bombings in 1993 that killed 257 people.
Bombings in February 2006 and June 2007 seriously damaged Iraq's al-Askari Mosque and exacerbated existing tensions. Other mosque bombings in Iraq, both before and after the February 2006 bombing, have been part of the conflict between the country's groups of Muslims. However, mosque bombings have not been exclusive to Iraq; in June 2005, a suicide bomber killed at least 19 people at an Afghan Shia mosque near Jade Maivand. In April 2006, two explosions occurred at India's Jama Masjid. Following the al-Askari Mosque bombing in Iraq, imams and other Islamic leaders used mosques and Friday prayers as vehicles to call for calm and peace in the midst of widespread violence.
A 2005 study indicated that while support for suicide bombings is not correlated with personal devotion to Islam among Palestinian Muslims, it is correlated with mosque attendance because "participating in communal religious rituals of any kind likely encourages support for self-sacrificing behaviors that are done for the collective good."
Following the September 11 attacks, several American mosques were targeted in attacks ranging from simple vandalism to arson. Furthermore, the Jewish Defense League was suspected of plotting to bomb the King Fahd Mosque in Culver City, California. Similar attacks occurred throughout the United Kingdom following the 7 July 2005 London bombings. Outside the Western world, in June 2001, the Hassan Bek Mosque was the target of vandalism and attacks by hundreds of Israelis after a suicide bomber killed 19 people in a night club in Tel Aviv. Although mosque attendance is highly encouraged for men, it is permitted to stay at home when one feels at risk of Islamophobic persecution.
Saudi influence
Although the Saudi involvement in Sunni mosques around the world can be traced back to the 1960s, it was not until later in the 20th century that the government of Saudi Arabia became a large influence in foreign Sunni mosques. Beginning in the 1980s, the Saudi Arabian government began to finance the construction of Sunni mosques in countries around the world. An estimated US$45 billion has been spent by the Saudi Arabian government financing mosques and Sunni Islamic schools in foreign countries. Ain al-Yaqeen, a Saudi newspaper, reported in 2002 that Saudi funds may have contributed to building as many as 1,500 mosques and 2,000 other Islamic centers.
Saudi citizens have also contributed significantly to mosques in the Islamic world, especially in countries where they see Muslims as poor and oppressed. Following the fall of the Soviet Union, in 1992, mosques in war-torn Afghanistan saw many contributions from Saudi citizens. The King Fahd Mosque in Culver City, California and the Islamic Cultural Center of Italy in Rome represent two of Saudi Arabia's largest investments in foreign mosques as former Saudi king Fahd bin Abdul Aziz al-Saud contributed US$8 million and US$50 million to the two mosques, respectively.
Political controversy
In the Western world, and in the United States in particular, anti-Muslim sentiment and targeted domestic policy have created challenges for mosques and those looking to build them. There has been government and police surveillance of mosques in the US, as well as local attempts to ban mosques and block their construction, even though a 2018 study by the Institute for Social Policy and Understanding found that most Americans oppose banning the building of mosques (79%) and oppose the surveillance of U.S. mosques (63%).
Since 2017, Chinese authorities have destroyed or damaged two-thirds of the mosques in China's Xinjiang province. Ningxia officials were notified on 3 August 2018 that the Weizhou Grand Mosque would be forcibly demolished because it had not received the proper permits before construction. Officials in the town said that the mosque had not been given proper building permits because it was built in a Middle Eastern style and includes numerous domes and minarets. Residents of Weizhou alerted one another through social media and ultimately halted the demolition through public demonstrations.
See also
Holiest sites in Islam
Jama'at Khana
Lists of mosques
Explanatory notes
References
Citations
General bibliography
Further reading
Campanini, Massimo, Mosque, in Muhammad in History, Thought, and Culture: An Encyclopedia of the Prophet of God (2 vols.), Edited by C. Fitzpatrick and A. Walker, Santa Barbara, ABC-CLIO, 2014.
External links
Images of mosques from throughout the world, from the Aga Khan Documentation Center at MIT
Devostock public domain images: images of mosques from around the world
Islamic holy places
Building types
Islamic architecture
Mosque architecture |
19895 | https://en.wikipedia.org/wiki/Molecular%20cloud | Molecular cloud | A molecular cloud, sometimes called a stellar nursery (if star formation is occurring within), is a type of interstellar cloud, the density and size of which permit absorption nebulae, the formation of molecules (most commonly molecular hydrogen, H2), and the formation of H II regions. This is in contrast to other areas of the interstellar medium that contain predominantly ionized gas.
Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H2 is carbon monoxide (CO). The ratio between CO luminosity and H2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies.
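In practice, the assumed-constant CO-to-H2 ratio is applied through an "X-factor" that converts an integrated CO line intensity into an H2 column density. The sketch below uses the commonly quoted Milky Way average for X_CO; both that value and the sample line intensity are illustrative assumptions, not measurements from this article.

```python
# Hedged sketch: turn an integrated CO(1-0) line intensity into an H2
# column density via the "X-factor". The X_CO value below is the
# commonly quoted Milky Way average; it and the sample intensity are
# assumptions of this example.

X_CO = 2.0e20  # H2 molecules per cm^2, per (K km/s) of CO intensity


def h2_column_density(w_co):
    """H2 column density (cm^-2) for an integrated CO intensity (K km/s)."""
    return X_CO * w_co


# A line of sight with W_CO ~ 50 K km/s maps to roughly 1e22 H2 per cm^2,
# a typical column through a giant molecular cloud.
n_h2 = h2_column_density(50.0)
```

The doubts mentioned above for other galaxies amount to saying that X_CO is not actually universal, so any mass derived this way inherits that systematic uncertainty.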
Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse.
History
The formation of molecular clouds from interstellar dust and hydrogen gas is linked to the formation of the Solar System, approximately 4.6 billion years ago.
Occurrence
Within the Milky Way, molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet they are also the densest part of the medium, comprising roughly half of the total gas mass interior to the Sun's galactic orbit. The bulk of the molecular gas is contained in a ring between from the center of the Milky Way (the Sun is about 8.5 kiloparsecs from the center). Large-scale CO maps of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region.
Perpendicular to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height, Z, of approximately 50 to 75 parsecs, much thinner than the warm atomic (Z from 130 to 400 parsecs) and warm ionized (Z around 1000 parsecs) gaseous components of the ISM. The exception to the ionized-gas distribution is the H II regions, which are bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars; as such, they have approximately the same vertical distribution as the molecular gas.
This distribution of molecular gas is averaged out over large distances; however, the small scale distribution of the gas is highly irregular with most of it concentrated in discrete clouds and cloud complexes.
Types of molecular cloud
Giant molecular clouds
A vast assemblage of molecular gas that has more than 10 thousand times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are around 15 to 600 light-years (5 to 200 parsecs) in diameter, with typical masses of 10 thousand to 10 million solar masses. Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average density of a GMC is a hundred to a thousand times as great. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps.
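The densities and sizes quoted above imply the quoted masses; a quick arithmetic sketch makes the connection explicit. The 100-parsec diameter and 100 molecules per cubic centimetre used here are illustrative round numbers inside the stated ranges, not data for any particular cloud.

```python
import math

# Back-of-envelope check of the GMC masses quoted above. The radius and
# density are illustrative values inside the quoted ranges.
PC_CM = 3.086e18       # one parsec in centimetres
M_H2 = 2 * 1.6726e-24  # mass of one H2 molecule in grams
M_SUN = 1.989e33       # one solar mass in grams

radius_cm = 50 * PC_CM                            # a cloud 100 pc across
volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm**3
n_h2 = 100.0                                      # H2 molecules per cm^3
mass_msun = volume_cm3 * n_h2 * M_H2 / M_SUN
# mass_msun comes out near 2.6 million solar masses, inside the
# "10 thousand to 10 million" range given for GMCs.
```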
Filaments are ubiquitous in molecular clouds. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments fragment. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with a spacing of about 0.15 parsec, comparable to the filament's inner width.
The densest parts of the filaments and clumps are called "molecular cores", while the densest molecular cores are called "dense molecular cores" and have densities in excess of 10⁴ to 10⁶ particles per cubic centimetre. Observationally, typical molecular cores are traced with CO and dense molecular cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae.
GMCs are so large that "local" ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion Molecular Cloud (OMC) or the Taurus Molecular Cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space.
Small molecular clouds
Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
High-latitude diffuse molecular clouds
In 1984 IRAS identified a new type of diffuse molecular cloud. These were diffuse filamentary clouds that are visible at high galactic latitudes. These clouds have a typical density of 30 particles per cubic centimetre.
Processes
Star formation
The formation of stars occurs exclusively within molecular clouds. This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting "outward" to prevent a collapse. There is observational evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressure. The evidence comes from the fact that the "turbulent" velocities inferred from CO linewidth scale in the same manner as the orbital velocity (a virial relation).
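The virial argument above is often summarized with a dimensionless virial parameter comparing turbulent kinetic energy with self-gravity. The hedged sketch below evaluates it for an assumed, round-numbered cloud; the linewidth, radius, and mass are illustrative choices, not values from this article.

```python
# Illustrative virial check: alpha = 5 * sigma^2 * R / (G * M); values
# near 1 mean the cloud is roughly bound by its own gravity. The cloud
# parameters below are assumed round numbers, not measurements.
G = 6.674e-8      # gravitational constant, cgs units
PC_CM = 3.086e18  # one parsec in centimetres
M_SUN = 1.989e33  # one solar mass in grams


def virial_parameter(sigma_kms, radius_pc, mass_msun):
    """Dimensionless virial parameter for a uniform spherical cloud."""
    sigma = sigma_kms * 1.0e5  # km/s -> cm/s
    return 5.0 * sigma**2 * (radius_pc * PC_CM) / (G * mass_msun * M_SUN)


# A cloud with a 4 km/s linewidth, 50 pc radius, and a million solar
# masses gives alpha close to 1, i.e. roughly self-gravitating.
alpha = virial_parameter(4.0, 50.0, 1.0e6)
```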
Physics
The physics of molecular clouds is poorly understood and much debated. Their internal motions are governed by turbulence in a cold, magnetized gas, for which the turbulent motions are highly supersonic but comparable to the speeds of magnetic disturbances. This state is thought to lose energy rapidly, requiring either an overall collapse or a steady reinjection of energy. At the same time, the clouds are known to be disrupted by some process—most likely the effects of massive stars—before a significant fraction of their mass has become stars.
Molecular clouds, and especially GMCs, are often the home of astronomical masers.
See also
References
External links
Nebulae
Cosmic dust
Concepts in astronomy |
19897 | https://en.wikipedia.org/wiki/Minoru%20Yamasaki | Minoru Yamasaki | Minoru Yamasaki was an American architect, best known for designing the original World Trade Center in New York City and several other large-scale projects. Yamasaki was one of the most prominent architects of the 20th century. He and fellow architect Edward Durell Stone are generally considered to be the two master practitioners of "New Formalism".
During his three-decade career, he or his firm designed over 250 buildings. His firm, Yamasaki & Associates, closed on December 31, 2009.
Early life and education
Yamasaki was born in Seattle, Washington, the son of John Tsunejiro Yamasaki and Hana Yamasaki, issei Japanese immigrants. The family later moved to Auburn, Washington, and he graduated from Garfield Senior High School in Seattle. He enrolled in the University of Washington program in architecture in 1929, and graduated with a Bachelor of Architecture (B.Arch.) in 1934. During his college years, he was strongly encouraged by faculty member Lionel Pries. He earned money to pay for his tuition by working at an Alaskan salmon cannery, working five summers and earning $50 a month, plus 25 cents an hour in overtime pay.
In part to escape anti-Japanese prejudice, he moved to Manhattan in 1934, with $40 and no job prospects. He wrapped dishes for an importing company until he found work as a draftsman and engineer. He enrolled at New York University for a master's degree in architecture and got a job with the architecture firm Shreve, Lamb & Harmon, designers of the Empire State Building. The firm helped Yamasaki avoid internment as a Japanese-American during World War II, and he himself sheltered his parents in New York City. After leaving Shreve, Lamb & Harmon, Yamasaki worked briefly for Harrison & Abramovitz and Raymond Loewy.
In 1945, Yamasaki moved to Detroit, where he secured a position with Smith, Hinchman & Grylls. Yamasaki left the firm in 1949, and started his own partnership. He worked from Birmingham and Troy, Michigan. One of the first projects he designed at his own firm was Ruhl's Bakery at 7 Mile Road and Monica Street in Detroit.
Career
Yamasaki's first major project was the Pruitt–Igoe public housing project in St. Louis in 1955. Despite his love of traditional Japanese design and ornamentation, this was a stark, modernist concrete structure, severely constricted by a tight budget. The housing project soon experienced so many problems that it was demolished starting in 1972, less than twenty years after its completion. Its destruction would be considered by architectural historian Charles Jencks to be the symbolic end of modernist architecture.
In 1955, he also designed the "sleek" terminal at Lambert–St. Louis International Airport which led to his 1959 commission to design the Dhahran International Airport in Saudi Arabia. In the 1950s, Yamasaki was commissioned by the Reynolds Company to design an aluminum-wrapped building in Southfield, Michigan, which would "symbolize the auto industry's past and future progress with aluminum." The three-story glass building wrapped in aluminum, known as the Reynolds Metals Company's Great Lakes Sales Headquarters Building, was also supposed to reinforce the company's main product and showcase its admirable characteristics of strength and beauty.
Yamasaki's first widely-acclaimed design was the Pacific Science Center, with its iconic lacy and airy decorative arches. It was constructed by the City of Seattle for the 1962 Seattle World's Fair. The building raised his public profile so much that he was featured on the cover of Time magazine.
In the post-war period, he created a number of office buildings which led to his innovative design of the towers of the World Trade Center in 1964, which began construction March 21, 1966. The first of the towers was finished in 1970. Many of his buildings feature superficial details inspired by the pointed arches of Gothic architecture, and make use of extremely narrow vertical windows. This narrow-windowed style arose from his own personal fear of heights.
One particular challenge of the World Trade Center's design related to the efficacy of the elevator system, which was unique in the world when it first opened for service. Yamasaki employed the fastest elevators at the time, running at per minute. Instead of placing a traditional large cluster of full-height elevator shafts in the core of each tower, Yamasaki created the Twin Towers' "Skylobby" system. The Skylobby design created three separate, connected elevator systems which would serve different zones of the building, depending on which floor was chosen, saving approximately 70% of the space which would have been required for traditional shafts. The space saved was then used for additional office space. Internally, each office floor was a vast open space unimpeded by support columns, ready to be subdivided as the tenants might choose.
In 1978, Yamasaki designed the Federal Reserve Bank tower in Richmond, Virginia. The work was designed with an external appearance similar to that of the World Trade Center complex, with its narrow fenestration, and now stands at .
Yamasaki was a member of the Pennsylvania Avenue Commission, created in 1961 to restore the grand avenue in Washington, DC, but he resigned after disagreements and disillusionment with the design by committee approach.
After partnering with Emery Roth and Sons on the design of the World Trade Center, Yamasaki continued the collaboration on other projects, including new buildings at Bolling Air Force Base in Washington, DC.
The campus for the University of Regina was designed in tandem with Yamasaki's plan for Wascana Centre, a park built around Wascana Lake in Regina, Saskatchewan. The original campus design was approved in 1962. Yamasaki was awarded contracts to design the first three buildings: the Classroom Building, the Laboratory Building, and the Dr. John Archer Library, which were built between 1963 and 1967.
Yamasaki designed two notable synagogues, North Shore Congregation Israel in Glencoe, Illinois (1964), and Temple Beth El, in Bloomfield Hills, Michigan (1973). He designed a number of buildings on the campus of Carleton College in Northfield, Minnesota between 1958 and 1968.
After criticism of his dramatically cantilevered Rainier Tower (1977) in Seattle, Yamasaki became less adventurous in his designs during the last decade of his career.
Legacy
Despite the many buildings he completed, Yamasaki's reputation faded along with the overall decline of modernism towards the end of the 20th century. Two of his major projects, the Pruitt-Igoe public housing complex, and the original World Trade Center, shared the dubious symbolic distinction of being destroyed while recorded by live TV broadcasts. In many ways, these best-known works ran counter to Yamasaki's own design principles, and he later regretted his reluctant acceptance of architectural compromises dictated by the clients of these projects. Several others of his buildings have also been demolished.
Yamasaki collaborated closely with structural engineers, including John Skilling, Leslie E. Robertson, and Jack Christiansen, to produce some of his innovative architectural designs. He strove to achieve "serenity, surprise, and delight" in his humanistic modernist buildings and their surroundings.
Decades after his death, Yamasaki's buildings and legacy would be re-assessed more sympathetically by some architectural critics. Several of his buildings have now been restored in accordance with his original designs, and his McGregor Memorial Conference Center was awarded National Historic Landmark status in 2015.
Personal life
Yamasaki was first married in 1941 to Teruko "Teri" Hirashiki. They had three children together: Carol, Taro, and Kim. They divorced in 1961 and Yamasaki married Peggy Watty. He and Watty divorced two years later, and Yamasaki married a third time briefly before remarrying Teruko in 1969. In a 1969 Detroit News article about the remarriage, Yamasaki said "I'm just going to be nicer to her".
Yamasaki suffered from health problems for at least three decades, and ulcers caused surgical removal of much of his stomach in 1953. Over time, he endured several more operations on his stomach. His health was not improved by increasingly heavy drinking towards the end of his life. Yamasaki died of stomach cancer on February 6, 1986, at the age of 73.
Gallery
Honors
Fellow of the American Institute of Architects, 1960
DFA from Bates College, 1964
American Institute of Architects' First Honor Award, three times
Cover story of TIME on 18 January 1963
See also
Construction of the World Trade Center
List of works by Minoru Yamasaki
References
Further reading
External links
GreatBuildings.com listing
The Wayne State University Yamasaki Legacy
Minoru Yamasaki interview, [ca. 1959 Aug.] - Archives of American Art
Images from the Minoru Yamasaki Collection Walter P. Reuther Library
Researchers can access archival evidence of Yamasaki's work in The papers of Minoru Yamasaki at the Walter P. Reuther Library. Available materials include correspondence on projects, travel, communications with associates, speaking invitations, and involvement in professional organizations. Early architectural drawings, speeches and writings, photographs, awards and doctoral degrees, scrapbooks detailing the progress of his career, and various publications are also included.
Modernist architects from the United States
1912 births
1986 deaths
Fellows of the American Institute of Architects
Architects from Detroit
Architects from Seattle
American people of Japanese descent
Bates College alumni
Polytechnic Institute of New York University alumni
University of Washington College of Built Environments alumni
Garfield High School (Seattle) alumni
Deaths from cancer in Michigan
Deaths from stomach cancer
People from Auburn, Washington
20th-century American architects |
19898 | https://en.wikipedia.org/wiki/Madeira | Madeira | Madeira, officially the Autonomous Region of Madeira, is one of the two autonomous regions of Portugal, the other being the Azores. It is an archipelago situated in the North Atlantic Ocean, in a region known as Macaronesia, just under to the north of the Canary Islands and west of Morocco. Madeira is geologically located on the African Tectonic Plate, though the archipelago is culturally, economically and politically European. Its total population was estimated in 2021 at 251,060. The capital of Madeira is Funchal, which is located on the main island's south coast.
The archipelago includes the islands of Madeira, Porto Santo, and the Desertas, administered together with the separate archipelago of the Savage Islands. The region has political and administrative autonomy through the Administrative Political Statute of the Autonomous Region of Madeira provided for in the Portuguese Constitution. The autonomous region is an integral part of the European Union as an outermost region. Madeira generally has a very mild and moderate subtropical climate with mediterranean summer droughts and winter rain. Many microclimates are found at different elevations.
Madeira, originally uninhabited, was claimed by Portuguese sailors in the service of Prince Henry the Navigator in 1419 and settled after 1420. The archipelago is considered to be the first territorial discovery of the exploratory period of the Age of Discovery.
As of 2017, it was a popular year-round resort, being visited every year by about 1.4 million tourists, almost six times its population. The region is noted for its Madeira wine, gastronomy, historical and cultural value, flora and fauna, landscapes (laurel forest) that are classified as a UNESCO World Heritage Site, and embroidery artisans. The main harbour in Funchal has long been the leading Portuguese port in cruise liner dockings, receiving more than half a million tourists through its main port in 2017, being an important stopover for commercial and trans-Atlantic passenger cruises between Europe, the Caribbean and North Africa. In addition, the International Business Centre of Madeira, also known as the Madeira Free Trade Zone, was created formally in the 1980s as a tool of regional economic policy. It consists of a set of incentives, mainly tax-related, granted with the objective of attracting foreign direct investment based on international services into Madeira.
History
Exploration
Plutarch, in his Parallel Lives (Sertorius, 75 AD), referring to the military commander Quintus Sertorius (d. 72 BC), relates that after his return to Cádiz he met sailors who spoke of idyllic Atlantic islands: "The islands are said to be two in number separated by a very narrow strait and lie at a distance from Africa. They are called the Isles of the Blessed."
The historian Diodorus Siculus relates that the Tyrrhenians of Sardinia, inhabitants of the Nuragic villages, organized an expedition to conquer an Atlantic island, Madeira, in 650 BC. The project failed owing to the intervention of the Carthaginians, who sought to hinder the expansionist aims of the Sardinians.
Archaeological evidence suggests that the islands may have been visited by the Vikings sometime between 900 and 1030.
Accounts by Muhammad al-Idrisi state that the Mugharrarin came across an island where they found "a huge quantity of sheep, the meat of which was bitter and inedible" before going to the more incontrovertibly inhabited Canary Islands. This island, possibly Madeira or Hierro, must have been inhabited or previously visited by people for livestock to be present.
Legend
During the reign of King Edward III of England, lovers Robert Machim and Anna d'Arfet were said to have fled from England to France in 1346. Driven off course by a violent storm, their ship ran aground along the coast of an island that may have been Madeira. Later this legend was the basis of the naming of the city of Machico on the island, in memory of the young lovers.
European discovery
Knowledge of some Atlantic islands, such as Madeira, existed before their formal discovery and settlement, as the islands were shown on maps as early as 1339.
In 1418, two captains under service to Prince Henry the Navigator, João Gonçalves Zarco and Tristão Vaz Teixeira, were driven off course by a storm to an island they named Porto Santo (English: holy harbour) in gratitude for divine deliverance from a shipwreck. The following year, an organised expedition under the captaincy of Zarco, Vaz Teixeira and Bartolomeu Perestrello travelled to the island to claim it on behalf of the Portuguese Crown. Subsequently, the new settlers observed "a heavy black cloud suspended to the southwest"; their investigation revealed it to be the larger island they called Madeira.
Settlement
The first Portuguese settlers began colonizing the islands around 1420 or 1425.
The first settlers were the three donatary captains and their respective families, a small group of the gentry, people of modest means and some former convicts of the kingdom.
The settlement involved people from all over the kingdom. Some of the early settlers set out from the Algarve, many charged with the important task of putting the landholding system into operation. Servants, squires, knights and noblemen are identified as the ones who secured the beginning of the settlement. Later, settlers came from the north of Portugal, namely from the region of Entre Douro e Minho, who intervened specifically in the organization of the agricultural area.
The majority of settlers were fishermen and peasant farmers who willingly left Portugal for a new life on the islands, a better one, they hoped, than was possible in a Portugal ravaged by the Black Death, where the best farmlands were strictly controlled by the nobility.
To create the minimum conditions for the development of agriculture on the island, the settlers had to chop down part of the dense forest and build a large number of water channels, called "levadas", to carry the abundant water of the north coast to the south coast of the island.
Initially, the settlers produced wheat for their own sustenance, but later began to export wheat to mainland Portugal.
In earlier times, fish and vegetables were the settlers' main means of subsistence.
Grain production began to fall and the ensuing crisis forced Henry the Navigator to order other commercial crops to be planted so that the islands could be profitable. These specialised plants, and their associated industrial technology, created one of the major revolutions on the islands and fuelled Portuguese industry. Following the introduction of the first water-driven sugar mill on Madeira, sugar production increased to over 6,000 arrobas (an arroba was equal to 11 to 12 kilograms) by 1455, using advisers from Sicily and financed by Genoese capital. (Genoa acted as an integral part of the island economy until the 17th century.) The accessibility of Madeira attracted Genoese and Flemish traders, who were keen to bypass Venetian monopolies.
Sugarcane production was the primary engine of the island's economy, quickly bringing the Funchal metropolis marked prosperity. The production of sugar cane attracted adventurers and merchants from all parts of Europe, especially Italians, Basques, Catalans and Flemings. As a result, in the second half of the fifteenth century, the city of Funchal became a mandatory port of call for European trade routes.
Slaves were used during the island's period of sugar trade to cultivate sugar cane alongside paid workers, though slave owners were only a small minority of the Madeiran population, and those who did own slaves owned only a few. Slaves consisted of Guanches from the nearby Canary Islands, Berbers captured in the conquest of Ceuta, and West Africans taken after further exploration of the African coast.
Until the first half of the sixteenth century, Madeira was one of the major sugar markets of the Atlantic. Madeira appears to be the first place where slave labour was applied to sugar production. The colonial system of sugar production was put into practice on the island of Madeira on a much smaller scale, and was later transferred, on a large scale, to other overseas production areas.
Later, this small-scale production was completely outmatched by Brazilian and São Toméan plantations. Madeiran sugar production declined to the point that it no longer met domestic needs, and sugar was imported to the island from other Portuguese colonies. The sugar mills were gradually abandoned, with few remaining, giving way to other markets in Madeira.
In the 17th century, as Portuguese sugar production shifted to Brazil, São Tomé and Príncipe and elsewhere, Madeira's most important commodity became its wine. Sugar plantations were replaced by vineyards, giving rise to the so-called 'Wine Culture', which acquired international fame and enabled the rise of a new social class, the bourgeoisie.
With the increase of commercial treaties with England, important English merchants settled on the island and ultimately controlled the increasingly important island wine trade. English traders settled in Funchal from the seventeenth century, consolidating the markets of North America, the West Indies and England itself. Madeira wine became very popular in these markets, and is said to have been used by the Founding Fathers to toast the Declaration of Independence. In the eighteenth and nineteenth centuries, Madeira stood out for its climate and its therapeutic effects. In the nineteenth century, visitors to the island fell into four major groups: patients, travellers, tourists and scientists. Most visitors belonged to the moneyed aristocracy.
The high seasonal demand created a need for visitor guides. The first tourist guide to Madeira appeared in 1850 and covered the island's history, geology, flora, fauna and customs.
Regarding hotel infrastructure, the British and the Germans were the first to develop hotels on Madeira. The historic Belmond Reid's Palace, opened in 1891, remains in operation to this day.
Barbary corsairs from North Africa, who enslaved Europeans from ships and coastal communities throughout the Mediterranean region, captured 1,200 people in Porto Santo in 1617.
The British first occupied the island amicably in 1801, after which Colonel William Henry Clinton became governor. A detachment of the 85th Regiment of Foot under Lieutenant-Colonel James Willoughby Gordon garrisoned the island.
After the Peace of Amiens, British troops withdrew in 1802, only to reoccupy Madeira in 1807 until the end of the Peninsular War in 1814. In 1846 James Julius Wood wrote a series of seven sketches of the island. In 1856, British troops recovering from cholera, and widows and orphans of soldiers fallen in the Crimean War, were stationed in Funchal, Madeira.
World War I
On 3 December 1916, during the Great War, a German U-boat, SM U-38, captained by Max Valentiner, entered Funchal harbour on Madeira. U-38 torpedoed and sank three ships, bringing the war to Portuguese territory. The ships sunk were:
CS Dacia (1,856 tons), a British cable-laying vessel. Dacia had previously undertaken war work off the coast of Casablanca and Dakar. It was in the process of diverting the German South American cable into Brest, France.
SS Kanguroo (2,493 tons), a French specialized "heavy-lift" transport.
Surprise (680 tons), a French gunboat. Her commander and 34 crewmen (including 7 Portuguese) were killed.
After attacking the ships, U-38 bombarded Funchal for two hours. Batteries on Madeira returned fire and eventually forced U-38 to withdraw.
On 12 December 1917, two German U-boats, SM U-156 and SM U-157 (captained by Max Valentiner), again bombarded Funchal. This time the attack lasted around 30 minutes. The U-boats fired 40 shells. There were three fatalities and 17 wounded; a number of houses and Santa Clara church were hit.
Charles I (Karl I), the last Emperor of the Austro-Hungarian Empire, was exiled to Madeira after the war. Determined to prevent an attempt to restore Charles to the throne, the Council of Allied Powers agreed he could go into exile on Madeira because it was isolated in the Atlantic and easily guarded. He died there on 1 April 1922 and his coffin lies in a chapel of the Church of Our Lady of Monte.
Geography
The archipelago of Madeira lies well off the African coast and farther still from the European continent (approximately a one-and-a-half-hour flight from the Portuguese capital, Lisbon). Madeira is on the same parallel as Bermuda, a few time zones further west in the Atlantic; the two archipelagos are the only land in the Atlantic on the 32nd parallel north. Madeira is found at the extreme south of the Tore-Madeira Ridge, a bathymetric structure of great dimensions oriented along a north-northeast to south-southwest axis. This submarine structure consists of a long geomorphological relief rising some 3,500 metres from the abyssal plain; its highest submersed point lies at a depth of about 150 metres (around latitude 36°N). The origins of the Tore-Madeira Ridge are not clearly established, but it may have resulted from a morphological buckling of the lithosphere.
Islands and islets
Madeira (740.7 km2), including Ilhéu de Agostinho, Ilhéu de São Lourenço, Ilhéu Mole (northwest); Total population: 262,456 (2011 Census).
Porto Santo (42.5 km2), including Ilhéu de Baixo ou da Cal, Ilhéu de Ferro, Ilhéu das Cenouras, Ilhéu de Fora, Ilhéu de Cima; Total population: 5,483 (2011 Census).
Desertas Islands (14.2 km2), including the three uninhabited islands: Deserta Grande Island, Bugio Island and Ilhéu de Chão.
Savage Islands (3.6 km2), archipelago 280 km south-southeast of Madeira Island including three main islands and 16 uninhabited islets in two groups: the Northwest Group (Selvagem Grande Island, Ilhéu de Palheiro da Terra, Ilhéu de Palheiro do Mar) and the Southeast Group (Selvagem Pequena Island, Ilhéu Grande, Ilhéu Sul, Ilhéu Pequeno, Ilhéu Fora, Ilhéu Alto, Ilhéu Comprido, Ilhéu Redondo, Ilhéu Norte).
Madeira Island
The island of Madeira is the top of a massive shield volcano that rises from the floor of the Atlantic Ocean, on the Tore underwater mountain range. The volcano formed atop an east–west rift in the oceanic crust along the African Plate, beginning during the Miocene epoch over 5 million years ago and continuing into the Pleistocene until about 700,000 years ago. This was followed by extensive erosion, producing two large amphitheatres open to the south in the central part of the island. Volcanic activity later resumed, producing scoria cones and lava flows atop the older eroded shield. The most recent volcanic eruptions occurred on the west-central part of the island only 6,500 years ago, creating more cinder cones and lava flows.
It is the largest island of the group, with an area of 740.7 km2, stretching at its longest from Ponta de São Lourenço to Ponta do Pargo and at its widest from Ponta da Cruz to Ponta de São Jorge. A mountain ridge extends along the centre of the island, reaching 1,862 m at its highest point (Pico Ruivo), while much lower (below 200 metres) along its eastern extent. The primitive volcanic foci responsible for the central mountainous area consisted of the peaks: Ruivo (1,862 m), Torres (1,851 m), Arieiro (1,818 m), Cidrão (1,802 m), Cedro (1,759 m), Casado (1,725 m), Grande (1,657 m) and Ferreiro (1,582 m). At the end of this eruptive phase, an island circled by reefs was formed; its marine vestiges are evident in a calcareous layer in the area of Lameiros, in São Vicente (which was later exploited for calcium oxide production). Sea cliffs, such as Cabo Girão, valleys and ravines extend from this central spine, making the interior generally inaccessible. Daily life is concentrated in the many villages at the mouths of the ravines, through which the heavy rains of autumn and winter usually travel to the sea.
Climate
Madeira has many different bioclimates.
Based on differences in sun exposure, humidity, and annual mean temperature, there are clear variations between north- and south-facing regions, as well as between some islands. The islands are strongly influenced by the Gulf Stream and the Canary Current, giving them mild to warm year-round temperatures; according to the Instituto Português do Mar e da Atmosfera (IPMA), annual mean temperatures at the Funchal weather station were mild over the 1981–2010 period. Relief is a determining factor in precipitation levels: areas such as the Madeira Natural Park receive abundant precipitation each year and host lush green laurel forests, while Porto Santo, being a much flatter island, has a semiarid climate (BSh). In most winters snowfall occurs in the mountains of Madeira. The main island has coastal areas with high annual average temperatures (according to the Portuguese Meteorological Institute).
Flora and fauna
Madeira island is home to several endemic plant and animal species.
In the south, very little is left of the indigenous subtropical rainforest that once covered the whole island (the original settlers set fire to the island to clear the land for farming) and gave it the name it now bears (Madeira means "wood" in Portuguese). However, in the north, the valleys contain native trees of fine growth. These "laurisilva" forests, called lauraceas madeirense, notably the forests on the northern slopes of Madeira Island, are designated as a World Heritage Site by UNESCO. The paleobotanical record of Madeira reveals that laurisilva forest has existed on this island for at least 1.8 million years. Critically endangered species such as the vine Jasminum azoricum and the rowan Sorbus maderensis are endemic to Madeira. The Madeiran large white butterfly was an endemic subspecies of the large white that inhabited the laurisilva forests, but it has not been seen since 1977 and may now be extinct.
Madeiran wall lizard
The Madeiran wall lizard (Teira dugesii) is a species of lizard in the family Lacertidae. The species is endemic to the island, where it is very common and is the only small lizard, ranging from the sea coast to high altitudes. It is usually found in rocky places or among scrub, and may climb into trees. It is also found in gardens and on the walls of buildings. It feeds on small invertebrates such as ants and also eats some vegetable matter. The tail is easily shed and the stump regenerates slowly. The colouring is variable and tends to match the colour of the animal's surroundings, being some shade of brown or grey, occasionally with a greenish tinge. Most animals are finely flecked with darker markings. The underparts are white or cream, sometimes with dark spots; some males have orange or red underparts and blue throats, but these bright colours may fade if the animal is disturbed. The Madeiran wall lizard's tail is about 1.7 times the length of its body. Females lay two to three clutches of eggs in a year.
Endemic birds
Two species of birds are endemic to Madeira, the Trocaz pigeon and the Madeira firecrest. In addition, there are several extinct species which may have died out soon after the islands were settled: the Madeiran scops owl; two rail species, Rallus adolfocaesaris and R. lowei; two quail species, Coturnix lignorum and C. alabrevis; and the Madeiran wood pigeon, a subspecies of the widespread common wood pigeon, which was last seen in the early 20th century.
Levadas
The island of Madeira is wet in the northwest but dry in the southeast. In the 16th century the Portuguese started building levadas, or aqueducts, to carry water to the agricultural regions in the south. Madeira is very mountainous, and building the levadas was difficult; convicts or slaves were often used. Many are cut into the sides of mountains, and it was also necessary to dig tunnels, some of which are still accessible.
Today the levadas not only supply water to the southern parts of the island but also provide hydroelectric power. They form an extensive network of walking paths. Some provide easy and relaxing walks through the countryside, but others are narrow, crumbling ledges where a slip could result in serious injury or death. Since 2011, following the 2010 Madeira floods and mudslides, improvements have been made to these pathways to clean and reconstruct critical parts of the island, including the levadas. These improvements involved continuous maintenance of the water streams, cementing the trails, and installing safety fences on dangerous paths.
Two of the most popular levadas to hike are the Levada do Caldeirão Verde and the Levada do Caldeirão do Inferno, which should not be attempted by hikers prone to vertigo or without torches and helmets. The Levada do Caniçal is a much easier walk, running from Maroços to the Caniçal Tunnel. It is known as the mimosa levada because "mimosa" trees (the colloquial name for invasive acacias) are found all along the route.
Politics
Political autonomy
Due to its distinct geographical, economic, social and cultural situation, as well as the historical autonomist aspirations of the Madeiran island population, the Autonomous Region of Madeira was established in 1976. Although it is an autonomous politico-administrative region, the Portuguese constitution specifies both a regional and a national connection, obliging its administration to maintain democratic principles and promote regional interests, while still reinforcing national unity.
As defined by the Portuguese constitution and other laws, Madeira possesses its own political and administrative statute and has its own government. The branches of government are the Regional Government and the Legislative Assembly, the latter being elected by universal suffrage using the D'Hondt method of proportional representation.
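The D'Hondt method mentioned above is a highest-averages rule: each party's vote total is repeatedly divided by one more than the number of seats it has already won, and each successive seat goes to the party with the largest current quotient. A minimal Python sketch (the party names and vote totals are invented purely for illustration, not actual Madeiran election results):

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties using the D'Hondt highest-averages method.

    votes: dict mapping party name -> vote total
    Returns a dict mapping party name -> seats won.
    """
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest quotient v / (s + 1),
        # where s is the number of seats it has won so far.
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical 7-seat example: larger parties are favoured slightly,
# a known property of the D'Hondt rule.
print(dhondt({"A": 100000, "B": 80000, "C": 30000}, 7))  # {'A': 3, 'B': 3, 'C': 1}
```

Note that ties are broken here by dictionary order; real electoral law specifies its own tie-breaking rule.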
The president of the Regional Government is appointed by the Representative of the Republic according to the results of the election to the legislative assemblies.
The sovereignty of the Portuguese Republic was represented in Madeira by the Minister of the Republic, proposed by the Government of the Republic and appointed by the President of the Republic. After the sixth amendment to the Portuguese Constitution was passed in 2006, however, the Minister of the Republic was replaced by a less powerful Representative of the Republic, appointed by the President after consulting the Government, though the choice remains a presidential prerogative. The other tasks of the Representative of the Republic are to sign and order the publication of regional legislative decrees and regional regulatory decrees, or to exercise the right of veto over regional laws should they be unconstitutional.
Status within the European Union
Madeira is also an Outermost Region (OMR) of the European Union, meaning that due to its geographical situation, it is entitled to derogation from some EU policies despite being part of the European Union.
According to the Treaty on the Functioning of the European Union, both primary and secondary European Union law applies automatically to Madeira, with possible derogations to take account of its "structural social and economic situation (...) which is compounded by their remoteness, insularity, small size, difficult topography and climate, economic dependence on a few products, the permanence and combination of which severely restrain their development". An example of such derogation is seen in the approval of the International Business Centre of Madeira and other state aid policies to help the rum industry.
It forms part of the European Union customs area, the Schengen Area and the European Union Value Added Tax Area.
Administrative divisions
Administratively, Madeira (with a population of 251,060 inhabitants in 2021) is organised into eleven municipalities:
Funchal
Funchal is the capital and principal city of the Autonomous Region of Madeira, located along the southern coast of the island of Madeira. It is a modern city, located within a natural geological "amphitheatre" composed of vulcanological structures and fluvial hydrological forces. Beginning at the harbour (Porto de Funchal), the neighbourhoods and streets rise along gentle slopes that helped to provide a natural shelter to the early settlers.
Population
Demographics
The island was settled by Portuguese people, especially farmers from the Minho region,<ref>{{cite web |url=http://www.ceha-madeira.net/livros/infante.html|title=Alberto Vieira, O Infante e a Madeira: dúvidas e certezas, Centro Estudos História Atlântico |publisher=Ceha-madeira.net |access-date=30 July 2010 |url-status=dead |archive-url=https://web.archive.org/web/20100531222502/http://www.ceha-madeira.net/livros/infante.html |archive-date=31 May 2010}}</ref> meaning that Madeirans (), as they are called, are ethnically Portuguese, though they have developed their own distinct regional identity and cultural traits.
The region of Madeira and Porto Santo has a total population of around 256,000, the majority of whom (251,060) live on the main island of Madeira; only around 5,000 live on the island of Porto Santo, which is far less densely populated.
About 247,000 (96%) of the population are Catholic and Funchal is the location of the Catholic cathedral.
Diaspora
Madeirans migrated to the United States, Venezuela, Brazil, Guyana, Saint Vincent and the Grenadines, and Trinidad and Tobago. Madeiran immigrants in North America mostly clustered in the New England and mid-Atlantic states, Toronto, Northern California, and Hawaii. The city of New Bedford is especially rich in Madeirans, hosting the Museum of Madeira Heritage as well as the annual Madeiran and Luso-American celebration, the Feast of the Blessed Sacrament, the world's largest celebration of Madeiran heritage, which regularly draws crowds of tens of thousands to the city's Madeira Field.
In 1846, when a famine struck Madeira, over 6,000 of the inhabitants migrated to British Guiana; by 1891 they numbered 4.3% of the population. In 1902 there were 5,000 Portuguese people, mostly Madeirans, in Honolulu, Hawaii; by 1910 this had grown to 21,000.
1849 saw an emigration of Protestant religious exiles from Madeira to the United States, by way of Trinidad and other locations in the West Indies. Most of them settled in Illinois with the financial and physical aid of the American Protestant Society, headquartered in New York City. In the late 1830s the Reverend Robert Reid Kalley, from Scotland, a Presbyterian minister as well as a physician, made a stop at Funchal, Madeira, on his way to a mission in China, so that his wife could recover from an illness. The Rev. Kalley and his wife stayed on Madeira, where he began preaching the Protestant gospel and converting islanders from Catholicism. Eventually, the Rev. Kalley was arrested for his religious conversion activities and imprisoned. Another missionary from Scotland, William Hepburn Hewitson, took on Protestant ministerial activities in Madeira. By 1846, about 1,000 Protestant Madeirans, who were discriminated against and subjected to mob violence because of their religious conversions, chose to emigrate to Trinidad and other locations in the West Indies in answer to a call for sugar plantation workers. The exiles did not fare well in the West Indies: the tropical climate was unfamiliar and they found themselves in serious economic difficulties. By 1848, the American Protestant Society had raised money and sent the Rev. Manuel J. Gonsalves, a Baptist minister and a naturalized U.S. citizen from Madeira, to work with the Rev. Arsénio da Silva, who had emigrated with the exiles from Madeira, to arrange to resettle those who wanted to come to the United States. The Rev. da Silva died in early 1849. Later in 1849, the Rev. Gonsalves was charged with escorting the exiles from Trinidad to be settled in Sangamon and Morgan counties in Illinois, on land purchased with funds raised by the American Protestant Society. Accounts state that anywhere from 700 to 1,000 exiles came to the United States at this time.
There are several large Madeiran communities around the world, such as in the UK, including Jersey, where the Portuguese British community, mostly made up of Madeirans, celebrates Madeira Day.
Immigration
Madeira is part of the Schengen Area.
The Venezuelan (14.4%), British (14.2%), Brazilian (12.1%) and German (7%) nationalities constituted the largest foreign communities residing in Madeira in 2017. The Venezuelan community dramatically increased in number (38%) in 2017 due to migration fuelled by the socioeconomic crisis in Venezuela. In terms of geographical distribution, the foreign population mainly concentrates in Funchal (59.2% of the total of the region), followed by Santa Cruz (13.8%), Calheta (7.3%) and Porto Santo (4%). The foreign population with resident status in the Autonomous Region of Madeira totalled 6,720 (up by 10% from 2016), distributed between residence permits (6,692) and long-stay visas (28).
Economy
The Gross domestic product (GDP) of the region was 4.9 billion euros in 2018, accounting for 2.4% of Portugal's economic output. GDP per capita adjusted for purchasing power was 22,500 euros or 75% of the EU27 average in the same year. The GDP per employee was 71% of the EU average.
Madeira International Business Center
The setting up of a free-trade zone, also known as the Madeira International Business Centre (MIBC), has led to the installation, under more favourable conditions, of infrastructure, production shops and essential services for small and medium-sized industrial enterprises. The International Business Centre of Madeira presently comprises three sectors of investment: the Industrial Free Trade Zone, the International Shipping Register (MAR) and the International Services. Madeira's tax regime has been approved by the European Commission as legal State aid, and its deadline has recently been extended by the Commission until the end of 2027. The Centre was created formally in the 1980s as a tool of regional economic policy: a set of incentives, mainly of a tax nature, granted with the objective of attracting inward investment into Madeira, recognized as the most efficient mechanism to modernize, diversify and internationalize the regional economy. The decision to create it was the result of a thorough process of analysis and study: other small island economies, with similar geographical and economic constraints, had successfully implemented projects to attract foreign direct investment based on international services activities, becoming examples of successful economic policies.
Since the beginning, favourable operational and fiscal conditions have been offered in the context of a preferential tax regime, fully recognized and approved by the European Commission in the framework of State aid for regional purposes and under the terms for the outermost regions set in the Treaties, namely Article 299 of the Treaty establishing the European Community. The IBC of Madeira has therefore been fully integrated into the Portuguese and EU legal systems and, as a consequence, is regulated and supervised by the competent Portuguese and EU authorities in a transparent and stable business environment, marking a clear difference from so-called "tax havens" and "offshore jurisdictions" since its inception. In 2015, the European Commission authorized the new State aid regime for companies incorporated between 2015 and 2020 and the extension of the tax reductions until the end of 2027. The present tax regime is outlined in Article 36-A of the Portuguese Tax Incentives Statute.
Available data demonstrate the contribution this development programme has brought to the local economy over its 20 years of existence: an impact on the local labour market, through the creation of qualified jobs for the young population and for Madeiran professionals who have returned to Madeira thanks to the opportunities created; an increase in productivity due to the transfer of know-how and the implementation of new business practices and technologies; an indirect influence on other sectors of activity (business tourism benefits from the visits of investors, their clients and suppliers, while sectors such as real estate and telecommunications benefit from the growth of their client base); and an impact on direct sources of revenue: the companies attracted by the IBC of Madeira represent over 40% of the Government of Madeira's corporate income tax revenue and nearly 3,000 jobs, most of them qualified. Salaries paid by companies in the IBC of Madeira are also above the average of the other sectors of activity in Madeira.
Regional government
Madeira has been a significant recipient of European Union funding, totalling up to €2 billion. In 2012, it was reported that, despite a population of just 250,000, the local administration owed some €6 billion. Furthermore, the Portuguese treasury (IGCP) assumed Madeira's debt management between 2012 and 2015. The region continues to work with the central government on a long-term plan to reduce its debt levels and commercial debt stock. Moody's notes that the region has made significant fiscal consolidation efforts and that its tax revenue collection has increased significantly in recent years due to tax rate hikes. Madeira's tax revenues increased by 41% between 2012 and 2016, helping the region to reduce its deficit-to-operating-revenue ratio to 10% in 2016, from 77% in 2013.
Tourism
Tourism is an important sector in the region's economy, contributing 20% to the region's GDP, providing support throughout the year for commercial, transport and other activities and constituting a significant market for local products. The share in Gross Value Added of hotels and restaurants (9%) also highlights this phenomenon. The island of Porto Santo, with its beach and its climate, is entirely devoted to tourism.
Visitors are mainly from the European Union, with German, British, Scandinavian and Portuguese tourists providing the main contingents. The average annual occupancy rate was 60.3% in 2008, reaching its maximum in March and April, when it exceeded 70%.
Whale watching
Whale watching has become very popular in recent years. Many species of dolphins, such as the common dolphin, spotted dolphin, striped dolphin, bottlenose dolphin and short-finned pilot whale, and whales such as Bryde's whale, the sei whale, fin whale, sperm whale and beaked whales, can be spotted near the coast or offshore.
Energy
Electricity on Madeira is provided solely by EEM (Empresa de Electricidade da Madeira, SA, which holds a monopoly on the provision of electrical supply in the autonomous region) and is generated largely from fossil fuels, supplemented by a significant amount of seasonal hydroelectricity from the levada system, wind power and a small amount of solar. The Ribeira dos Socorridos hydropower plant, rated at 15 MW, utilises a pumped-storage reservoir to recycle mountain water during the dry summer.
In 2011, renewable energy formed 26.5% of the electricity used in Madeira. By 2020, half of Madeira's energy was expected to come from renewable sources, due to the planned completion of the Pico da Urze / Calheta pumped-storage hydropower plant, rated at 30 MW.
Battery technologies are being tested to minimise Madeira's reliance on fossil fuel imports. Renault SA and EEM piloted the Sustainable Porto Santo—Smart Fossil Free Island project on Porto Santo to demonstrate how fossil fuels can be entirely replaced with renewable energy.
Transport
The Islands have two airports, Cristiano Ronaldo International Airport and Porto Santo Airport, on the islands of Madeira and Porto Santo respectively. From Cristiano Ronaldo International Airport the most frequent flights are to Lisbon. There are also direct flights to over 30 other airports in Europe and nearby islands.
Transport between the two main islands is by plane, or ferries from the Porto Santo Line, the latter also carrying vehicles. Visiting the interior of the islands is now easy thanks to construction of the Vias Rápidas, major roads that cross the island. Modern roads reach all points of interest on the islands.
Funchal has an extensive public transportation system. Bus companies, including Horários do Funchal, which has been operating for over a hundred years, have regularly scheduled routes to all points of interest on the island.
Culture
Music
Folklore music in Madeira is widespread and mainly uses local musical instruments such as the machete, rajao, brinquinho and cavaquinho, which accompany traditional folkloric dances.
Emigrants from Madeira also influenced the creation of new musical instruments. In the 1880s, the ukulele was created, based on two small guitar-like instruments of Madeiran origin, the cavaquinho and the rajao. The ukulele was introduced to the Hawaiian Islands by Portuguese immigrants from Madeira and Cape Verde. Three immigrants in particular, Madeiran cabinet makers Manuel Nunes, José do Espírito Santo, and Augusto Dias, are generally credited as the first ukulele makers. Two weeks after they disembarked from the SS Ravenscrag in late August 1879, the Hawaiian Gazette reported that "Madeira Islanders recently arrived here, have been delighting the people with nightly street concerts."
Cuisine
Because of Madeira's geographic situation in the Atlantic Ocean, the island has an abundance of fish of various kinds. The most commonly consumed species, such as espada (black scabbardfish), bluefin tuna, white marlin, blue marlin, albacore, bigeye tuna, wahoo, spearfish and skipjack tuna, are caught along the coast of Madeira and feature in many local dishes. Espada is often served with banana. Bacalhau is also popular, as it is in Portugal.
There are many different meat dishes on Madeira, one of the most popular being espetada. Espetada is traditionally made of large chunks of beef rubbed in garlic, salt and bay leaf and marinated for 4 to 6 hours in Madeira wine, red wine vinegar and olive oil then skewered onto a bay laurel stick and left to grill over smouldering wood chips. These are so integral a part of traditional eating habits that a special iron stand is available with a T-shaped end, each branch of the "T" having a slot in the middle to hold a brochette (espeto in Portuguese); a small plate is then placed underneath to collect the juices. The brochettes are very long and have a V-shaped blade in order to pierce the meat more easily. It is usually accompanied with the local bread called bolo do caco.
Other popular dishes in Madeira include açorda, feijoada and carne de vinha d'alhos.
Traditional pastries in Madeira usually contain local ingredients, one of the most common being mel de cana, literally "sugarcane honey" (molasses). The traditional cake of Madeira is called Bolo de Mel, which translates as (Sugarcane) "Honey Cake" and according to custom, is never cut with a knife, but broken into pieces by hand. It is a rich and heavy cake. The cake commonly known as "Madeira cake" in England is named after Madeira wine.
Malasadas are a local confection which are mainly consumed during the Carnival of Madeira. Pastéis de nata, as in the rest of Portugal, are also very popular.
Milho frito is a popular dish in Madeira that is similar to the Italian dish polenta fritta. Açorda Madeirense is another popular local dish.
Madeira is known for the high quality of its cherimoya fruits. The Annona Festival is traditional and held annually in the parish of Faial. This event encourages the consumption of this fruit and its derivatives, such as liqueurs, puddings, ice cream and smoothies.
Beverages
Madeira is a fortified wine, produced in the Madeira Islands; varieties may be sweet or dry. It has a history dating back to the Age of Exploration when Madeira was a standard port of call for ships heading to the New World or East Indies. To prevent the wine from spoiling, neutral grape spirits were added. However, wine producers of Madeira discovered, when an unsold shipment of wine returned to the islands after a round trip, that the flavour of the wine had been transformed by exposure to heat and movement. Today, Madeira is noted for its unique winemaking process that involves heating the wine and deliberately exposing the wine to some levels of oxidation. Most countries limit the use of the term Madeira to those wines that come from the Madeira Islands, to which the European Union grants Protected designation of origin (PDO) status.
A local beer called Coral is produced by the Madeira Brewery, which dates from 1872. It has achieved 2 Monde Selection Grand Gold Medals, 24 Monde Selection Gold Medals and 2 Monde Selection Silver Medals. Other alcoholic drinks are also popular in Madeira, such as the locally created Poncha, Niquita, Pé de Cabra, Aniz, as well as Portuguese drinks such as Macieira Brandy, Licor Beirão.
Laranjada is a type of carbonated soft drink with an orange flavour, its name being derived from the Portuguese word laranja ("orange"). Launched in 1872, it was the first soft drink to be produced in Portugal, and it remains very popular to the present day. Brisa drinks, a brand name, are also very popular and come in a range of flavours.
There is a coffee culture in Madeira. As in mainland Portugal, popular coffee-based drinks include Garoto, Galão, Bica, Café com Cheirinho, Mazagran and Chinesa.
Sports
Football is the most popular sport in Madeira, and the island was the first place in Portugal to host a match, organised by British residents in 1875. The island is the birthplace of international star Cristiano Ronaldo and is home to two prominent Primeira Liga teams: C.S. Marítimo, the only island team to win a national championship, and C.D. Nacional.
As well as football, the island is also home to professional sports teams in basketball (CAB Madeira) and handball (Madeira Andebol SAD, runners-up in the 2019 European Challenge Cup). Madeira also hosted the 2003 World Handball Championship.
The annual Rally Vinho da Madeira is considered one of the biggest sporting events on the island whilst other popular sporting activities include golf at one of the island's two courses (plus one on Porto Santo), surfing, scuba diving, and hiking.
Sister provinces
Madeira Island has the following sister provinces:
Autonomous Region of Aosta Valley, Italy (1987)
Bailiwick of Jersey, Jersey (1998)
Eastern Cape Province, South Africa
Jeju Province, South Korea (2007)
Gibraltar (2009)
Postage stamps
Portugal has issued postage stamps for Madeira during several periods, beginning in 1868.
See also
"Have Some Madeira M'Dear"
Geology of Madeira
List of birds of Madeira
Madeira Islands Open, an annual European Tour golf tournament
Surfing in Madeira
Islands of Macaronesia
Azores
Cabo Verde
Canary Islands
References
Bibliography
External links
World History Encyclopedia - The Portuguese Colonization of Madeira
Madeira's Government Website
M16 rifle

The M16 rifle (officially designated Rifle, Caliber 5.56 mm, M16) is a family of military rifles adapted from the ArmaLite AR-15 rifle for the United States military. The original M16 rifle was a 5.56×45mm automatic rifle with a 20-round magazine.
In 1964, the M16 entered US military service and the following year was deployed for jungle warfare operations during the Vietnam War. In 1969, the M16A1 replaced the M14 rifle to become the US military's standard service rifle. The M16A1's modifications include a bolt-assist, chrome-plated bore and a 30-round magazine.
In 1983, the US Marine Corps adopted the M16A2 rifle and the US Army adopted it in 1986. The M16A2 fires the improved 5.56×45mm (M855/SS109) cartridge and has a newer adjustable rear sight, case deflector, heavy barrel, improved handguard, pistol grip and buttstock, as well as a semi-auto and three-round burst fire selector. Adopted in July 1997, the M16A4 is the fourth generation of the M16 series. It is equipped with a removable carrying handle and Picatinny rail for mounting optics and other ancillary devices.
The M16 has also been widely adopted by other armed forces around the world. Total worldwide production of M16s is approximately 8 million, making it the most-produced firearm of its 5.56 mm caliber. The US military has largely replaced the M16 in frontline combat units with a shorter and lighter version, the M4 carbine.
History
Background
In 1928, a U.S. Army 'Caliber Board' conducted firing tests at Aberdeen Proving Ground and recommended transitioning to smaller caliber rounds. Largely in deference to tradition, this recommendation was ignored and the Army referred to the .30 caliber as "full sized" for the next 35 years. After World War II, the United States military started looking for a single automatic rifle to replace the M1 Garand, M1/M2 Carbines, M1918 Browning Automatic Rifle, M3 "Grease Gun" and Thompson submachine gun. However, early experiments with select-fire versions of the M1 Garand proved disappointing. During the Korean War, the select-fire M2 carbine largely replaced the submachine gun in US service and became the most widely used carbine variant. However, combat experience suggested that the .30 Carbine round was underpowered. American weapons designers concluded that an intermediate round was necessary, and recommended a small-caliber, high-velocity cartridge.
However, senior American commanders, having faced fanatical enemies and experienced major logistical problems during World War II and the Korean War, insisted that a single, powerful .30 caliber cartridge be developed, that could not only be used by the new automatic rifle, but by the new general-purpose machine gun (GPMG) in concurrent development. This culminated in the development of the 7.62×51 mm NATO cartridge.
The U.S. Army then began testing several rifles to replace the obsolete M1. Springfield Armory's T44E4 and heavier T44E5 were essentially updated versions of the M1 chambered for the new 7.62 mm round, while Fabrique Nationale submitted their FN FAL as the T48. ArmaLite entered the competition late, hurriedly submitting several AR-10 prototype rifles in the fall of 1956 to the U.S. Army's Springfield Armory for testing. The AR-10 featured an innovative straight-line barrel/stock design, forged aluminum alloy receivers and phenolic composite stocks. It had rugged elevated sights, an oversized aluminum flash suppressor and recoil compensator, and an adjustable gas system. The final prototype featured an upper and lower receiver with the now-familiar hinge and takedown pins, and the charging handle was placed on top of the receiver inside the carry handle. For a 7.62 mm NATO rifle, the AR-10 was remarkably lightweight when empty. Initial comments by Springfield Armory test staff were favorable, and some testers commented that the AR-10 was the best lightweight automatic rifle ever tested by the Armory. In the end, the U.S. Army chose the T44, now named the M14 rifle, which was an improved M1 Garand with a 20-round magazine and automatic fire capability. The U.S. also adopted the M60 general-purpose machine gun (GPMG). Its NATO partners adopted the FN FAL and HK G3 rifles, as well as the FN MAG and Rheinmetall MG3 GPMGs.
The first confrontations between the AK-47 and the M14 came in the early part of the Vietnam War. Battlefield reports indicated that the M14 was uncontrollable in full-auto and that soldiers could not carry enough ammunition to maintain fire superiority over the AK-47. And, while the M2 carbine offered a high rate of fire, it was under-powered and ultimately outclassed by the AK-47. A replacement was needed: a medium between the traditional preference for high-powered rifles such as the M14, and the lightweight firepower of the M2 Carbine.
As a result, the Army was forced to reconsider a 1957 request by General Willard G. Wyman, commander of the U.S. Continental Army Command (CONARC), to develop a lightweight .223-inch caliber (5.56 mm) select-fire rifle fed from a 20-round magazine. The 5.56 mm round had to penetrate a standard U.S. helmet at 500 yards (460 meters) and retain a velocity in excess of the speed of sound, while matching or exceeding the wounding ability of the .30 Carbine cartridge.
This request ultimately resulted in the development of a scaled-down version of the ArmaLite AR-10, named the ArmaLite AR-15 rifle. In the late 1950s, designer Eugene Stoner was completing his work on the AR-15. The AR-15 used .22-caliber bullets, which destabilized when they hit a human body, as opposed to the .30 round, which typically passed through in a straight line. The smaller caliber meant that it could be controlled in autofire due to the reduced bolt thrust and free recoil impulse. Being almost one-third the weight of the .30 round meant that the soldier could sustain fire for longer with the same load. Due to design innovations, the AR-15 could fire 600 to 700 rounds a minute with an extremely low jamming rate. Parts were stamped out, not hand-machined, so they could be mass-produced, and the stock was plastic to reduce weight.
In 1958, the Army's Combat Developments Experimentation Command ran experiments with small squads in combat situations using the M14, AR-15, and another rifle designed by Winchester. The resulting study recommended adopting a lightweight rifle like the AR-15. In response, the Army declared that all rifles and machine guns should use the same ammunition, and ordered full production of the M14. However, advocates for the AR-15 gained the attention of Air Force Chief of Staff General Curtis LeMay. After testing the AR-15 with the ammunition manufactured by Remington that ArmaLite and Colt recommended, the Air Force declared that the AR-15 was its 'standard model' and ordered 8,500 rifles and 8.5 million rounds. Advocates for the AR-15 in the Defense Advanced Research Projects Agency acquired 1,000 Air Force AR-15s and shipped them to be tested by the Army of the Republic of Vietnam (ARVN). South Vietnamese soldiers issued glowing reports of the weapon's reliability, recording zero broken parts while firing 80,000 rounds in one stage of testing, and requiring only two replacement parts for the 1,000 weapons over the entire course of testing. The report of the experiment recommended that the U.S. provide the AR-15 as the standard rifle of the ARVN, but Admiral Harry Felt, then Commander in Chief, Pacific Forces, rejected the recommendations on the advice of the U.S. Army.
Throughout 1962 and 1963, the U.S. military extensively tested the AR-15. Positive evaluations emphasized its lightness, "lethality", and reliability. However, the Army Materiel Command criticized its inaccuracy and lack of penetrating power at longer ranges. In early 1963, the U.S. Special Forces asked, and was given permission, to make the AR-15 its standard weapon. Other users included Army Airborne units in Vietnam and some units affiliated with the Central Intelligence Agency. As more units adopted the AR-15, Secretary of the Army Cyrus Vance ordered an investigation into why the weapon had been rejected by the Army. The resulting report found that Army Materiel Command had rigged the previous tests, selecting tests that would favor the M14 and choosing match-grade M14s to compete against AR-15s straight out of the box. At this point, the bureaucratic battle lines were well-defined, with the Army ordnance agencies opposed to the AR-15 and the Air Force and civilian leadership of the Defense Department in favor.
In January 1963, Secretary of Defense Robert McNamara concluded that the AR-15 was the superior weapon system and ordered a halt to M14 production. In late 1963, the Defense Department began mass procurement of rifles for the Air Force and special Army units. Secretary McNamara designated the Army as the procurer for the weapon within the Department, which allowed the Army ordnance establishment to modify the weapon as it wished. The first modification was the addition of a "manual bolt closure," allowing a soldier to ram in a round if it failed to seat properly. The Air Force, which was buying the rifle, and the Marine Corps, which had tested it, both objected to this addition, with the Air Force noting, "During three years of testing and operation of the AR-15 rifle under all types of conditions the Air Force has no record of malfunctions that could have been corrected by a manual bolt closing device." They also noted that the closure added weight and complexity, reducing the reliability of the weapon. Colonel Howard Yount, who managed the Army procurement, would later state that the bolt closure was added after direction from senior leadership, rather than as a result of any complaint or test result, and testified about the reasons: "the M-1, the M-14, and the carbine had always had something for the soldier to push on; that maybe this would be a comforting feeling to him, or something."
After modifications, the new redesigned rifle was subsequently adopted as the M16 Rifle.
Despite its early failures the M16 proved to be a revolutionary design and stands as the longest continuously serving rifle in US military history. It has been adopted by many US allies and the 5.56×45 mm NATO cartridge has become not only the NATO standard, but "the standard assault-rifle cartridge in much of the world." It also led to the development of small-caliber high-velocity service rifles by every major army in the world. It is a benchmark against which other assault rifles are judged.
M16s were produced by Colt until the late 1980s, when FN Herstal began to manufacture them.
Adoption
In July 1960, General Curtis LeMay was impressed by a demonstration of the ArmaLite AR-15. In the summer of 1961, General LeMay was promoted to U.S. Air Force chief of staff, and requested 80,000 AR-15s. However, General Maxwell D. Taylor, chairman of the Joint Chiefs of Staff, advised President John F. Kennedy that having two different calibers within the military system at the same time would be problematic and the request was rejected. In October 1961, William Godel, a senior man at the Advanced Research Projects Agency, sent 10 AR-15s to South Vietnam. The reception was enthusiastic, and in 1962 another 1,000 AR-15s were sent. United States Army Special Forces personnel filed battlefield reports lavishly praising the AR-15 and the stopping-power of the 5.56 mm cartridge, and pressed for its adoption.
The damage caused by the 5.56 mm bullet was originally believed to be caused by "tumbling" due to the slow rifling twist rate. However, any pointed lead-core bullet will "tumble" after penetration in flesh, because the center of gravity is towards the rear of the bullet. The large wounds observed by soldiers in Vietnam were actually caused by bullet fragmentation, created by a combination of the bullet's velocity and construction. These wounds were so devastating that the photographs remained classified into the 1980s.
However, despite overwhelming evidence that the AR-15 could bring more firepower to bear than the M14, the Army opposed the adoption of the new rifle. U.S. Secretary of Defense Robert McNamara now had two conflicting views: the ARPA report favoring the AR-15 and the Army's position favoring the M14. Even President Kennedy expressed concern, so McNamara ordered Secretary of the Army Cyrus Vance to test the M14, the AR-15 and the AK-47. The Army reported that only the M14 was suitable for service, but Vance wondered about the impartiality of those conducting the tests. He ordered the Army inspector general to investigate the testing methods used; the inspector general confirmed that the testers were biased towards the M14.
In January 1963, Secretary McNamara received reports that M14 production was insufficient to meet the needs of the armed forces and ordered a halt to M14 production. At the time, the AR-15 was the only rifle that could fulfill a requirement of a "universal" infantry weapon for issue to all services. McNamara ordered its adoption, despite receiving reports of several deficiencies, most notably the lack of a chrome-plated chamber.
After modifications (most notably, the charging handle was re-located from under the carrying handle like the AR-10, to the rear of the receiver), the new redesigned rifle was renamed the Rifle, Caliber 5.56 mm, M16. Inexplicably, the modification to the new M16 did not include a chrome-plated barrel. Meanwhile, the Army relented and recommended the adoption of the M16 for jungle warfare operations. However, the Army insisted on the inclusion of a forward assist to help push the bolt into battery in the event that a cartridge failed to seat into the chamber. The Air Force, Colt and Eugene Stoner believed that the addition of a forward assist was an unjustified expense. As a result, the design was split into two variants: the Air Force's M16 without the forward assist, and the XM16E1 with the forward assist for the other service branches.
In November 1963, McNamara approved the U.S. Army's order of 85,000 XM16E1s; and to appease General LeMay, the Air Force was granted an order for another 19,000 M16s. In March 1964, the M16 rifle went into production and the Army accepted delivery of the first batch of 2,129 rifles later that year, and an additional 57,240 rifles the following year.
In 1964, the Army was informed that DuPont could not mass-produce the IMR 4475 stick powder to the specifications demanded by the M16. Therefore, the Olin Mathieson Company provided a high-performance ball propellant. While the Olin WC 846 powder achieved the desired muzzle velocity, it produced much more fouling, which quickly jammed the M16's action unless the rifle was cleaned well and often.
In March 1965, the Army began to issue the XM16E1 to infantry units. However, the rifle was initially delivered without adequate cleaning kits or instructions because advertising from Colt asserted that the M16's materials made the weapon require little maintenance, which was interpreted by some as meaning the rifle was self-cleaning. Furthermore, cleaning was often conducted with improper equipment, such as insect repellent, water, and aircraft fuel, which induced further wear on the weapon. As a result, reports of stoppages in combat began to surface. The most severe problem was known as "failure to extract"—the spent cartridge case remained lodged in the chamber after the rifle was fired. Documented accounts of dead U.S. troops found next to disassembled rifles eventually led to a Congressional investigation.
In February 1967, the improved XM16E1 was standardized as the M16A1. The new rifle had a chrome-plated chamber and bore to eliminate corrosion and stuck cartridges, and other minor modifications. New cleaning kits, powder solvents, and lubricants were also issued. Intensive training programs in weapons cleaning were instituted including a comic book-style operations manual. As a result, reliability problems greatly diminished and the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam.
In 1969, the M16A1 officially replaced the M14 rifle to become the U.S. military's standard service rifle. In 1970, the new WC 844 powder was introduced to reduce fouling.
Reliability
During the early part of its service, the M16 had a reputation for poor reliability and a malfunction rate of two per 1,000 rounds fired. The M16's action works by passing high-pressure propellant gases, tapped from the barrel, down a tube and into the carrier group within the upper receiver, and is commonly referred to as a "direct impingement gas system". The gas goes from the gas tube, through the bolt carrier key, and into the inside of the carrier, where it expands in a donut-shaped gas cylinder. Because the bolt is prevented from moving forward by the barrel, the carrier is driven to the rear by the expanding gases and thus converts the energy of the gas into movement of the rifle's parts. The back part of the bolt forms a piston head and the cavity in the bolt carrier is the piston sleeve. It is more correct to call it an internal piston system.
This design is much lighter and more compact than a gas-piston design. However, it requires that combustion byproducts from the discharged cartridge be blown into the receiver as well. The accumulating carbon and vaporized-metal build-up within the receiver and bolt carrier negatively affects reliability and necessitates more intensive maintenance on the part of the individual soldier. The channeling of gases into the bolt carrier during operation also increases the amount of heat deposited in the receiver while firing the M16 and causes essential lubricant to be "burned off". This requires frequent and generous applications of appropriate lubricant. Lack of proper lubrication is the most common source of weapon stoppages or jams.
The original M16 fared poorly in the jungles of Vietnam and was infamous for reliability problems in the harsh environment. Max Hastings was very critical of the M16's general field issue in Vietnam, just as grievous design flaws were becoming apparent. He further states that the Shooting Times experienced repeated malfunctions with a test M16 and assumed these would be corrected before military use, but they were not. Many Marines and soldiers were so angry about the reliability problems that they began writing home, and on March 26, 1967, the Washington Daily News broke the story. Eventually the M16 became the target of a Congressional investigation. The investigation found that:
The M16 was issued to troops without cleaning kits or instruction on how to clean the rifle.
The M16 and its 5.56×45 mm cartridge were tested and approved with the use of DuPont IMR8208M extruded powder, which was later switched to Olin Mathieson WC846 ball powder; the ball powder produced much more fouling, which quickly jammed the action of the M16 (unless the gun was cleaned well and often).
The M16 lacked a forward assist (rendering the rifle inoperable when the bolt failed to go fully forward).
The M16 lacked a chrome-plated chamber, which allowed corrosion problems and contributed to case extraction failures (which was considered the most severe problem and required extreme measures to clear, such as inserting the cleaning-rod down the barrel and knocking the spent cartridge out).
When these issues were addressed and corrected by the M16A1, the reliability problems decreased greatly. According to a 1968 Department of the Army report, the M16A1 rifle achieved widespread acceptance by U.S. troops in Vietnam. "Most men armed with the M16 in Vietnam rated this rifle's performance high, however, many men entertained some misgivings about the M16's reliability. When asked what weapon they preferred to carry in combat, 85 percent indicated that they wanted either the M16 or its [smaller] carbine-length version, the XM177E2." Also, "the M14 was preferred by 15 percent, while less than one percent wished to carry either the Stoner rifle, the AK-47, the carbine or a pistol." In March 1970, the "President's Blue Ribbon Defense Panel" concluded that the issuance of the M16 saved the lives of 20,000 U.S. servicemen during the Vietnam War, who would have otherwise died had the M14 remained in service. However, the M16 rifle's reputation continues to suffer.
Another underlying cause of the M16's jamming problem was identified by ordnance staff, who discovered that Stoner and the ammunition manufacturers had initially tested the AR-15 using DuPont IMR8208M extruded (stick) powder. Later, ammunition manufacturers adopted the more readily available Olin Mathieson WC846 ball powder. The ball powder produced a longer peak chamber pressure with undesired timing effects. Upon firing, the cartridge case expands and seals the chamber (obturation). When the peak pressure starts to drop, the cartridge case contracts and can then be extracted. With ball powder, the cartridge case had not contracted enough by the time of extraction, due to the longer peak pressure period. The extractor would then fail to pull the cartridge case free, tearing through the case rim and leaving an obturated case behind.
After the introduction of the M4 Carbine, it was found that the shorter barrel length of 14.5 inches also has a negative effect on reliability, as the gas port is located closer to the chamber than the gas port of the standard length M16 rifle: 7.5 inches instead of 13 inches. This affects the M4's timing and increases the amount of stress and heat on the critical components, thereby reducing reliability. In a 2002 assessment the USMC found that the M4 malfunctioned three times more often than the M16A4 (the M4 failed 186 times for 69,000 rounds fired, while the M16A4 failed 61 times). Thereafter, the Army and Colt worked to make modifications to the M4s and M16A4s in order to address the problems found. In tests conducted in 2005 and 2006 the Army found that on average, the new M4s and M16s fired approximately 5,000 rounds between stoppages.
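The 2002 USMC figures can be restated as mean rounds between stoppages, which makes the gap between the two weapons easier to compare. A back-of-the-envelope calculation (illustrative only, using just the round and stoppage counts quoted above) can be sketched in Python:

```python
# Figures from the 2002 USMC assessment cited above:
# 69,000 rounds fired per weapon type, 186 stoppages for the M4, 61 for the M16A4.
rounds_fired = 69_000
m4_stoppages = 186
m16a4_stoppages = 61

# Mean rounds between stoppages (MRBS) for each rifle.
m4_mrbs = rounds_fired / m4_stoppages
m16a4_mrbs = rounds_fired / m16a4_stoppages

print(f"M4: one stoppage every {m4_mrbs:.0f} rounds")          # ~371 rounds
print(f"M16A4: one stoppage every {m16a4_mrbs:.0f} rounds")    # ~1,131 rounds
print(f"Relative stoppage rate: {m4_stoppages / m16a4_stoppages:.1f}x")  # ~3.0x
```

The roughly 3:1 ratio matches the statement that the M4 malfunctioned about three times more often than the M16A4 in that assessment.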
In December 2006, the Center for Naval Analyses (CNA) released a report on U.S. small arms in combat. The CNA conducted surveys on 2,608 troops returning from combat in Iraq and Afghanistan over the past 12 months. Only troops who had fired their weapons at enemy targets were allowed to participate. 1,188 troops were armed with M16A2 or A4 rifles, making up 46 percent of the survey. 75 percent of M16 users (891 troops) reported they were satisfied with the weapon. 60 percent (713 troops) were satisfied with handling qualities such as handguards, size, and weight. Of the 40 percent dissatisfied, most were with its size. Only 19 percent of M16 users (226 troops) reported a stoppage, while 80 percent of those that experienced a stoppage said it had little impact on their ability to clear the stoppage and re-engage their target. Half of the M16 users experienced failures of their magazines to feed. 83 percent (986 troops) did not need their rifles repaired while in theater. 71 percent (843 troops) were confident in the M16's reliability, defined as level of soldier confidence their weapon will fire without malfunction, and 72 percent (855 troops) were confident in its durability, defined as level of soldier confidence their weapon will not break or need repair. Both factors were attributed to high levels of soldiers performing their own maintenance. 60 percent of M16 users offered recommendations for improvements. Requests included greater bullet lethality, new-built instead of rebuilt rifles, better quality magazines, decreased weight, and a collapsible stock. Some users recommended shorter and lighter weapons such as the M4 carbine. Some issues have been addressed with the issuing of the Improved STANAG magazine in March 2009, and the M855A1 Enhanced Performance Round in June 2010.
In early 2010, two journalists from The New York Times spent three months with soldiers and Marines in Afghanistan. While there, they questioned around 100 infantry troops about the reliability of their M16 rifles, as well as the M4 carbine. The troops did not report reliability problems with their rifles. While only 100 troops were asked, they engaged in daily fighting in Marja, including at least a dozen intense engagements in Helmand Province, where the ground is covered in fine powdered sand (called "moon dust" by troops) that can stick to firearms. Weapons were often dusty, wet, and covered in mud. Intense firefights lasted hours, with several magazines being expended. Only one soldier reported a jam, when his M16 was covered in mud after climbing out of a canal. The weapon was cleared and resumed firing with the next chambered round. Furthermore, the Marine Chief Warrant Officer responsible for weapons training and performance of the Third Battalion, Sixth Marines, reported that "We've had nil in the way of problems; we've had no issues" with his battalion's 350 M16s and 700 M4s.
Design
The M16 is a lightweight, 5.56 mm, air-cooled, gas-operated, magazine-fed assault rifle, with a rotating bolt. The M16's receivers are made of 7075 aluminum alloy, its barrel, bolt, and bolt carrier of steel, and its handguards, pistol grip, and buttstock of plastics.
The M16's internal piston action was derived from the original ArmaLite AR-10 and ArmaLite AR-15 actions. This system, designed by Eugene Stoner, is commonly called a direct impingement system, but it does not use a conventional direct impingement arrangement. In his patent, Stoner states: ″This invention is a true expanding gas system instead of the conventional impinging gas system.″
The gas system, bolt carrier, and bolt-locking design were novel for the time.
The M16A1 was especially lightweight at with a loaded 30-round magazine. This was significantly less than the M14 that it replaced at with a loaded 20-round magazine. It is also lighter when compared to the AKM's with a loaded 30-round magazine.
The M16A2 weighs loaded with a 30-round magazine, because of the adoption of a thicker barrel profile. The thicker barrel is more resistant to damage when handled roughly and is also slower to overheat during sustained fire. Unlike a traditional "bull" barrel that is thick its entire length, the M16A2's barrel is only thick forward of the handguards. The barrel profile under the handguards remained the same as the M16A1 for compatibility with the M203 grenade launcher.
Barrel
Early model M16 barrels had a rifling twist of four grooves, right-hand twist, one turn in 14 inches (1:355.6 mm or 64 calibers) bore, as this was the same rifling used by the .222 Remington sporting round. This was shown to make the light .223 Remington bullet yaw in flight at long ranges, and it was soon replaced. Later models had improved rifling with six grooves, right-hand twist, one turn in 12 inches (1:304.8 mm or 54.8 calibers), which increased accuracy and was optimized for firing the M193 ball and M196 tracer bullets. Current models are optimized for firing the heavier NATO SS109 ball and long L110 tracer bullets and have six grooves, right-hand twist, one turn in 7 in (1:177.8 mm or 32 calibers).
Weapons designed to accept both the M193 or SS109 rounds (like civilian market clones) usually have a six-groove, right-hand twist, one turn in 9 inches (1:228.6 mm or 41.1 calibers) bore, although 1:8 inches and 1:7 inches twist rates are available as well.
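The parenthetical conversions above follow directly from the definitions: a twist of one turn in N inches is 25.4 × N millimetres, and expressing it in calibers divides that length by the 5.56 mm bore diameter. A minimal sketch of the arithmetic (illustrative only, not part of any specification):

```python
# Convert a rifling twist rate ("one turn in N inches") to millimetres
# and to calibers for a nominal 5.56 mm bore, matching the figures above.
BORE_MM = 5.56  # nominal bore diameter in millimetres

def twist_mm(turn_in_inches):
    """Twist length in millimetres (1 inch = 25.4 mm)."""
    return turn_in_inches * 25.4

def twist_calibers(turn_in_inches, bore_mm=BORE_MM):
    """Twist length expressed in bore diameters (calibers)."""
    return twist_mm(turn_in_inches) / bore_mm

# e.g. 1:12 in -> 304.8 mm, ~54.8 calibers; 1:7 in -> 177.8 mm, ~32 calibers
```

Running the same conversion on the 1:9 inch twist gives the quoted 228.6 mm and ~41.1 calibers.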
Recoil
The M16 uses a "straight-line" recoil design, where the recoil spring is located in the stock directly behind the action, and serves the dual function of operating spring and recoil buffer. The stock being in line with the bore also reduces muzzle rise, especially during automatic fire. Because recoil does not significantly shift the point of aim, faster follow-up shots are possible and user fatigue is reduced. In addition, current model M16 flash-suppressors also act as compensators to reduce recoil further.
Notes: Free recoil is calculated by using the rifle weight, bullet weight, muzzle velocity, and charge weight. It is that which would be measured if the rifle were fired suspended from strings, free to recoil. A rifle's perceived recoil is also dependent on many other factors which are not readily quantified.
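As the note says, free recoil follows from momentum conservation: the rifle's rearward momentum equals the forward momentum of the bullet plus that of the propellant gases. A minimal sketch of that calculation; the input masses, velocity, and the gas-velocity factor are illustrative assumptions, not official figures:

```python
def free_recoil_energy_j(rifle_kg, bullet_kg, muzzle_v_ms,
                         charge_kg, gas_factor=1.5):
    """Free recoil energy in joules.

    Momentum balance: rifle recoil momentum equals bullet momentum plus
    propellant-gas momentum. Gas velocity is approximated here as
    gas_factor x muzzle velocity (a common rule of thumb, assumed).
    """
    momentum = bullet_kg * muzzle_v_ms + charge_kg * gas_factor * muzzle_v_ms
    recoil_v = momentum / rifle_kg          # rifle velocity if free to recoil
    return 0.5 * rifle_kg * recoil_v ** 2   # kinetic energy of the rifle

# Illustrative M16A1-like inputs (assumed): 3.6 kg rifle, 3.56 g (55 gr)
# bullet at ~975 m/s, ~1.6 g powder charge.
energy = free_recoil_energy_j(3.6, 0.00356, 975, 0.0016)
# roughly 4.7 J, consistent with the rifle's light recoil
```

A heavier rifle or lighter bullet lowers the result, which is why the in-line stock and small-caliber round together make follow-up shots easy.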
Sights
The M16's most distinctive ergonomic feature is the carrying handle and rear sight assembly on top of the receiver. This is a by-product of the original AR-10 design, where the carrying handle contained a rear sight that could be dialed in with an elevation wheel for specific range settings and also served to protect the charging handle.
The M16 carry handle also provided mounting-groove interfaces and a hole at the bottom of the handle groove for mounting a Colt 3×20 telescopic sight featuring a bullet drop compensation elevation adjustment knob for ranges from . This corresponds to the pre-M16A2 maximum effective range of . The Colt 3×20 telescopic sight was factory-adjusted to be parallax-free at . Artillerie-Inrichtingen in Delft, the Netherlands, produced a roughly similar 3×25 telescopic sight for the carrying handle mounting interfaces.
The M16's elevated iron sight line has a sight radius. As the M16 series' rear sight, front sight, and zeroing target designs were modified over time, and as non-iron-sight (optical) aiming devices and new service ammunition were introduced, zeroing procedures changed.
The standard pre-M16A2 "Daylight Sight System" uses an AR-15-style L-type flip, two-aperture rear sight featuring two combat settings: short-range and long-range, marked 'L'. The rear sight features a windage drum that can be adjusted during zeroing in about 1 MOA increments. The front sight is a tapered round post of approximately diameter, adjustable during zeroing in about 1 MOA increments. A cartridge or tool is required to (re)zero the sight line.
An alternative pre-M16A2 "Low Light Level Sight System" includes a front sight post with a weak light source provided by tritium radioluminescence in a small embedded glass vial, and a two-aperture rear sight consisting of a diameter aperture marked 'L' intended for normal use out to and a large diameter aperture for night firing. Regulation stipulates that the radioluminescent front sight post must be replaced if more than 144 months (12 years) have elapsed since manufacture. The "Low Light Level Sight System" elevation and windage adjustment increments are somewhat coarser than those of the "Daylight Sight System".
With the advent of the M16A2, a more elaborate, fully adjustable rear sight was added, allowing the rear sight to be dialed in with an elevation wheel for specific range settings between in 100 m increments, and allowing windage adjustments with a windage knob without the need of a cartridge or tool. The unmarked, approximately diameter aperture rear sight is for normal firing situations and zeroing and, with the elevation knob, for target distances up to 800 meters. The downsides of relatively small rear sight apertures are less light transmission through the aperture and a reduced field of view. A new, larger, approximately diameter aperture, marked '0-2' and featuring a windage setting index mark, offers a larger field of view during battle conditions and is used as a ghost ring for quick target engagement and during limited visibility. When flipped down, the engraved windage mark on top of the '0-2' aperture ring shows the dialed-in windage setting on a windage scale at the rear of the rear sight assembly. When the normal-use rear aperture sight is zeroed at 300 m with SS109/M855 ammunition, first used in the M16A2, the '0-2' rear sight is zeroed for 200 m.
The front sight post was widened to approximately diameter, became square, and became adjustable during zeroing in about 1.2 MOA increments.
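One minute of angle subtends roughly 29 mm at 100 m, so each 1 MOA sight click moves the point of impact proportionally with range. A quick sketch of that geometry (illustrative only, not a zeroing procedure):

```python
import math

def moa_shift_mm(range_m, clicks=1, moa_per_click=1.0):
    """Point-of-impact shift in millimetres for a number of sight
    clicks at a given range, with 1 MOA = 1/60 of a degree."""
    angle_rad = math.radians(clicks * moa_per_click / 60.0)
    return range_m * 1000.0 * math.tan(angle_rad)

# A single 1 MOA click moves impact ~29 mm at 100 m and ~87 mm at 300 m;
# a coarser 1.2 MOA front-post increment shifts ~105 mm at 300 m.
```

This is why coarser adjustment increments, as on the front sight post, still zero acceptably at combat ranges but matter more at longer distances.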
The M16A4 omitted the carrying handle and rear sight assembly on top of the receiver. Instead, it features a MIL-STD-1913 Picatinny railed flat-top upper receiver for mounting various optical sighting devices or a new detachable carrying handle and M16A2-style rear sight assembly.
The current U.S. Army and Air Force issue M4(A1) Carbine comes with the M68 Close Combat Optic and Back-up Iron Sight. The U.S. Marine Corps uses the 4×32 ACOG Rifle Combat Optic and the U.S. Navy uses the EOTech Holographic Weapon Sight.
Range and accuracy
The M16 rifle is considered to be very accurate for a service rifle. Its light recoil, high velocity, and flat trajectory allow shooters to take head shots out to 300 meters. Newer M16s use the newer M855 cartridge, increasing their effective range to 600 meters. They are more accurate than their predecessors and are capable of shooting 1–3-inch groups at 100 yards. "In Fallujah, Iraq Marines with ACOG-equipped M16A4s created a stir by taking so many head shots that until the wounds were closely examined, some observers thought the insurgents had been executed." The newest M855A1 EPR cartridge is even more accurate; during testing it "...has shown that, on average, 95 percent of the rounds will hit within an 8 × 8-inch (20.3 × 20.3 cm) target at 600 meters."
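Group sizes and angular accuracy are interchangeable: 1 MOA subtends about 1.047 inches at 100 yards, so a 1–3-inch group at 100 yards is roughly 1–3 MOA, and an 8-inch spread at 600 meters (about 656 yards) is roughly 1.2 MOA. A minimal conversion sketch (illustrative only):

```python
MOA_INCH_PER_100YD = 1.047  # 1 MOA subtends ~1.047 inches at 100 yards

def group_moa(group_inches, range_yards):
    """Angular size of a shot group, in minutes of angle."""
    return group_inches / (MOA_INCH_PER_100YD * range_yards / 100.0)

# A 3-inch group at 100 yd is ~2.9 MOA; 8 inches at ~656 yd is ~1.2 MOA.
```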
Note *: The effective range of a firearm is the maximum distance at which a weapon may be expected to be accurate and achieve the desired effect.
Note **: The horizontal range is the distance traveled by a bullet, fired from the rifle at a height of 1.6 meters and 0° elevation, until the bullet hits the ground.
Note ***: The lethal range is the maximum range of a small-arms projectile while still maintaining the minimum energy required to put a man out of action, which is generally believed to be 15 kilogram-meters (108 ft·lb). This is the equivalent of the muzzle energy of a .22LR handgun.
Note ****: The maximum range of a small-arms projectile is attained at about 30° elevation. This maximum range is only of safety interest, not for combat firing.
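The 15 kilogram-meter figure is in kilogram-force meters; converting with standard gravity (g = 9.80665 m/s²) reproduces the joule and foot-pound values quoted in the note. A quick check using only conversion constants:

```python
G = 9.80665                  # standard gravity, m/s^2
J_PER_FT_LBF = 1.3558179     # joules per foot-pound-force

def kgfm_to_joules(kgf_m):
    """Kilogram-force meters to joules."""
    return kgf_m * G

def kgfm_to_ftlbf(kgf_m):
    """Kilogram-force meters to foot-pound-force."""
    return kgfm_to_joules(kgf_m) / J_PER_FT_LBF

# 15 kgf*m is about 147 J, i.e. about 108 ft*lbf, matching the note.
```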
Terminal ballistics
The 5.56×45 mm cartridge had several advantages over the 7.62×51 mm NATO round used in the M14 rifle. It enabled each soldier to carry more ammunition and was easier to control during automatic or burst fire. The 5.56×45 mm NATO cartridge can also produce massive wounding effects when the bullet impacts at high speed and yaws ("tumbles") in tissue leading to fragmentation and rapid transfer of energy.
The original ammunition for the M16 was the 55-grain M193 cartridge. When fired from a barrel at ranges of up to , the thin-jacketed, lead-cored round traveled fast enough (above ) that the force of striking a human body would cause the round to yaw (or tumble) and fragment into about a dozen pieces of various sizes, thus creating wounds that were out of proportion to its caliber. These wounds were so devastating that many considered the M16 to be an inhumane weapon. As the 5.56 mm round's velocity decreases, so does the number of fragments that it produces. The 5.56 mm round does not normally fragment at distances beyond 200 meters or at velocities below 2500 ft/s, and its lethality becomes largely dependent on shot placement.
With the development of the M16A2, the new 62-grain M855 cartridge was adopted in 1983. The heavier bullet had more energy and was made with a steel core to penetrate Soviet body armor. However, this caused less fragmentation on impact and reduced effects against targets without armor, both of which lessened kinetic energy transfer and wounding ability. Some soldiers and Marines coped with this through training, with requirements to shoot vital areas three times to guarantee killing the target.
However, there have been repeated and consistent reports of the M855's inability to wound effectively (i.e., fragment) when fired from the short barreled M4 carbine (even at close ranges). The M4's 14.5-in. barrel length reduces muzzle velocity to about 2900 ft/s. This reduced wounding ability is one reason that, despite the Army's transition to short-barrel M4s, the Marine Corps has decided to continue using the M16A4 with its 20-inch barrel as the 5.56×45 mm M855 is largely dependent upon high velocity in order to wound effectively.
In 2003, the U.S. Army contended that the lack of lethality of the 5.56×45 mm was more a matter of perception than fact. With good shot placement to the head and chest, the target was usually defeated without issue. The majority of failures were the result of hitting the target in non-vital areas such as extremities. However, a minority of failures occurred in spite of multiple hits to the chest. In 2006, a study found that 20% of soldiers using the M4 Carbine wanted more lethality or stopping power. In June 2010, the U.S. Army announced it began shipping its new 5.56 mm, lead-free, M855A1 Enhanced Performance Round to active combat zones. This upgrade is designed to maximize performance of the 5.56×45 mm round, to extend range, improve accuracy, increase penetration and to consistently fragment in soft-tissue when fired from not only standard length M16s, but also the short-barreled M4 carbines. The U.S. Army has been impressed with the new M855A1 EPR round. A 7.62 NATO M80A1 EPR variant was also developed.
Magazines
The M16's magazine was meant to be a lightweight, disposable item. As such, it is made of pressed/stamped aluminum and was not designed to be durable. The M16 originally used a 20-round magazine, which was later replaced by a curved 30-round design; as a result, the magazine follower tends to rock or tilt, causing malfunctions. Many non-U.S. and commercial magazines have been developed to effectively mitigate these shortcomings (e.g., H&K's all-stainless-steel magazine, Magpul's polymer P-MAG, etc.).
Production of the 30-round magazine started in late 1967, but it did not fully replace the 20-round magazine until the mid-1970s. Standard USGI aluminum 30-round M16 magazines weigh empty and are long. The newer plastic magazines are about a half-inch longer. The newer steel magazines are about a half-inch longer and four ounces heavier. The M16's magazine has become the unofficial NATO STANAG magazine and is currently used by many Western nations in numerous weapon systems.
In 2009, the U.S. Military began fielding an "improved magazine" identified by a tan-colored follower. "The new follower incorporates an extended rear leg and modified bullet protrusion for improved round stacking and orientation. The self-leveling/anti-tilt follower minimizes jamming while a wider spring coil profile creates even force distribution. The performance gains have not added weight or cost to the magazines."
In July 2016, the U.S. Army introduced another improvement, the new Enhanced Performance Magazine, which it says will result in a 300% increase in reliability in the M4 Carbine. Developed by the United States Army Armament Research, Development and Engineering Center and the Army Research Laboratory in 2013, it is tan colored with blue follower to distinguish it from earlier, incompatible magazines.
Muzzle devices
Most M16 rifles have a barrel threaded in 1⁄2-28" threads to accept a muzzle device such as a flash suppressor or sound suppressor. The initial flash suppressor design had three tines or prongs and was designed to preserve the shooter's night vision by disrupting the flash. Unfortunately, it was prone to breakage and to becoming entangled in vegetation. The design was later changed to a closed end to avoid this and became known as the "A1" or "bird cage" flash suppressor on the M16A1. Eventually, on the M16A2 version of the rifle, the bottom port was closed to reduce muzzle climb and prevent dust from rising when the rifle was fired in the prone position. For these reasons, the U.S. military classifies the A2 flash suppressor as a compensator or muzzle brake, but it is more commonly known as the "GI" or "A2" flash suppressor.
The M16's Vortex Flash Hider weighs 3 ounces, is 2.25 inches long, and does not require a lock washer to attach to barrel. It was developed in 1984, and is one of the earliest privately designed muzzle devices. The U.S. military uses the Vortex Flash Hider on M4 carbines and M16 rifles. A version of the Vortex has been adopted by the Canadian Military for the Colt Canada C8 CQB rifle. Other flash suppressors developed for the M16 include the Phantom Flash Suppressor by Yankee Hill Machine (YHM) and the KX-3 by Noveske Rifleworks.
The threaded barrel allows sound suppressors with the same thread pattern to be installed directly to the barrel; however this can result in complications such as being unable to remove the suppressor from the barrel due to repeated firing on full auto or three-round burst. A number of suppressor manufacturers have designed "direct-connect" sound suppressors which can be installed over an existing M16's flash suppressor as opposed to using the barrel's threads.
Grenade launchers and shotguns
All current M16 type rifles can mount under-barrel 40 mm grenade-launchers, such as the M203 and M320. Both use the same 40 mm grenades as the older, stand-alone M79 grenade launcher. The M16 can also mount under-barrel 12 gauge shotguns such as KAC Masterkey or the M26 Modular Accessory Shotgun System.
Riot Control Launcher
The M234 Riot Control Launcher is an M16-series rifle attachment firing an M755 blank round. The M234 mounts on the muzzle, bayonet lug, and front sight post of the M16. It fires either the M734 64 mm Kinetic Riot Control or the M742 64 mm CSI Riot Control Ring Airfoil Projectiles. The latter produces a 4-to-5-foot tear gas cloud on impact. The main advantage of using Ring Airfoil Projectiles is that their design does not allow them to be thrown back by rioters with any real effect. The M234 is no longer used by U.S. forces. It has been replaced by the M203 40 mm grenade launcher and nonlethal ammunition.
Bayonet
The M16 is 44.25 inches (1124 mm) long with an M7 bayonet attached. The M7 bayonet is based on earlier designs such as the M4, M5, and M6 bayonets, all of which are direct descendants of the M3 Fighting Knife and have a spear-point blade with a half-sharpened secondary edge. The newer M9 bayonet has a clip-point blade with saw teeth along the spine, and can be used as a multi-purpose knife and wire-cutter when combined with its scabbard. The current USMC OKC-3S bayonet bears a resemblance to the Marines' iconic Ka-Bar fighting knife, with serrations near the handle.
Bipod
For use as an ad-hoc automatic rifle, the M16 and M16A1 could be equipped with the XM3 bipod, later standardized as the Bipod, M3 (1966) and Rifle Bipod M3 (1983). Weighing only 0.6 lb, the simple and non-adjustable bipod clamps to the barrel of the rifle to allow for supported fire.
The M3 bipod continues to be referenced in at least one official manual as late as 1985, where it is stated that one of the most stable firing positions is "the prone biped [sic] supported for automatic fire."
NATO standards
In March 1970, the U.S. recommended that all NATO forces adopt the 5.56×45 mm cartridge. This shift represented a change in the philosophy of the military's long-held position about caliber size. By the mid-1970s, other armies were looking at M16-style weapons. A NATO standardization effort soon started, and tests of various rounds were carried out starting in 1977. The U.S. offered the 5.56×45 mm M193 round, but there were concerns about its penetration in the face of the wider introduction of body armor. In the end, the Belgian 5.56×45 mm SS109 round was chosen (STANAG 4172) in October 1980. The SS109 round was based on the U.S. cartridge but included a new, stronger, heavier, 62-grain bullet design, with better long-range performance and improved penetration (specifically, to consistently penetrate the side of a steel helmet at 600 meters). Due to its design and lower muzzle velocity (about 3110 ft/s), the Belgian SS109 round is considered more humane because it is less likely to fragment than the U.S. M193 round. The NATO 5.56×45 mm standard ammunition produced for U.S. forces is designated M855.
In October 1980, shortly after NATO accepted the 5.56×45 mm NATO rifle cartridge, Draft Standardization Agreement 4179 (STANAG 4179) was proposed to allow NATO members to easily share rifle ammunition and magazines down to the individual soldier level. The magazine chosen to become the STANAG magazine was originally designed for the U.S. M16 rifle. Many NATO member nations, but not all, subsequently developed or purchased rifles with the ability to accept this type of magazine. However, the standard was never ratified and remains a 'Draft STANAG'.
All current M16 type rifles are designed to fire STANAG 22 mm rifle grenades from their integral flash hiders without the use of an adapter. These 22 mm grenade types range from anti-tank rounds to simple finned tubes with a fragmentation hand grenade attached to the end. They come in a "standard" type, propelled by a blank cartridge inserted into the chamber of the rifle, and in "bullet trap" and "shoot through" types, which, as their names imply, use live ammunition. The U.S. military does not generally use rifle grenades; however, they are used by other nations.
The NATO Accessory Rail STANAG 4694, or Picatinny rail STANAG 2324, or a "Tactical Rail" is a bracket used on M16 type rifles to provide a standardized mounting platform. The rail comprises a series of ridges with a T-shaped cross-section interspersed with flat "spacing slots". Scopes are mounted either by sliding them on from one end or the other; by means of a "rail-grabber" which is clamped to the rail with bolts, thumbscrews or levers; or onto the slots between the raised sections. The rail was originally for scopes. However, once established, the use of the system was expanded to other accessories, such as tactical lights, laser aiming modules, night vision devices, reflex sights, foregrips, bipods, and bayonets.
Currently, the M16 is in use by 15 NATO countries and more than 80 countries worldwide.
Variants
M16
This was the first M16 variant adopted operationally, originally by the U.S. Air Force. It was equipped with triangular handguards, butt stocks without a compartment for the storage of a cleaning kit, a three-pronged flash suppressor, and fully automatic fire, and had no forward assist. Bolt carriers were originally chrome-plated and slick-sided, lacking forward assist notches. Later, the chrome-plated carriers were dropped in favor of Army-issued notched and parkerized carriers, though the interior portion of the bolt carrier is still chrome-lined.
The barrel rifling had a 1:12 (305 mm) twist rate to adequately stabilize M193 ball and M196 tracer ammunition.
The Air Force continued to operate these weapons until around 2001, at which time the Air Force converted all of its M16s to the M16A2 configuration.
The M16 was also adopted by the British SAS, who used it during the Falklands War.
XM16E1 and M16A1 (Colt Model 603)
The U.S. Army XM16E1 was essentially the same weapon as the M16 with the addition of a forward assist and corresponding notches in the bolt carrier. The M16A1 was the finalized production model in 1967 and was produced until 1982.
To address issues raised by the XM16E1's testing cycle, a closed, bird-cage flash suppressor replaced the XM16E1's three-pronged flash suppressor which caught on twigs and leaves. Various other changes were made after numerous problems in the field. Cleaning kits were developed and issued while barrels with chrome-plated chambers and later fully lined bores were introduced.
With these and other changes, the malfunction rate slowly declined, and new soldiers were generally unfamiliar with the early problems. A rib was built into the side of the receiver on the XM16E1 to help prevent accidentally pressing the magazine release button while closing the ejection port cover. This rib was later extended on production M16A1s to further help prevent the magazine release from being pressed inadvertently. The hole in the bolt that accepts the cam pin was crimped inward on one side, in such a way that the cam pin cannot be inserted with the bolt installed backwards, which would cause failures to eject until corrected. The M16A1 saw limited use in training capacities until the early 2000s but is no longer in active service with the U.S., although it is still standard issue in many armies around the world.
M16A2
The development of the M16A2 rifle was originally requested by the United States Marine Corps as a result of combat experience in Vietnam with the XM16E1 and M16A1. It was officially adopted by the Department of Defense as the "US Rifle, 5.56 mm, M16A2" in 1982. The Marines were the first branch of the U.S. Armed Forces to adopt it, in the early/mid-1980s, with the United States Army following suit in the late 1980s. The weapon's reliability allowed it to be widely used around the Marine Corps' special operations divisions as well.
Modifications to the M16A2 were extensive. In addition to the then new STANAG 4172 5.56×45mm NATO chambering and its accompanying rifling, the barrel was made with a greater thickness in front of the front sight post, to resist bending in the field and to allow a longer period of sustained fire without overheating. The rest of the barrel was maintained at the original thickness to enable the M203 grenade launcher to be attached.
The barrel rifling was revised to a faster 1:7 (178 mm) twist rate to adequately stabilize the new 5.56×45 mm NATO SS109/M855 ball and L110/M856 tracer ammunition. The heavier longer SS109/M855 bullet reduced muzzle velocity from , to about .
A new adjustable rear sight was added, allowing the rear sight to be dialed in for specific range settings between 300 and 800 meters to take full advantage of the ballistic characteristics of the SS109/M855 rounds and to allow windage adjustments without the need of a tool or cartridge. The flash suppressor was again modified, this time to be closed on the bottom so it would not kick up dirt or snow when being fired from the prone position, and acting as a recoil compensator.
A spent case deflector was incorporated into the upper receiver immediately behind the ejection port to prevent (hot) cartridge cases from striking left-handed users. The action was also modified, replacing the fully automatic setting with a three-round burst setting. When using a fully automatic weapon, inexperienced troops often hold down the trigger and "spray" when under fire. The U.S. Army concluded that three-shot groups provide an optimum combination of ammunition conservation, accuracy, and firepower. The USMC has retired the M16A2 in favor of the newer M16A4; a few M16A2s remain in service with the U.S. Army Reserve and National Guard, Air Force, Navy and Coast Guard.
The handguard was modified from the original triangular shape to a round one, which better fit smaller hands and could be fitted to older models of the M16. The new handguards were also symmetrical so armories need not separate left- and right-hand spares. The handguard retention ring was tapered to make it easier to install and uninstall the handguards. A notch for the middle finger was added to the pistol grip, as well as more texture to enhance the grip. The buttstock was lengthened by . The new buttstock became ten times stronger than the original due to advances in polymer technology since the early 1960s. Original M16 stocks were made from cellulose-impregnated phenolic resin; the newer stocks were engineered from DuPont Zytel glass-filled thermoset polymers. The new stock included a fully textured polymer buttplate for better grip on the shoulder, and retained a panel for accessing a small compartment inside the stock, often used for storing a basic cleaning kit.
M16A3
The M16A3 is a modified version of the M16A2 adopted in small numbers by the U.S. Navy SEAL, Seabee, and Security units. It features the M16A1 trigger group providing "safe", "semi-automatic" and "fully automatic" modes instead of the A2's "safe", "semi-automatic", and "three-round burst" modes. Otherwise it is externally identical to the M16A2.
M16A4
The M16A4 is the fourth generation of the M16 series. The iron sight/carrying handle assembly on the M16A2/M16A3 upper receiver was replaced by a MIL-STD-1913 "Picatinny railed" flat-top upper receiver for mounting aiming optics or a removable iron sight/carrying handle assembly. The M16A4 rear aperture sights, integrated in the Picatinny rail-mounted carry handle assembly, are adjustable from 300 m (328 yd) up to 600 m (656 yd), whereas the otherwise similar M16A2 iron sight line reaches up to 800 m (875 yd). The FN M16A4, using safe/semi/three-round burst selective fire, became standard issue for the U.S. Marine Corps.
Military issue rifles were also equipped with a full-length quad Knight's Armament Company M5 RAS Picatinny railed handguard (which holds zero on the top rail), allowing vertical grips, lasers, tactical lights, and other accessories to be attached, coining the designation M16A4 MWS (or Modular Weapon System) in U.S. Army field manuals.
Colt also produces M16A4 models for international purchases:
R0901 / RO901/ NSN 1005-01-383-2872 (Safe/Semi/Auto)
R0905 / RO905 (Safe/Semi/Burst)
A study of significant changes to Marine M16A4 rifles released in February 2015 outlined several new features that could be added from inexpensive and available components. Those features included: a muzzle compensator in place of the flash suppressor to manage recoil and allow for faster follow-on shots, though at the cost of noise and flash signature and potential overpressure in close quarters; a heavier and/or free-floating barrel to increase accuracy from 4.5 MOA (minutes of angle) to potentially 2 MOA; changing the reticle on the Rifle Combat Optic from the chevron shape to a semi-circular reticle with a dot at the center, as used in the M27 IAR's Squad Day Optic, so as not to obscure the target at long distance; using a trigger group with a more consistent pull force, and even a reconsideration of the burst capability; and the addition of ambidextrous charging handles and bolt catch releases for easier use by left-handed shooters.
In 2014, Marine units were provided with a limited number of adjustable stocks in place of the traditional fixed stock for their M16A4s to issue to smaller Marines who would have trouble comfortably reaching the trigger when wearing body armor. The adjustable stocks were added as a standard authorized accessory, meaning units can use operations and maintenance funds to purchase more if needed.
The Marine Corps had long maintained the full-length M16 as their standard infantry rifle, but in October 2015 the switch to the M4 carbine was approved as the standard-issue weapon, giving Marine infantry a smaller and more compact weapon. Enough M4s were already in the inventory to re-equip all necessary units by September 2016, and M16A4s were moved to support and non-infantry Marines.
M16S1
In the 1970s, Singapore was looking for an assault rifle for the Singapore Armed Forces and chose both the M16 and ArmaLite AR-15. Since importing M16s from the US would be difficult, they made their own copies of the M16, designated M16S1; "S" stood for Singapore. It was replaced by the SAR 21, which was introduced during 1999 and 2000, but is still kept for reserve forces.
Summary of differences
Derivatives
Colt Commando (AKA: XM177 & GAU-5)
In Vietnam, some soldiers were issued a carbine version of the M16 named the XM177. The XM177 had a shorter barrel and a telescoping stock, which made it substantially more compact. It also possessed a combination flash hider/sound moderator to reduce problems with muzzle flash and loud report. The Air Force's GAU-5/A (XM177) and the Army's XM177E1 variants differed in the latter's inclusion of a forward assist, although some GAU-5s do have the forward assist. The final Air Force GAU-5/A and Army XM177E2 had an barrel with a longer flash/sound suppressor. The lengthening of the barrel was to support the attachment of Colt's own XM148 40 mm grenade launcher. These versions were also known as the Colt Commando model, commonly referenced and marketed as the CAR-15. The variants were issued in limited numbers to special forces, helicopter crews, Air Force pilots, Air Force Security Police Military Working Dog (MWD) handlers, officers, radio operators, artillerymen, and troops other than front-line riflemen. Some USAF GAU-5A/As were later equipped with even longer 1:12-rifled barrels as the two shorter versions wore out. The barrel allowed the use of MILES gear and allowed bayonets to be used with the submachine guns (as the Air Force described them). By 1989, the Air Force started to replace the earlier barrels with 1:7-rifled models for use with the M855 round. The weapons were given the redesignation GUU-5/P.
These were used by the British Special Air Service during the Falklands War.
M4 carbine
The M4 carbine was developed from various outgrowths of these designs, including a number of shorter-barreled A1-style carbines. The XM4 (Colt Model 720) started its trials in 1984, and the weapon became the M4 in 1991. Officially adopted as a replacement for the M3 "Grease Gun" (and the Beretta M9 and M16A2 for select troops) in 1994, it was used with great success in the Balkans and in more recent conflicts, including the Afghanistan and Iraq theaters. The M4 carbine has a three-round burst firing mode, while the M4A1 carbine has a fully automatic firing mode. Both have a Picatinny rail on the upper receiver, allowing the carry handle/rear sight assembly to be replaced with other sighting devices.
M4 Commando
Colt also returned to the original "Commando" idea, with its Model 733, essentially a modernized XM177E2 with many of the features introduced on the M16A2.
Diemaco C7 and C8
The Diemaco C7 and C8 are updated variants of the M16 developed and used by the Canadian Forces and now manufactured by Colt Canada. The C7 is a further development of the experimental M16A1E1. Like earlier M16s, it can be fired in either semi-automatic or automatic mode, instead of the burst function selected for the M16A2. The C7 also features the structural strengthening, improved handguards, and longer stock developed for the M16A2. Diemaco changed the trapdoor in the buttstock to make it easier to access, and a buttstock spacer is available to adjust stock length to user preference. The most easily noticeable external difference between American M16A2s and Diemaco C7s is the retention of the A1-style rear sights. Less apparent is Diemaco's use of hammer-forged barrels. The Canadians had originally wanted a heavy barrel profile instead.
The C7 has been developed to the C7A1, with a Weaver rail on the upper receiver for a C79 optical sight, and to the C7A2, with different furniture and internal improvements. The Diemaco produced Weaver rail on the original C7A1 variants does not meet the M1913 "Picatinny" standard, leading to some problems with mounting commercial sights. This is easily remedied with minor modification to the upper receiver or the sight itself. Since Diemaco's acquisition by Colt to form Colt Canada, all Canadian produced flattop upper receivers are machined to the M1913 standard.
The C8 is the carbine version of the C7. The C7 and C8 are also used by Hærens Jegerkommando, Marinejegerkommandoen and FSK (Norway), the Military of Denmark (all branches), and the Royal Netherlands Army and Netherlands Marine Corps as their main infantry weapon. Following trials, C8 variants became the weapon of choice of the British SAS.
Mk 4 Mod 0
The Mk 4 Mod 0 was a variant of the M16A1 produced for the U.S. Navy SEALs during the Vietnam War and adopted in April 1970. It differed from the basic M16A1 primarily in being optimized for maritime operations and coming equipped with a sound suppressor. Most of the operating parts of the rifle were coated in Kal-Guard, a drainage hole was drilled through the stock and buffer tube, and an O-ring was added to the end of the buffer assembly. The weapon could reportedly be carried to a depth of 200 feet (60 m) in water without damage. The initial Mk 2 Mod 0 Blast Suppressor was based on the U.S. Army's Human Engineering Lab's (HEL) M4 noise suppressor. The HEL M4 vented gas directly from the action, requiring a modified bolt carrier. A gas deflector was added to the charging handle to prevent gas from contacting the user. The HEL M4 suppressor was therefore permanently mounted, though it allowed normal semi-automatic and automatic operation. If the HEL M4 suppressor were removed, the weapon would have to be manually loaded after each single shot. By contrast, the Mk 2 Mod 0 blast suppressor was considered an integral part of the Mk 4 Mod 0 rifle, but the rifle would function normally if the suppressor were removed. The Mk 2 Mod 0 blast suppressor also drained water much more quickly and did not require any modification to the bolt carrier or to the charging handle. In the late 1970s, the Mk 2 Mod 0 blast suppressor was replaced by the Mk 2 blast suppressor made by Knight's Armament Company (KAC). The KAC suppressor can be fully submerged and water will drain out in less than eight seconds. It will operate without degradation even if the rifle is fired at the maximum rate of fire. The U.S. Army replaced the HEL M4 with the much simpler Studies in Operational Negation of Insurgency and Counter-Subversion (SIONICS) MAW-A1 noise and flash suppressor.
US Navy Mk 12 Special Purpose Rifle
To increase the effective range of soldiers in the designated marksman role, the U.S. Navy developed the Mark 12 Special Purpose Rifle (SPR). Configurations in service vary, but the core of the Mark 12 SPR is an 18" heavy barrel with muzzle brake and free-float tube. This tube relieves pressure on the barrel caused by standard handguards and greatly increases the potential accuracy of the system. Also common are higher-magnification optics, ranging from the 6× Trijicon ACOG to Leupold Mark 4 Tactical rifle scopes. Firing Mark 262 Mod 0 ammunition with a 77 gr open-tip match bullet, the system has an official effective range of 600+ meters; however, published reports of confirmed kills beyond 800 m from Iraq and Afghanistan were not uncommon.
M231 Firing Port Weapon (FPW)
The M231 Firing Port Weapon (FPW) is an adapted version of the M16 assault rifle for firing from ports on the M2 Bradley. The infantry's normal M16s are too long for use in a "buttoned up" fighting vehicle, so the FPW was developed to provide a suitable weapon for this role.
Colt Model 655 and 656 "Sniper" variants
With the expanding Vietnam War, Colt developed two rifles of the M16 pattern for evaluation as possible light sniper or designated marksman rifles. The Colt Model 655 M16A1 Special High Profile was essentially a standard A1 rifle with a heavier barrel and a scope bracket that attached to the rifle's carry handle. The Colt Model 656 M16A1 Special Low Profile had a special upper receiver with no carrying handle. Instead, it had a low-profile iron sight adjustable for windage and a Weaver base for mounting a scope, a precursor to the Colt and Picatinny rails. It also had a hooded front iron sight in addition to the heavy barrel. Both rifles came standard with a Leatherwood/Realist 3–9× Adjustable Ranging Telescope. Some were fitted with a Sionics noise and flash suppressor. Neither rifle was ever standardized.
These weapons can be seen in many ways to be predecessors of the U.S. Army's SDM-R and the USMC's SAM-R weapons.
Others
The Chinese Norinco CQ is an unlicensed derivative of the M16A1 made specifically for export, with the most obvious external differences being in its handguard and revolver-style pistol grip.
The ARMADA rifle (a copy of the Norinco CQ) and TRAILBLAZER carbine (a copy of the Norinco CQ Type A) are manufactured by S.A.M. – Shooter's Arms Manufacturing, a.k.a. Shooter's Arms Guns & Ammo Corporation, headquartered in Metro Cebu, Republic of the Philippines.
The S-5.56 rifle, a clone of the Type CQ, is manufactured by the Defense Industries Organization of Iran. The rifle is offered in two variants: the S-5.56 A1 with a 19.9-inch barrel and 1:12 pitch rifling (1 turn in 305 mm), optimized for the M193 Ball cartridge; and the S-5.56 A3 with a 20-inch barrel and 1:7 pitch rifling (1 turn in 177.8 mm), optimized for the SS109 cartridge.
The KH-2002 is an Iranian bullpup conversion of the locally produced S-5.56 rifle. Iran intends to replace the standard issue weapon of its armed forces with this rifle.
The Terab rifle is a copy of the DIO S-5.56 manufactured by the Military Industry Corporation of Sudan.
The M16S1 is the M16A1 rifle made under license by ST Kinetics in Singapore. It was the standard issue weapon of the Singapore Armed Forces. It is being replaced by the newer SAR 21 in most branches. It is, in the meantime, the standard issue weapon in the reserve forces.
The MSSR rifle is a sniper rifle developed by the Philippine Marine Corps Scout Snipers that serves as their primary sniper weapon system.
The Special Operations Assault Rifle (SOAR) assault carbine was developed by Ferfrans based on the M16 rifle. It is used by the Special Action Force of the Philippine National Police.
Taiwan uses piston-driven M16-based weapons as their standard rifle. These include the T65, T86 and T91 assault rifles.
Ukraine has announced plans in January 2017 for Ukroboronservis and Aeroscraft to produce the M16 WAC47, an accurized M4 variation that uses standard 7.62×39 mm AK-47 magazines.
As of November 2019, no such weapon had been produced.
New Zealand has adopted the Lewis Machine and Tool Company's upgraded version of the M16 system to replace the Steyr AUG. This CQB16 rifle, named MARS-L (Modular Ambidextrous Rifle System-Light), was fielded beginning in 2017.
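The rifling twist rates quoted above for the DIO S-5.56 (1:12 and 1:7, denominated in inches) correspond to the metric figures given in parentheses via a simple unit conversion. A minimal Python sketch, purely illustrative (the function name is an assumption, not from any source):

```python
# Convert a rifling twist rate of "1 turn in N inches" to millimetres per turn.
MM_PER_INCH = 25.4

def twist_in_mm(inches_per_turn: float) -> float:
    """Length of one rifling turn, in millimetres, for a 1:N-inch twist."""
    return inches_per_turn * MM_PER_INCH

print(round(twist_in_mm(12), 1))  # 1:12 twist -> 304.8 mm (quoted as ~305 mm)
print(round(twist_in_mm(7), 1))   # 1:7 twist -> 177.8 mm
```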
Production and users
The M16 is the most commonly manufactured 5.56×45 mm rifle in the world. Currently, the M16 is in use by 15 NATO countries and more than 80 countries worldwide. Together, numerous companies in the United States, Canada, and China have produced more than 8,000,000 rifles of all variants, approximately 90% of which are still in operation. The M16 replaced both the M14 rifle and M2 carbine as the standard infantry rifle of the U.S. armed forces, although the M14 continues to see limited service, mostly in sniper, designated marksman, and ceremonial roles.
Users
: Taliban forces use M16A2 and M16A4 rifles previously supplied to the Afghan National Army. Also in use with the Badri 313 Battalion.
: Special Forces used the M16A1 in the Falklands War; the M16A2 is currently used by all branches of the Armed Forces.
: M16A4, used by the special forces and State Border Service (DSX).
: M16A1
: M16A2s used by Brazilian Marine Corps
M16A2 is used by the Royal Brunei Armed Forces as their main service rifle.
: Burundian rebels
M16A1
: C7 and C8 variants made by Colt Canada are used by the Canadian Forces.
M16A1 used by Chilean Marine Corps.
Democratic Forces for the Liberation of Rwanda
M16A2
M16A1/A2/A3/A4
Ex-U.S. M16A1s
: Used by counter-terrorism and special operations forces
M16A2
M16A2/A3/A4/M4 is used by the Special Forces of the Hellenic Army ISAF Forces in Afghanistan, Hellenic Air Force and the Hellenic Navy.
M16A1/M16A2.
M16A1
: M16A1 is used by Western Army Infantry Regiment along with Howa Type 89 rifles.
M16A1/A
M16A1/A2.
M16A1/A2/A4.
M16A2
: Lithuanian Armed Forces
Malaysian Armed Forces, Royal Johor Military Force, Royal Malaysia Police, Malaysian Maritime Enforcement Agency and RELA Corps.
: M16A2 is used by the Mexican Marines in the Mexican Drug War.
: Compagnie des Carabiniers du Prince
M16A1/M16A2/M16A3/M16A4
M16A2 and M16A4; captured M16A2 were also used by Maoist rebels of the People's Liberation Army, Nepal during the Nepalese Civil War.
: C7 and C8 variants are used by the Military of the Netherlands and LSW is used by Netherlands Marine Corps.
: Used by the National Police of Nicaragua and army.
: M16A1 (probably unlicensed copies) used by KPA special forces. Used during the Gangneung incident in 1996.
M16A1
: Used by Palestinian Security Forces and various local militant forces.
M16A1.
M16A2.
: Used by Bougainville Revolutionary Army. Captured from Papua New Guinea Defence Force.
M16A2.
: Manufactured under license by Elisco Tool and Manufacturing. M16A1s and M653Ps in use. Supplemented in Special Forces by the M4 carbine.
: The Polish military unit GROM used civilian M4 clones, the Bushmaster XM15E2S M4A3 and the KAC SR-16 carbine, as its basic weapon. Since 2008, they have been replaced by the HK416 rifle.
: A small number of M16A2s are used by the Special Actions Detachment of the Portuguese Navy.
M16A1.
: M16A1 and M16A2
: 1,000+ M16A1s in use
: Local variant of the M16A1 (M16S1) manufactured under license by ST Kinetics.
: Used by Special Forces. Likely received from Moroccan stocks.
: During the Vietnam War, the U.S. provided 27,000 M16 rifles to the Republic of Korea Armed Forces in Vietnam. Also, 600,000 M16A1s (Colt Model 603K) were manufactured under license by Daewoo Precision Industries with deliveries from 1974 to 1985. KATUSA (Korean Augmentation to the U.S. Army) soldiers who serve in the U.S. Army use the M16A2.
A small number of M16A2s are used by the Swedish Armed Forces for familiarization training, as well as a similar number of AKMs, but they are not issued to combat units. The Ak 4 and Ak 5 rifles are used by Swedish Army.
M16A1, as well as indigenous Type 65/65K1/65K2, Type 86 and Type 91 (with AR-18 style gas piston system).
M16A1/A2/A4. An XM177 replica called the Type 49 carbine (ปลส.49) is used in the South Thailand insurgency.
M16A2/A4.
M16A1/A2/A4.
: One of the first military customers; the UK purchased early AR-15s for jungle warfare during the Indonesia–Malaysia confrontation. The Colt Canada C8 (L119A1/L119A2) variant is used by Royal Military Police Close Protection Units, the Pathfinder Group, United Kingdom Special Forces and 43 Commando Fleet Protection Group Royal Marines.
: Obtained from South Vietnam following the Vietnam War. Over 946,000 M16s were captured in 1975 alone.
Non-state users
East Indonesia Mujahideen
Bangsamoro Islamic Freedom Fighters
Maute Group
Kurdistan Workers' Party
New People's Army: Captured from AFP and PNP, supplied by sympathizers, or purchased from the black market.
Viet Cong: Captured from U.S. and ARVN forces.
Former users
Islamic Republic of Afghanistan: Standard issue rifle of the Afghan National Army. Colt Canada C7 variants also saw limited service.
M16A1 introduced during the Vietnam War and replaced by the F88 Austeyr in 1989.
Bangsamoro Republik
FARC
Free Aceh Movement
: M16A2 variant. Used by the Royal Hong Kong Regiment.
: Received from the US government during the Vietnam War and Laotian Civil War.
Moro Islamic Liberation Front
M16; replaced in 1988 by Steyr AUG, which was being replaced with a non-Colt M16 variant in 2016.
Provisional IRA – received a number of M16s during The Troubles in Northern Ireland.
: M16A1
: 6,000 M16 and 938,000 M16A1, 1966–1975
Conflicts
1960s
Vietnam War (1955–1975)
Laotian Civil War (1959–1975)
Indonesia–Malaysia confrontation (1963–1966)
The Troubles (Late 1960s–1998)
Colombian conflict (1964–present)
Rhodesian Bush War (1964–1979)
Communist insurgency in Thailand (1965–1983)
Cambodian Civil War (1968–1975)
Communist insurgency in Malaysia (1968–1989)
Moro conflict (1969–2019)
Communist rebellion in the Philippines (1969–present)
1970s
Yom Kippur War (1973)
Lebanese Civil War (1975–1990)
East Timor conflict (1975–1999)
Insurgency in Aceh (1976–2005)
Shaba II (1978)
Cambodian–Vietnamese War (1978–1989)
Salvadoran Civil War (1979–1992)
1980s
Falklands War (1982)
Sri Lankan Civil War (1983–2009)
United States invasion of Grenada (1983)
Armed resistance in Chile (1973–1990)
Bougainville Civil War (1988–1998)
First Liberian Civil War (1989–1997)
United States invasion of Panama (1989–1990)
1990s
Gulf War (1990–1991)
Somali Civil War (1991–present)
Sierra Leone Civil War (1991–2002)
Burundian Civil War (1993–2005)
Cenepa War (1995)
Nepalese Civil War (1996–2006)
First Congo War (1996–1997)
Second Liberian Civil War (1999–2003)
2000s
War in Afghanistan (2001–2021)
War in Darfur (2003–present)
Iraq War (2003–2011)
South Thailand insurgency (2004–present)
Kivu conflict (2004–present)
Insurgency in Paraguay (2005–present)
2006 Lebanon War
Mexican drug war (2006–present)
2010s
Syrian civil war (2011–present)
Infighting in the Gulf Cartel (2011–present)
2013 Lahad Datu standoff
Iraqi Civil War (2014–2017)
Operation Madago Raya
Battle of Marawi (2017)
See also
Adaptive Combat Rifle
List of Colt AR-15 and M16 rifle variants
Colt 9 mm SMG
Comparison of the AK-47 and M16
Daewoo K2, Republic of Korea Armed Forces (South Korea) assault rifle
Heckler & Koch HK416
List of individual weapons of the U.S. armed forces
M203 40 mm grenade launcher
Norinco CQ, M16 clone developed by China
Robinson Arms XCR
Rubber duck (military)
T65 assault rifle, AR-15 variant developed by ROC Army
Winchester LMR
Table of handgun and rifle cartridges
List of assault rifles
References
Further reading
Modern Warfare, Published by Mark Dartford, Marshall Cavendish (London) 1985
Afonso, Aniceto and Gomes, Carlos de Matos, Guerra Colonial (2000).
Bartocci, Christopher R. Black Rifle II The M16 into the 21st Century. Cobourg, Ontario, Canada: Collector Grade Publications Incorporated, 2004.
Hutton, Robert, The .223, Guns & Ammo Annual Edition, 1971.
McNaugher, Thomas L. "Marksmanship, McNamara and the M16 Rifle: Organisations, Analysis and Weapons Acquisition"
Pikula, Sam (Major), The ArmaLite AR-10, 1998
Rose, Alexander. American Rifle: A Biography. 2008; Bantam Dell Publishing.
Stevens, R. Blake and Edward C. Ezell. The Black Rifle M16 Retrospective. Enhanced second printing. Cobourg, Ontario, Canada: Collector Grade Publications Incorporated, 1994.
Urdang, Laurence, Editor in Chief. The Random House Dictionary of the English Language. 1969; Random House/New York.
U.S. Army; Sadowski, Robert A., Editor. The M16A1 Rifle: Operation and Preventive Maintenance Enhanced, hardcover edition 2013; Skyhorse, New York, NY.
External links
Colt's Manufacturing: The M16A4 Rifle
PEO Soldier M16 fact sheet
Combat Training with the M16 Manual
Rifle Marksmanship M16A1, M16A2/3, M16A4 and M4 Carbine (Army Field Manual)
ArmaLite AR-10 derivatives
5.56×45mm NATO assault rifles
Modular firearms
Cold War firearms of the United States
Rifles of the Cold War
Colt rifles
Assault rifles of the United States
United States Marine Corps equipment
Weapons and ammunition introduced in 1964
AR-15 style rifles
Gas-operated firearms |
Marlon Brando

Marlon Brando Jr. (April 3, 1924 – July 1, 2004) was an American actor. Considered one of the most influential actors of the 20th century, he received numerous accolades over a career that spanned six decades, including two Academy Awards, two Golden Globe Awards, and three British Academy Film Awards. Brando was also an activist for many causes, notably the civil rights movement and various Native American movements. Having studied with Stella Adler in the 1940s, he is credited as one of the first actors to bring the Stanislavski system of acting, and the method acting derived from it, to mainstream audiences.
He initially gained acclaim, and his first Academy Award nomination for Best Actor in a Leading Role, for reprising the role of Stanley Kowalski in the 1951 film adaptation of Tennessee Williams's play A Streetcar Named Desire, a role he had originated on Broadway. He received further praise, as well as his first Academy Award and Golden Globe Award, for his performance as Terry Malloy in On the Waterfront (1954), and his portrayal of the rebellious motorcycle-gang leader Johnny Strabler in The Wild One (1953) became a lasting image in popular culture. Brando received Academy Award nominations for playing Emiliano Zapata in Viva Zapata! (1952); Mark Antony in Joseph L. Mankiewicz's 1953 film adaptation of Shakespeare's Julius Caesar; and Air Force Major Lloyd Gruver in Sayonara (1957), an adaptation of James A. Michener's 1954 novel.
The 1960s saw Brando's career take a commercial and critical downturn. He directed and starred in the cult western One-Eyed Jacks, a critical and commercial flop, after which he delivered a series of notable box-office failures, beginning with Mutiny on the Bounty (1962). After ten years of underachieving, he agreed to do a screen test as Vito Corleone in Francis Ford Coppola's The Godfather (1972). He got the part and subsequently won his second Academy Award and Golden Globe Award in a performance critics consider among his greatest. He declined the Academy Award due to alleged mistreatment and misportrayal of Native Americans by Hollywood. The Godfather was one of the most commercially successful films of all time, and alongside his Oscar-nominated performance in Last Tango in Paris (1972), Brando reestablished himself in the ranks of top box-office stars.
After a hiatus in the early 1970s, Brando was generally content to work as a highly paid character actor in supporting roles, such as Jor-El in Superman (1978), Colonel Kurtz in Apocalypse Now (1979), and Adam Steiffel in The Formula (1980), before taking a nine-year break from film. According to the Guinness Book of World Records, Brando was paid a record $3.7 million and 11.75% of the gross profits for 13 days' work on Superman.
Brando was ranked by the American Film Institute as the fourth-greatest movie star among male movie stars whose screen debuts occurred in or before 1950. He was one of only six actors named in 1999 by Time magazine in its list of the 100 Most Important People of the Century. In this list, Time also designated Brando as the "Actor of the Century".
Early life and education
Brando was born in Omaha, Nebraska, on April 3, 1924, to Marlon Brando (1895–1965), a pesticide and chemical feed manufacturer, and Dorothy Julia Pennebaker (1897–1954). Brando had two elder sisters, named Jocelyn (1919–2005) and Frances (1922–1994). His ancestry was mostly German, Dutch, English, and Irish. His patrilineal immigrant ancestor, Johann Wilhelm Brandau, arrived in New York City in the early 1700s from the Palatinate in Germany. He is also a descendant of Louis DuBois, a French Huguenot, who arrived in New York around 1660. His maternal great-grandfather, Myles Joseph Gahan, was an Irish immigrant who served as a medic in the American Civil War. In 1995, he gave an interview in Ireland in which he said, "I have never been so happy in my life. When I got off the plane I had this rush of emotion. I have never felt at home in a place as I do here. I am seriously contemplating Irish citizenship." Brando was raised a Christian Scientist.
His mother, known as Dodie, was unconventional for her time; she smoked, wore pants, and drove cars. An actress herself and a theater administrator, she helped Henry Fonda begin his acting career. However, she was an alcoholic and often had to be brought home from bars in Chicago by her husband. In his autobiography, Songs My Mother Taught Me, Brando expressed sadness when writing about his mother: "The anguish that her drinking produced was that she preferred getting drunk to caring for us." Dodie and Brando's father eventually joined Alcoholics Anonymous. Brando harbored far more enmity for his father, stating, "I was his namesake, but nothing I did ever pleased or even interested him. He enjoyed telling me I couldn't do anything right. He had a habit of telling me I would never amount to anything." When he was four, Brando was sexually abused by his teenage governess. He became attached to her, and her departure distressed him for the rest of his life.

Around 1930, Brando's parents moved to Evanston, Illinois, when his father's work took him to Chicago, but separated in 1935 when Brando was 11 years old. His mother took the three children to Santa Ana, California, where they lived with her mother. Brando's parents reconciled by 1937, and by the next year left Evanston and moved together to a farm in Libertyville, Illinois, a small town north of Chicago. Between 1939 and 1941, he worked as an usher at the town's only movie theater, The Liberty.
Brando, whose childhood nickname was "Bud", was a mimic from his youth. He developed an ability to absorb the mannerisms of children he played with and display them dramatically while staying in character. He was introduced to neighborhood boy Wally Cox and the two were closest friends until Cox's death in 1973. In the 2007 TCM biopic Brando: The Documentary, childhood friend George Englund recalls Brando's earliest acting as imitating the cows and horses on the family farm as a way to distract his mother from drinking. His sister Jocelyn was the first to pursue an acting career, going to study at the American Academy of Dramatic Arts in New York City. She appeared on Broadway, then films and television. Brando's sister Frances left college in California to study art in New York. Brando had been held back a year in school and was later expelled from Libertyville High School for riding his motorcycle through the corridors.
He was sent to Shattuck Military Academy in Minnesota, where his father had studied before him. Brando excelled at theater and did well in the school. In his final year (1943), he was put on probation for being insubordinate to a visiting army colonel during maneuvers. He was confined to his room, but sneaked into town and was caught. The faculty voted to expel him, though he was supported by the students, who thought expulsion was too harsh. He was invited back for the following year, but decided instead to drop out of high school. Brando worked as a ditch-digger as a summer job arranged by his father. He tried to enlist in the Army, but his induction physical revealed that a football injury he had sustained at Shattuck had left him with a trick knee. He was classified 4-F and not inducted.
New York and acting
Brando decided to follow his sisters to New York, studying at the American Theatre Wing Professional School, part of the Dramatic Workshop of the New School, with influential German director Erwin Piscator. In a 1988 documentary, Marlon Brando: The Wild One, Brando's sister Jocelyn remembered, "He was in a school play and enjoyed it ... So he decided he would go to New York and study acting because that was the only thing he had enjoyed. That was when he was 18." In the A&E Biography episode on Brando, George Englund said Brando fell into acting in New York because "he was accepted there. He wasn't criticized. It was the first time in his life that he heard good things about himself." He spent his first few months in New York sleeping on friends' couches. For a time he lived with Roy Somlyo, who later became a four-time Emmy-winning Broadway producer.
Brando was an avid student and proponent of Stella Adler, from whom he learned the techniques of the Stanislavski system. This technique encouraged the actor to explore both internal and external aspects to fully realize the character being portrayed. Brando's remarkable insight and sense of realism were evident early on. Adler used to recount that when teaching Brando, she had instructed the class to act like chickens, and added that a nuclear bomb was about to fall on them. Most of the class clucked and ran around wildly, but Brando sat calmly and pretended to lay an egg. Asked by Adler why he had chosen to react this way, he said, "I'm a chicken—what do I know about bombs?" Despite being commonly regarded as a method actor, Brando disagreed. He claimed to have abhorred Lee Strasberg's teachings:
Brando was the first to bring a natural approach to acting on film. According to Dustin Hoffman in his online Masterclass, Brando would often talk to cameramen and fellow actors about their weekend even after the director called action; only once he felt he could deliver the dialogue as naturally as that conversation would he begin it. In the 2015 documentary Listen to Me Marlon, Brando said that, before him, actors were like breakfast cereals, meaning they were predictable. Critics would later say this was Brando being difficult, but actors who worked opposite him said it was simply part of his technique.
Career
Early career: 1944–1951
Brando used his Stanislavski System skills for his first summer stock roles in Sayville, New York, on Long Island. Brando established a pattern of erratic, insubordinate behavior in the few shows he had been in. His behavior had him kicked out of the cast of the New School's production in Sayville, but he was soon afterwards discovered in a locally produced play there. Then, in 1944, he made it to Broadway in the bittersweet drama I Remember Mama, playing the son of Mady Christians. The Lunts wanted Brando to play the role of Alfred Lunt's son in O Mistress Mine, and Lunt even coached him for his audition, but Brando's reading during the audition was so desultory that they couldn't hire him. New York Drama Critics voted him "Most Promising Young Actor" for his role as an anguished veteran in Truckline Café, although the play was a commercial failure. In 1946, he appeared on Broadway as the young hero in the political drama A Flag is Born, refusing to accept wages above the Actors' Equity rate. In that same year, Brando played the role of Marchbanks alongside Katharine Cornell in her production's revival of Candida, one of her signature roles. Cornell also cast him as the Messenger in her production of Jean Anouilh's Antigone that same year. He was also offered the opportunity to portray one of the principal characters in the Broadway premiere of Eugene O'Neill's The Iceman Cometh, but turned the part down after falling asleep while trying to read the massive script and pronouncing the play "ineptly written and poorly constructed".
In 1945, Brando's agent recommended he take a co-starring role in The Eagle Has Two Heads with Tallulah Bankhead, produced by Jack Wilson. Bankhead had turned down the role of Blanche Dubois in A Streetcar Named Desire, which Williams had written for her, to tour the play for the 1946–1947 season. Bankhead recognized Brando's potential, despite her disdain (which most Broadway veterans shared) for method acting, and agreed to hire him even though he auditioned poorly. The two clashed greatly during the pre-Broadway tour, with Bankhead reminding Brando of his mother, being her age and also having a drinking problem. Wilson was largely tolerant of Brando's behavior, but he reached his limit when Brando mumbled through a dress rehearsal shortly before the November 28, 1946, opening. "I don't care what your grandmother did," Wilson exclaimed, "and that Method stuff, I want to know what you're going to do!" Brando in turn raised his voice, and acted with great power and passion. "It was marvelous," a cast member recalled. "Everybody hugged him and kissed him. He came ambling offstage and said to me, 'They don't think you can act unless you can yell.'"
Critics were not as kind, however. A review of Brando's performance in the opening assessed that Brando was "still building his character, but at present fails to impress." One Boston critic remarked of Brando's prolonged death scene, "Brando looked like a car in midtown Manhattan searching for a parking space." He received better reviews at subsequent tour stops, but what his colleagues recalled was only occasional indications of the talent he would later demonstrate. "There were a few times when he was really magnificent," Bankhead admitted to an interviewer in 1962. "He was a great young actor when he wanted to be, but most of the time I couldn't even hear him on the stage."
Brando displayed his apathy for the production by demonstrating some shocking onstage manners. He "tried everything in the world to ruin it for her," Bankhead's stage manager claimed. "He nearly drove her crazy: scratching his crotch, picking his nose, doing anything." After several weeks on the road, they reached Boston, by which time Bankhead was ready to dismiss him. This proved to be one of the greatest blessings of his career, as it freed him up to play the role of Stanley Kowalski in Tennessee Williams's 1947 play A Streetcar Named Desire, directed by Elia Kazan. Bankhead had recommended him to Williams for the role of Stanley, thinking he was perfect for the part.
Pierpont writes that John Garfield was first choice for the role, but "made impossible demands." It was Kazan's decision to fall back on the far less experienced (and technically too young for the role) Brando. In a letter dated August 29, 1947, Williams confided to his agent Audrey Wood: "It had not occurred to me before what an excellent value would come through casting a very young actor in this part. It humanizes the character of Stanley in that it becomes the brutality and callousness of youth rather than a vicious old man ... A new value came out of Brando's reading which was by far the best reading I have ever heard." Brando based his portrayal of Kowalski on the boxer Rocky Graziano, whom he had studied at a local gymnasium. Graziano did not know who Brando was, but attended the production with tickets provided by the young man. He said, "The curtain went up and on the stage is that son of a bitch from the gym, and he's playing me."
In 1947, Brando performed a screen test for an early Warner Brothers script for the novel Rebel Without a Cause (1944), which bore no relation to the film eventually produced in 1955. The screen test is included as an extra in the 2006 DVD release of A Streetcar Named Desire.
Brando's first screen role was a bitter paraplegic veteran in The Men (1950). He spent a month in bed at the Birmingham Army Hospital in Van Nuys to prepare for the role. The New York Times reviewer Bosley Crowther wrote that Brando as Ken "is so vividly real, dynamic and sensitive that his illusion is complete" and noted, "Out of stiff and frozen silences he can lash into a passionate rage with the tearful and flailing frenzy of a taut cable suddenly cut."
By Brando's own account, it may have been because of this film that his draft status was changed from 4-F to 1-A. He had had surgery on his trick knee, and it was no longer debilitating enough to exempt him from the draft. When Brando reported to the induction center, he answered a questionnaire by saying his race was "human", his color was "Seasonal-oyster white to beige", and he told an Army doctor that he was psychoneurotic. When the draft board referred him to a psychiatrist, Brando explained that he had been expelled from military school and had severe problems with authority. Coincidentally, the psychiatrist knew a doctor friend of Brando's. Brando avoided military service during the Korean War.
Early in his career, Brando began using cue cards instead of memorizing his lines. Despite the objections of several of the film directors he worked with, Brando felt that this helped bring realism and spontaneity to his performances. He felt otherwise he would appear to be reciting a writer's speech. In the TV documentary The Making of Superman: The Movie, Brando explained:
However, some thought Brando used the cards out of laziness or an inability to memorize his lines. Once on The Godfather set, Brando was asked why he wanted his lines printed out. He responded, "Because I can read them that way."
Rise to fame: 1951–1954
Brando brought his performance as Stanley Kowalski to the screen in Tennessee Williams's A Streetcar Named Desire (1951). The role is regarded as one of Brando's greatest. The reception of Brando's performance was so positive that Brando quickly became a male sex symbol in Hollywood. The role earned him his first Academy Award nomination in the Best Actor category.
He was also nominated the next year for Viva Zapata! (1952), a fictionalized account of the life of Mexican revolutionary Emiliano Zapata. The film recounted Zapata's peasant upbringing, his rise to power in the early 20th century, and his death. The film was directed by Elia Kazan and co-starred Anthony Quinn. In the biopic Marlon Brando: The Wild One, Sam Shaw says, "Secretly, before the picture started, he went to Mexico to the very town where Zapata lived and was born in and it was there that he studied the speech patterns of people, their behavior, movement." Most critics focused on the actor rather than the film, with Time and Newsweek publishing rave reviews.
Years later, in his autobiography, Brando remarked: "Tony Quinn, whom I admired professionally and liked personally, played my brother, but he was extremely cold to me while we shot that picture. During our scenes together, I sensed a bitterness toward me, and if I suggested a drink after work, he either turned me down or else was sullen and said little. Only years later did I learn why." Brando explained that, to create on-screen tension between the two, "Gadg" (Kazan) had told Quinn—who had taken over the role of Stanley Kowalski on Broadway after Brando had finished—that Brando had been unimpressed with his work. After achieving the desired effect, Kazan never told Quinn that he had misled him. It was only many years later, after comparing notes, that Brando and Quinn realized the deception.
Brando's next film, Julius Caesar (1953), received highly favorable reviews. Brando portrayed Mark Antony. While most acknowledged Brando's talent, some critics felt Brando's "mumbling" and other idiosyncrasies betrayed a lack of acting fundamentals and, when his casting was announced, many remained dubious about his prospects for success. In the film, directed by Joseph L. Mankiewicz and co-starring British stage actor John Gielgud, Brando delivered an impressive performance, especially during Antony's noted "Friends, Romans, countrymen ..." speech. Gielgud was so impressed that he offered Brando a full season at the Hammersmith Theatre, an offer he declined. In his biography of the actor, Stefan Kanfer writes, "Marlon's autobiography devotes one line to his work on that film: Among all those British professionals, 'for me to walk onto a movie set and play Mark Anthony was asinine'—yet another example of his persistent self-denigration, and wholly incorrect." Kanfer adds that after a screening of the film, director John Huston commented, "Christ! It was like a furnace door opening—the heat came off the screen. I don't know another actor who could do that." During the filming of Julius Caesar, Brando learned that Elia Kazan had cooperated with congressional investigators, naming a whole string of "subversives" to the House Committee on Un-American Activities (HUAC). By all accounts, Brando was upset by his mentor's decision, but he worked with him again in On the Waterfront. "None of us is perfect," he later wrote in his memoir, "and I think that Gadg has done injury to others, but mostly to himself."
In 1953, Brando also starred in The Wild One, riding his own Triumph Thunderbird 6T motorcycle. Triumph's importers were ambivalent at the exposure, as the subject matter was rowdy motorcycle gangs taking over a small town. The film was criticized for its perceived gratuitous violence at the time, with Time stating, "The effect of the movie is not to throw light on the public problem, but to shoot adrenaline through the moviegoer's veins." Brando allegedly did not see eye to eye with the Hungarian director László Benedek and did not get on with costar Lee Marvin.
To Brando's expressed puzzlement, the movie inspired teen rebellion and made him a role model to the nascent rock-and-roll generation and future stars such as James Dean and Elvis Presley. After the movie's release, the sales of leather jackets and motorcycles skyrocketed. Reflecting on the movie in his autobiography, Brando concluded that it had not aged very well but said:
Later that same year, Brando co-starred with fellow Studio member William Redfield in a summer stock production of George Bernard Shaw's Arms and the Man.
On the Waterfront
In 1954, Brando starred in On the Waterfront, a crime drama film about union violence and corruption among longshoremen. The film was directed by Elia Kazan and written by Budd Schulberg; it also starred Karl Malden, Lee J. Cobb, Rod Steiger and, in her film debut, Eva Marie Saint. When initially offered the role, Brando—still stung by Kazan's testimony to HUAC—demurred and the part of Terry Malloy nearly went to Frank Sinatra. According to biographer Stefan Kanfer, the director believed that Sinatra, who grew up in Hoboken (where the film takes place and was shot), would work as Malloy, but eventually producer Sam Spiegel wooed Brando to the part, signing him for $100,000. "Kazan made no protest because, he subsequently confessed, 'I always preferred Brando to anybody.'"
Brando won the Oscar for his role as Irish-American stevedore Terry Malloy in On the Waterfront. His performance, spurred on by his rapport with Eva Marie Saint and Kazan's direction, was praised as a tour de force. For the scene in which Terry laments his failings, saying "I coulda been a contender," he convinced Kazan that the scripted scene was unrealistic. Schulberg's script had Brando acting the entire scene with his character being held at gunpoint by his brother Charlie, played by Rod Steiger. Brando insisted on gently pushing away the gun, saying that Terry would never believe that his brother would pull the trigger and doubting that he could continue his speech while a gun was trained on him. Kazan let Brando improvise and later expressed deep admiration for Brando's instinctive understanding, saying:
Upon its release, On the Waterfront received glowing reviews from critics and was a commercial success, earning an estimated $4.2 million in rentals at the North American box office in 1954. In his July 29, 1954, review, The New York Times critic A. H. Weiler praised the film, calling it "an uncommonly powerful, exciting, and imaginative use of the screen by gifted professionals." Film critic Roger Ebert lauded the film, stating that Brando and Kazan changed acting in American films forever and added it to his "Great Movies" list. In his autobiography, Brando was typically dismissive of his performance: "On the day Gadg showed me the complete picture, I was so depressed by my performance I got up and left the screening room ... I thought I was a huge failure." After Brando won the Academy Award for Best Actor, the statue was stolen. Much later, it turned up at a London auction house, which contacted the actor and informed him of its whereabouts.
Box office successes and directorial debut: 1954–1959
Following On the Waterfront, Brando remained a top box office draw, but critics increasingly felt his performances were half-hearted, lacking the intensity and commitment found in his earlier work, especially in his work with Kazan. He portrayed Napoleon in the 1954 film Désirée. According to co-star Jean Simmons, Brando's contract forced him to star in the movie. He put little effort into the role, claiming he didn't like the script, and later dismissed the entire movie as "superficial and dismal". Brando was especially contemptuous of director Henry Koster.
Brando and Simmons were paired together again in the film adaptation of the musical Guys and Dolls (1955). Guys and Dolls would be Brando's first and last musical role. Time found the picture "false to the original in its feeling", remarking that Brando "sings in a faraway tenor that sometimes tends to be flat." Appearing in Edward Murrow's Person to Person interview in early 1955, he admitted to having problems with his singing voice, which he called "pretty terrible." In the 1965 documentary Meet Marlon Brando, he revealed that the final product heard in the movie was a result of countless singing takes being cut into one and later joked, "I couldn't hit a note with a baseball bat; some notes I missed by extraordinary margins ... They sewed my words together on one song so tightly that when I mouthed it in front of the camera, I nearly asphyxiated myself". Relations between Brando and costar Frank Sinatra were also frosty, with Stefan Kanfer observing: "The two men were diametrical opposites: Marlon required multiple takes; Frank detested repeating himself." Upon their first meeting Sinatra reportedly scoffed, "Don't give me any of that Actors Studio shit." Brando later quipped, "Frank is the kind of guy, when he dies, he's going to heaven and give God a hard time for making him bald." Sinatra, for his part, called Brando "the world's most overrated actor" and referred to him as "mumbles". The film was commercially though not critically successful, costing $5.5 million to make and grossing $13 million.
Brando played Sakini, a Japanese interpreter for the U.S. Army in postwar Japan, in The Teahouse of the August Moon (1956). Pauline Kael was not particularly impressed by the movie, but noted "Marlon Brando starved himself to play the pixie interpreter Sakini, and he looks as if he's enjoying the stunt—talking with a mad accent, grinning boyishly, bending forward, and doing tricky movements with his legs. He's harmlessly genial (and he is certainly missed when he's offscreen), though the fey, roguish role doesn't allow him to do what he's great at and it's possible that he's less effective in it than a lesser actor might have been." In Sayonara (1957) he appeared as a United States Air Force officer. Newsweek found the film a "dull tale of the meeting of the twain", but it was nevertheless a box-office success. According to Stefan Kanfer's biography of the actor, Brando's manager Jay Kanter negotiated a profitable contract with ten percent of the gross going to Brando, which put him in the millionaire category. The movie was controversial due to openly discussing interracial marriage, but proved a great success, earning 10 Academy Award nominations, with Brando being nominated for Best Actor. The film went on to win four Academy Awards. Teahouse and Sayonara were the first in a string of films Brando would strive to make over the next decade which contained socially relevant messages, and he formed a partnership with Paramount to establish his own production company called Pennebaker, its declared purpose being to develop films that contained "social value that would improve the world." The name was a tribute to his mother, who had died in 1954. By all accounts, Brando was devastated by her death, with biographer Peter Manso telling A&E's Biography, "She was the one who could give him approval like no one else could and, after his mother died, it seems that Marlon stops caring." Brando appointed his father to run Pennebaker.
In the same A&E special, George Englund claims that Brando gave his father the job because "it gave Marlon a chance to take shots at him, to demean and diminish him".
In 1958, Brando appeared in The Young Lions, dyeing his hair blonde and assuming a German accent for the role, which he later admitted was not convincing. The film is based on the novel by Irwin Shaw, and Brando's portrayal of the character Christian Diestl was controversial for its time. He later wrote, "The original script closely followed the book, in which Shaw painted all Germans as evil caricatures, especially Christian, whom he portrayed as a symbol of everything that was bad about Nazism; he was mean, nasty, vicious, a cliché of evil ... I thought the story should demonstrate that there are no inherently 'bad' people in the world, but they can easily be misled." Shaw and Brando even appeared together for a televised interview with CBS correspondent David Schoenbrun and, during a bombastic exchange, Shaw charged that, like most actors, Brando was incapable of playing flat-out villainy; Brando responded by stating "Nobody creates a character but an actor. I play the role; now he exists. He is my creation." The Young Lions also features Brando's only appearance in a film with friend and rival Montgomery Clift (although they shared no scenes together). Brando closed out the decade by appearing in The Fugitive Kind (1960) opposite Anna Magnani. The film was based on another play by Tennessee Williams but was hardly the success A Streetcar Named Desire had been, with the Los Angeles Times labeling Williams's personae "psychologically sick or just plain ugly" and The New Yorker calling it a "cornpone melodrama".
One-Eyed Jacks and Mutiny on the Bounty
In 1961, Brando made his directorial debut in the western One-Eyed Jacks. The picture was originally directed by Stanley Kubrick, but he was fired early in the production. Paramount then made Brando the director. Brando portrays the lead character Rio, and Karl Malden plays his partner "Dad" Longworth. The supporting cast features Katy Jurado, Ben Johnson, and Slim Pickens. Brando's penchant for multiple retakes and character exploration as an actor carried over into his directing, and the film soon went over budget; Paramount expected the film to take three months to complete, but shooting stretched to six and the cost doubled to more than six million dollars. Brando's inexperience as an editor also delayed postproduction, and Paramount eventually took control of the film. Brando later wrote, "Paramount said it didn't like my version of the story; I'd had everyone lie except Karl Malden. The studio cut the movie to pieces and made him a liar, too. By then, I was bored with the whole project and walked away from it." One-Eyed Jacks was poorly reviewed by critics. While the film did solid business, it ran so far over budget that it lost money.
Brando's revulsion toward the film industry reportedly boiled over on the set of his next film, Metro-Goldwyn-Mayer's remake of Mutiny on the Bounty, which was filmed in Tahiti. The actor was accused of deliberately sabotaging nearly every aspect of the production. On June 16, 1962, The Saturday Evening Post ran an article by Bill Davidson with the headline "Six million dollars down the drain: the mutiny of Marlon Brando". Mutiny director Lewis Milestone claimed that the executives "deserve what they get when they give a ham actor, a petulant child, complete control over an expensive picture." Mutiny on the Bounty nearly capsized MGM and, while the project had indeed been hampered by delays other than those caused by Brando's behavior, the accusations would dog the actor for years as studios began to fear Brando's difficult reputation. Critics also began taking note of his fluctuating weight.
Box office decline: 1963–1971
Distracted by his personal life and becoming disillusioned with his career, Brando began to view acting as a means to a financial end. Critics protested when he started accepting roles in films many perceived as being beneath his talent, or criticized him for failing to live up to the better roles. Having previously signed only short-term deals with film studios, in 1961 Brando uncharacteristically signed a five-picture deal with Universal Studios that would haunt him for the rest of the decade. The Ugly American (1963) was the first of these films. Based on the 1958 novel of the same title that Pennebaker had optioned, the film, which featured Brando's sister Jocelyn, was reviewed fairly positively but died at the box office. Brando was nominated for a Golden Globe for his performance. All of Brando's other Universal films during this period, including Bedtime Story (1964), The Appaloosa (1966), A Countess from Hong Kong (1967) and The Night of the Following Day (1969), were also critical and commercial flops. Countess in particular was a disappointment for Brando, who had looked forward to working with one of his heroes, director Charlie Chaplin. The experience turned out to be an unhappy one; Brando was horrified at Chaplin's didactic style of direction and his authoritarian approach. Brando had also appeared in the spy thriller Morituri in 1965; that, too, failed to attract an audience.
Brando acknowledged his professional decline, writing later, "Some of the films I made during the sixties were successful; some weren't. Some, like The Night of the Following Day, I made only for the money; others, like Candy, I did because a friend asked me to and I didn't want to turn him down ... In some ways I think of my middle age as the Fuck You Years." Candy, a 1968 sex farce directed by Christian Marquand and based on the 1958 novel by Terry Southern, was especially appalling to many; the film satirizes pornographic stories through the adventures of its naive heroine, Candy, played by Ewa Aulin, and is generally regarded as the nadir of Brando's career. The Washington Post observed: "Brando's self-indulgence over a dozen years is costing him and his public his talents." In the March 1966 issue of The Atlantic, Pauline Kael wrote that in his rebellious days, Brando "was antisocial because he knew society was crap; he was a hero to youth because he was strong enough not to take the crap", but now Brando and others like him had become "buffoons, shamelessly, pathetically mocking their public reputations." In an earlier review of The Appaloosa in 1966, Kael wrote that the actor was "trapped in another dog of a movie ... Not for the first time, Mr. Brando gives us a heavy-lidded, adenoidally openmouthed caricature of the inarticulate, stalwart loner." Although he feigned indifference, Brando was hurt by the critical mauling, admitting in the 2015 film Listen to Me Marlon, "They can hit you every day and you have no way of fighting back. I was very convincing in my pose of indifference, but I was very sensitive and it hurt a lot."
Brando portrayed a repressed gay army officer in Reflections in a Golden Eye, directed by John Huston and co-starring Elizabeth Taylor. The role turned out to be one of his most acclaimed in years, with Stanley Crouch marveling, "Brando's main achievement was to portray the taciturn but stoic gloom of those pulverized by circumstances." The film overall received mixed reviews. Another notable film was The Chase (1966), which paired the actor with Arthur Penn, Robert Duvall, Jane Fonda and Robert Redford. The film deals with themes of racism, sexual revolution, small-town corruption, and vigilantism. The film was received mostly positively.
Brando cited Burn! (1969) as his personal favorite of the films he had made, writing in his autobiography, "I think I did some of the best acting I've ever done in that picture, but few people came to see it." Brando dedicated a full chapter to the film in his memoir, stating that the director, Gillo Pontecorvo, was the best director he had ever worked with next to Kazan and Bernardo Bertolucci. Brando also detailed his clashes with Pontecorvo on the set and how "we nearly killed each other." Loosely based on events in the history of Guadeloupe, the film got a hostile reception from critics. In 1971, Michael Winner directed him in the British horror film The Nightcomers with Stephanie Beacham, Thora Hird, Harry Andrews and Anna Palk. It is a prequel to The Turn of the Screw, which later became the 1961 film The Innocents. Brando's performance earned him a nomination for a Best Actor BAFTA, but the film bombed at the box office.
The Godfather and Last Tango in Paris
During the 1970s, Brando was considered "unbankable". Critics were becoming increasingly dismissive of his work and he had not appeared in a box office hit since The Young Lions in 1958, the last year he had ranked as one of the Top Ten Box Office Stars and the year of his last Academy Award nomination, for Sayonara. Brando's performance as Vito Corleone, the "Don," in The Godfather (1972), Francis Ford Coppola's adaptation of Mario Puzo's 1969 bestselling novel of the same name, was a career turning point, putting him back in the Top Ten and winning him his second Best Actor Oscar.
Paramount production chief Robert Evans, who had given Puzo an advance to write The Godfather so that Paramount would own the film rights, hired Coppola after many major directors had turned the film down. Evans wanted an Italian-American director who could provide the film with cultural authenticity. Coppola also came cheap. Evans was conscious of the fact that Paramount's last Mafia film, The Brotherhood (1968) had been a box office bomb, and he believed it was partly due to the fact that the director, Martin Ritt, and the star, Kirk Douglas, were Jews and the film lacked an authentic Italian flavor. The studio originally intended the film to be a low-budget production set in contemporary times without any major actors, but the phenomenal success of the novel gave Evans the clout to turn The Godfather into a prestige picture.
Coppola had developed a list of actors for all the roles, and his list of potential Dons included the Oscar-winning Italian-American Ernest Borgnine, the Italian-American Frank de Kova (best known for playing Chief Wild Eagle on the TV sitcom F-Troop), John Marley (a Best Supporting Actor Oscar nominee for Paramount's 1970 hit film Love Story, who was cast as the film producer Jack Woltz in the picture), the Italian-American Richard Conte (who was cast as Don Corleone's deadly rival Don Emilio Barzini), and Italian film producer Carlo Ponti. Coppola admitted in a 1975 interview, "We finally figured we had to lure the best actor in the world. It was that simple. That boiled down to Laurence Olivier or Marlon Brando, who are the greatest actors in the world." The holographic copy of Coppola's cast list shows Brando's name underlined.
Evans told Coppola that he had been thinking of Brando for the part two years earlier, and Puzo had imagined Brando in the part when he wrote the novel and had actually written to him about it, so Coppola and Evans narrowed the choice down to Brando. (Ironically, Olivier would compete with Brando for the Best Actor Oscar for his part in Sleuth; Olivier bested Brando at the 1972 New York Film Critics Circle Awards.) Albert S. Ruddy, whom Paramount assigned to produce the film, agreed with the choice of Brando. However, Paramount studio executives were opposed to casting Brando due to his reputation for difficulty and his long string of box office flops. Brando also had One-Eyed Jacks working against him, a troubled production that lost money for Paramount when it was released in 1961. Paramount Pictures President Stanley Jaffe told an exasperated Coppola, "As long as I'm president of this studio, Marlon Brando will not be in this picture, and I will no longer allow you to discuss it."
Jaffe eventually set three conditions for the casting of Brando: that he would have to take a fee far below what he typically received; that he would have to agree to accept financial responsibility for any production delays caused by his behavior; and that he would have to submit to a screen test. Coppola convinced Brando to do a videotaped "make-up" test, in which Brando did his own makeup (he used cotton balls to simulate the character's puffed cheeks). Coppola had feared Brando might be too young to play the Don, but was electrified by the actor's characterization as the head of a crime family. Even so, he had to fight the studio in order to cast the temperamental actor. Brando had doubts himself, stating in his autobiography, "I had never played an Italian before, and I didn't think I could do it successfully." Eventually, Charles Bluhdorn, the president of Paramount parent Gulf+Western, was won over to letting Brando have the role; when he saw the screen test, he asked in amazement, "What are we watching? Who is this old guinea?" Brando was signed for a low fee of $50,000, but in his contract, he was given a percentage of the gross on a sliding scale: 1% of the gross for each $10 million over a $10 million threshold, up to 5% if the picture exceeded $60 million. According to Evans, Brando sold back his points in the picture for $100,000, as he was in dire need of funds. "That $100,000 cost him $11 million," Evans claimed.
In a 1994 interview that can be found on the Academy of Achievement website, Coppola insisted, "The Godfather was a very unappreciated movie when we were making it. They were very unhappy with it. They didn't like the cast. They didn't like the way I was shooting it. I was always on the verge of getting fired." When word of this reached Brando, he threatened to walk off the picture, writing in his memoir, "I strongly believe that directors are entitled to independence and freedom to realize their vision, though Francis left the characterizations in our hands and we had to figure out what to do." In a 2010 television interview with Larry King, Al Pacino also talked about how Brando's support helped him keep the role of Michael Corleone in the movie—despite the fact Coppola wanted to fire him. (Pacino also explained in the Larry King interview that while Coppola expressed disappointment in Pacino's early scenes he did not specifically threaten to fire him; Coppola himself was feeling pressure from studio executives who were puzzled by Pacino's performance. In the same interview, Pacino credits Coppola with getting him the part.)

Brando was on his best behavior during filming, buoyed by a cast that included Pacino, Robert Duvall, James Caan, and Diane Keaton. In the Vanity Fair article "The Godfather Wars", Mark Seal writes, "With the actors, as in the movie, Brando served as the head of the family. He broke the ice by toasting the group with a glass of wine. 'When we were young, Brando was like the godfather of actors,' says Robert Duvall. 'I used to meet with Dustin Hoffman in Cromwell's Drugstore, and if we mentioned his name once, we mentioned it 25 times in a day.' Caan adds, 'The first day we met Brando everybody was in awe.'"
Brando's performance was glowingly reviewed by critics. "I thought it would be interesting to play a gangster, maybe for the first time in the movies, who wasn't like those bad guys Edward G. Robinson played, but who is kind of a hero, a man to be respected," Brando recalled in his autobiography. "Also, because he had so much power and unquestioned authority, I thought it would be an interesting contrast to play him as a gentle man, unlike Al Capone, who beat up people with baseball bats." Duvall later marveled to A&E's Biography, "He minimized the sense of beginning. In other words he, like, deemphasized the word action. He would go in front of that camera just like he was before. Cut! It was all the same. There was really no beginning. I learned a lot from watching that." Brando won the Academy Award for Best Actor for his performance, but he declined it, becoming the second actor to refuse a Best Actor award (after George C. Scott for Patton). He boycotted the award ceremony, instead sending indigenous American rights activist Sacheen Littlefeather, who appeared in full Apache attire, to state Brando's reasons, which were based on his objection to the depiction of indigenous Americans by Hollywood and television.
The actor followed The Godfather with Bernardo Bertolucci's 1972 film Last Tango in Paris, playing opposite Maria Schneider, but Brando's highly noted performance threatened to be overshadowed by an uproar over the sexual content of the film. Brando portrays a recent American widower named Paul, who begins an anonymous sexual relationship with a young, betrothed Parisian woman named Jeanne. As with previous films, Brando refused to memorize his lines for many scenes; instead, he wrote his lines on cue cards and posted them around the set for easy reference, leaving Bertolucci with the problem of keeping them out of the picture frame. The film features several intense, graphic scenes involving Brando, including one in which Paul anally rapes Jeanne using butter as a lubricant, a scene that Schneider later alleged was filmed without her consent, and Paul's angry, emotionally charged final confrontation with the corpse of his dead wife. The controversial movie was a hit, however, and Brando made the list of Top Ten Box Office Stars for the last time. His gross participation deal earned him $3 million. The voting membership of the Academy of Motion Picture Arts and Sciences again nominated Brando for Best Actor, his seventh nomination. Although Brando won the 1973 New York Film Critics Circle Award, he did not attend the ceremony or send a representative to accept it.
Pauline Kael, in The New Yorker review, wrote "The movie breakthrough has finally come. Bertolucci and Brando have altered the face of an art form." Brando confessed in his autobiography, "To this day I can't say what Last Tango in Paris was about", and added the film "required me to do a lot of emotional arm wrestling with myself, and when it was finished, I decided that I wasn't ever again going to destroy myself emotionally to make a movie".
In 1973, Brando was devastated by the death of his childhood best friend Wally Cox. Brando slept in Cox's pajamas and wrenched his ashes from Cox's widow. She was going to sue for their return, but finally said, "I think Marlon needs the ashes more than I do."
Late 1970s
In 1976, Brando appeared in The Missouri Breaks with his friend Jack Nicholson. The movie also reunited the actor with director Arthur Penn. As biographer Stefan Kanfer describes, Penn had difficulty controlling Brando, who seemed intent on going over the top with his border-ruffian-turned-contract-killer Robert E. Lee Clayton: "Marlon made him a cross-dressing psychopath. Absent for the first hour of the movie, Clayton enters on horseback, dangling upside down, caparisoned in white buckskin, Littlefeather-style. He speaks in an Irish accent for no apparent reason. Over the next hour, also for no apparent reason, Clayton assumes the intonation of a British upper-class twit and an elderly frontier woman, complete with a granny dress and matching bonnet. Penn, who believed in letting actors do their thing, indulged Marlon all the way." Critics were unkind, with The Observer calling Brando's performance "one of the most extravagant displays of grandedamerie since Sarah Bernhardt", while The Sun complained, "Marlon Brando at fifty-two has the sloppy belly of a sixty-two-year-old, the white hair of a seventy-two-year-old, and the lack of discipline of a precocious twelve-year-old." However, Kanfer noted: "Even though his late work was met with disapproval, a re-examination shows that often, in the middle of the most pedestrian scene, there would be a sudden, luminous occurrence, a flash of the old Marlon that showed how capable he remained."
In 1978, Brando narrated the English version of Raoni, a French-Belgian documentary film directed by Jean-Pierre Dutilleux and Luiz Carlos Saldanha that focused on the life of Raoni Metuktire and issues surrounding the survival of the indigenous Indian tribes of north central Brazil. Brando portrayed Superman's father Jor-El in the 1978 film Superman. He agreed to the role only on assurance that he would be paid a large sum for what amounted to a small part, that he would not have to read the script beforehand, and that his lines would be displayed somewhere off-camera. It was revealed in a documentary contained in the 2001 DVD release of Superman that he was paid $3.7 million for two weeks of work. Brando also filmed scenes for the movie's sequel, Superman II, but after producers refused to pay him the same percentage he received for the first movie, he denied them permission to use the footage. "I asked for my usual percentage," he recollected in his memoir, "but they refused, and so did I." However, after Brando's death, the footage was reincorporated into the 2006 recut of the film, Superman II: The Richard Donner Cut and in the 2006 "loose sequel" Superman Returns, in which both used and unused archive footage of him as Jor-El from the first two Superman films was remastered for a scene in the Fortress of Solitude, and Brando's voice-overs were used throughout the film. In 1979, he made a rare television appearance in the miniseries Roots: The Next Generations, portraying George Lincoln Rockwell; he won a Primetime Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie for his performance.
Brando starred as Colonel Walter E. Kurtz in Francis Ford Coppola's Vietnam epic Apocalypse Now (1979). He plays a highly decorated U.S. Army Special Forces officer who goes renegade, running his own operation in Cambodia, feared by the U.S. military as much as by the Vietnamese. Brando was paid $1 million a week for three weeks' work. The film drew attention for its lengthy and troubled production, as Eleanor Coppola's documentary Hearts of Darkness: A Filmmaker's Apocalypse documents: Brando showed up on the set overweight, Martin Sheen suffered a heart attack, and severe weather destroyed several expensive sets. The film's release was also postponed several times while Coppola edited millions of feet of footage. In the documentary, Coppola talks about how astonished he was when an overweight Brando turned up for his scenes and, feeling desperate, decided to portray Kurtz, who appears emaciated in the original story, as a man who had indulged every aspect of himself. Coppola: "He was already heavy when I hired him and he promised me that he was going to get in shape and I imagined that I would, if he were heavy, I could use that. But he was so fat, he was very, very shy about it ... He was very, very adamant about how he didn't want to portray himself that way." Brando admitted to Coppola that he had not read the book, Heart of Darkness, as the director had asked him to, and the pair spent days exploring the story and the character of Kurtz, much to the actor's financial benefit, according to producer Fred Roos: "The clock was ticking on this deal he had and we had to finish him within three weeks or we'd go into this very expensive overage ... And Francis and Marlon would be talking about the character and whole days would go by. And this is at Marlon's urging—and yet he's getting paid for it."
Upon release, Apocalypse Now earned critical acclaim, as did Brando's performance. His whispering of Kurtz's final words, "The horror! The horror!", has become particularly famous. Roger Ebert, writing in the Chicago Sun-Times, defended the movie's controversial denouement, opining that the ending, "with Brando's fuzzy, brooding monologues and the final violence, feels much more satisfactory than any conventional ending possibly could." Brando received a fee of $2 million plus 10% of the gross theatrical rental and 10% of the TV sale rights, earning him around $9 million.
Later work
After appearing as oil tycoon Adam Steiffel in 1980's The Formula, which was poorly received critically, Brando announced his retirement from acting. However, he returned in 1989 in A Dry White Season, based on André Brink's 1979 anti-apartheid novel. Brando agreed to do the film for free, but fell out with director Euzhan Palcy over how the film was edited; he even made a rare television appearance in an interview with Connie Chung to voice his disapproval. In his memoir, he maintained that Palcy "had cut the picture so poorly, I thought, that the inherent drama of this conflict was vague at best." Brando received praise for his performance, earning an Academy Award nomination for Best Supporting Actor and winning the Best Actor Award at the Tokyo Film Festival.
Brando scored enthusiastic reviews for his caricature of his Vito Corleone role as Carmine Sabatini in 1990's The Freshman. In his original review, Roger Ebert wrote, "There have been a lot of movies where stars have repeated the triumphs of their parts—but has any star ever done it more triumphantly than Marlon Brando does in The Freshman?" Variety also praised Brando's performance as Sabatini and noted, "Marlon Brando's sublime comedy performance elevates The Freshman from screwball comedy to a quirky niche in film history." Brando also starred alongside his friend Johnny Depp in the box office hit Don Juan DeMarco (1995) and in Depp's controversial The Brave (1997), which was never released in the United States.
Later performances, such as his appearance in Christopher Columbus: The Discovery (1992), for which he was nominated for a Golden Raspberry Award as "Worst Supporting Actor", The Island of Dr. Moreau (1996), for which he won the "Worst Supporting Actor" Raspberry, and his barely recognizable appearance in Free Money (1998), resulted in some of the worst reviews of his career. The Island of Dr. Moreau screenwriter Ron Hutchinson would later say in his memoir, Clinging to the Iceberg: Writing for a Living on the Stage and in Hollywood (2017), that Brando sabotaged the film's production by feuding and refusing to cooperate with his colleagues and the film crew.
Unlike its immediate predecessors, Brando's last completed film, The Score (2001), was received generally positively. In the film, in which he portrays a fence, he starred with Robert De Niro.
After Brando's death, the novel Fan-Tan was released. Brando conceived the novel with director Donald Cammell in 1979, but it was not released until 2005.
Final years and death
Brando's notoriety, his troubled family life, and his obesity attracted more attention than his late acting career. He gained a great deal of weight in the 1970s, and by the early-to-mid-1990s he was severely overweight and suffered from Type 2 diabetes. He had a history of weight fluctuation throughout his career that, by and large, he attributed to years of stress-related overeating followed by compensatory dieting. He also earned a reputation for being difficult on the set, often unwilling or unable to memorize his lines and less interested in taking direction than in confronting the film director with odd demands. He also dabbled in invention in his last years: between June 2002 and November 2004, he had several patents issued in his name by the U.S. Patent and Trademark Office, all involving a method of tensioning drumheads.
In 2004, Brando recorded voice tracks for the character Mrs. Sour in the unreleased animated film Big Bug Man. This was his last role and his only role as a female character.
A longtime close friend of entertainer Michael Jackson, Brando paid regular visits to his Neverland Ranch, resting there for weeks at a time. Brando also participated in the two-day concerts celebrating Jackson's 30th anniversary as a solo performer in 2001, and appeared in Jackson's 13-minute music video "You Rock My World" the same year.
The actor's son, Miko, was Jackson's bodyguard and assistant for several years, and was a friend of the singer. "The last time my father left his house to go anywhere, to spend any kind of time, it was with Michael Jackson", Miko stated. "He loved it ... He had a 24-hour chef, 24-hour security, 24-hour help, 24-hour kitchen, 24-hour maid service. Just carte blanche." "Michael was instrumental helping my father through the last few years of his life. For that I will always be indebted to him. Dad had a hard time breathing in his final days, and he was on oxygen much of the time. He loved the outdoors, so Michael would invite him over to Neverland. Dad could name all the trees there, and the flowers, but being on oxygen it was hard for him to get around and see them all, it's such a big place. So Michael got Dad a golf cart with a portable oxygen tank so he could go around and enjoy Neverland. They'd just drive around—Michael Jackson, Marlon Brando, with an oxygen tank in a golf cart." In April 2001, Brando was hospitalized with pneumonia.
In 2004, Brando signed with Tunisian film director Ridha Behi and began preproduction on a project to be titled Brando and Brando. Up to a week before his death, he was working on the script in anticipation of a July/August 2004 start date. Production was suspended in July 2004 following Brando's death, at which time Behi stated that he would continue the film as an homage to Brando, with a new title of Citizen Brando.
On July 1, 2004, Brando died of respiratory failure from pulmonary fibrosis with congestive heart failure at the UCLA Medical Center. The cause of death was initially withheld, with his lawyer citing privacy concerns. He also suffered from diabetes and liver cancer. Shortly before his death and despite needing an oxygen mask to breathe, he recorded his voice to appear in The Godfather: The Game, once again as Don Vito Corleone. However, Brando recorded only one line due to his health, and an impersonator was hired to finish his lines. His single recorded line was included within the final game as a tribute to the actor. Some additional lines from his character were directly lifted from the film. Karl Malden—Brando's co-star in three films, A Streetcar Named Desire, On the Waterfront, and One-Eyed Jacks—spoke in a documentary accompanying the DVD of A Streetcar Named Desire about a phone call he received from Brando shortly before Brando's death. A distressed Brando told Malden he kept falling over. Malden wanted to come over, but Brando put him off, telling him there was no point. Three weeks later, Brando was dead. Shortly before his death, he had apparently refused permission for tubes carrying oxygen to be inserted into his lungs, which, he was told, was the only way to prolong his life.
Brando was cremated, and his ashes were put in with those of his good friend Wally Cox and another longtime friend, Sam Gilman. They were then scattered partly in Tahiti and partly in Death Valley. In 2007, a 165-minute biopic of Brando for Turner Classic Movies, Brando: The Documentary, produced by Mike Medavoy (the executor of Brando's will), was released.
Personal life
Brando was known for his tumultuous personal life and his large number of partners and children. He was the father to at least 11 children, three of whom were adopted. In 1976, he told a French journalist, "Homosexuality is so much in fashion, it no longer makes news. Like a large number of men, I, too, have had homosexual experiences, and I am not ashamed. I have never paid much attention to what people think about me. But if there is someone who is convinced that Jack Nicholson and I are lovers, may they continue to do so. I find it amusing."
In Songs My Mother Taught Me, Brando wrote that he met Marilyn Monroe at a party where she played piano, unnoticed by anybody else there, that they had an affair and maintained an intermittent relationship for many years, and that he received a telephone call from her several days before she died. He also claimed numerous other romances, although he did not discuss his marriages, his wives, or his children in his autobiography.
He met Nisei actress and dancer Reiko Sato in the early 1950s; in 1954, Dorothy Kilgallen reported they were an item. Though their relationship cooled, they remained friends for the rest of Sato's life, with her dividing her time between Los Angeles and Tetiaroa in her later years.
Brando was smitten with the Mexican actress Katy Jurado after seeing her in High Noon. They met when Brando was filming Viva Zapata! in Mexico. Brando told Joseph L. Mankiewicz that he was attracted to "her enigmatic eyes, black as hell, pointing at you like fiery arrows". Their first date became the beginning of an extended affair that lasted many years and peaked at the time they worked together on One-Eyed Jacks (1960), a film directed by Brando.
Brando met actress Rita Moreno in 1954, and they began a love affair. Moreno later revealed in her memoir that when she became pregnant by Brando he arranged for an abortion. After the abortion was botched and Brando fell in love with Tarita Teriipaia, Moreno attempted suicide by overdosing on Brando's sleeping pills. Years after they broke up, Moreno played his love interest in the film The Night of the Following Day.
Brando married actress Anna Kashfi in 1957. Kashfi was born in Calcutta and moved from India to Wales in 1947. She was the daughter of a Welsh steel worker of Irish descent, William O'Callaghan, who had been a superintendent on the Indian State Railways, and his Welsh wife Phoebe. However, in her book, Brando for Breakfast, Kashfi claimed that she was half Indian and that O'Callaghan was her stepfather. She claimed that her biological father was Indian and that she was the result of an "unregistered alliance" between her parents. Brando and Kashfi had a son, Christian Brando, on May 11, 1958; they divorced in 1959.
In 1960, Brando married Movita Castaneda, a Mexican-American actress; the marriage was annulled in 1968 after it was discovered her previous marriage was still active. Castaneda had appeared in the first Mutiny on the Bounty film in 1935, some 27 years before the 1962 remake with Brando as Fletcher Christian. They had two children together: Miko Castaneda Brando (born 1961) and Rebecca Brando (born 1966).
French actress Tarita Teriipaia, who played Brando's love interest in Mutiny on the Bounty, became his third wife on August 10, 1962. She was 20 years old, 18 years younger than Brando, who was reportedly delighted by her naïveté. Because Teriipaia was a native French speaker, Brando became fluent in the language and gave numerous interviews in French. Brando and Teriipaia had two children together: Simon Teihotu Brando (born 1963) and Tarita Cheyenne Brando (1970–1995). Brando also adopted Teriipaia's daughter, Maimiti Brando (born 1977) and niece, Raiatua Brando (born 1982). Brando and Teriipaia divorced in July 1972.
After Brando's death, the daughter of actress Cynthia Lynn claimed that Brando had had a short-lived affair with her mother, who appeared with Brando in Bedtime Story, and that this affair resulted in her birth in 1964. Throughout the late 1960s and into the early 1980s, he had a tempestuous, long-term relationship with actress Jill Banner.
Brando had a long-term relationship with his housekeeper Maria Cristina Ruiz, with whom he had three children: Ninna Priscilla Brando (born May 13, 1989), Myles Jonathan Brando (born January 16, 1992), and Timothy Gahan Brando (born January 6, 1994). Brando also adopted Petra Brando-Corval (born 1972), the daughter of his assistant Caroline Barrett and novelist James Clavell.
Brando's close friendship with Wally Cox was the subject of rumors. Brando told a journalist: "If Wally had been a woman, I would have married him and we would have lived happily ever after." Two of Cox's wives, however, dismissed the suggestion that the love was more than platonic.
Brando's grandson Tuki Brando (born 1990), son of Cheyenne Brando, is a fashion model. His numerous grandchildren also include Prudence Brando and Shane Brando, children of Miko C. Brando; the children of Rebecca Brando; and the three children of Teihotu Brando among others.
Stephen Blackehart has been reported to be the son of Brando, but Blackehart disputes this claim.
In 2018, Quincy Jones and Jennifer Lee claimed that Brando had had a sexual relationship with comedian and Superman III actor Richard Pryor. Pryor's daughter Rain Pryor later disputed the claim.
Lifestyle
Brando earned a reputation as a 'bad boy' for his public outbursts and antics. According to Los Angeles magazine, "Brando was rock and roll before anybody knew what rock and roll was." His behavior during the filming of Mutiny on the Bounty (1962) seemed to bolster his reputation as a difficult star. He was blamed for a change in director and a runaway budget, though he disclaimed responsibility for either. On June 12, 1973, Brando broke paparazzo Ron Galella's jaw. Galella had followed Brando, who was accompanied by talk show host Dick Cavett, after a taping of The Dick Cavett Show in New York City. Brando paid a $40,000 out-of-court settlement and suffered an infected hand as a result of the punch. Galella wore a football helmet the next time he photographed Brando, at a gala benefiting the American Indians Development Association in 1974.
The filming of Mutiny on the Bounty affected Brando's life in a profound way, as he fell in love with Tahiti and its people. He bought a 12-island atoll, Tetiaroa, and in 1970 hired an award-winning young Los Angeles architect, Bernard Judge, to build his home and natural village there without despoiling the environment. An environmental laboratory protecting sea birds and turtles was established, and for many years student groups visited. A 1983 hurricane destroyed many of the structures, including his resort. A hotel using Brando's name, The Brando Resort, opened in 2014. Brando was an active ham radio operator, with the call signs KE6PZH and FO5GJ (the latter from his island). He was listed in the Federal Communications Commission (FCC) records as Martin Brandeaux to preserve his privacy.
In the A&E Biography episode on Brando, biographer Peter Manso comments, "On the one hand, being a celebrity allowed Marlon to take his revenge on the world that had so deeply hurt him, so deeply scarred him. On the other hand he hated it because he knew it was false and ephemeral." In the same program another biographer, David Thomson, relates, "Many, many people who worked with him, and came to work with him with the best intentions, went away in despair saying he's a spoiled kid. It has to be done his way or he goes away with some vast story about how he was wronged, he was offended, and I think that fits with the psychological pattern that he was a wronged kid."
Politics
In 1946, Brando performed in Ben Hecht's Zionist play A Flag is Born. He attended some fundraisers for John F. Kennedy in the 1960 presidential election. In August 1963, he participated in the March on Washington along with fellow celebrities Harry Belafonte, James Garner, Charlton Heston, Burt Lancaster and Sidney Poitier. Along with Paul Newman, Brando also participated in the Freedom Rides.
In the autumn of 1967, Brando visited Helsinki, Finland, for a charity party organized by UNICEF at the Helsinki City Theatre. The gala was televised in thirteen countries. Brando's visit was prompted by the famine he had seen in Bihar, India, and he presented the film he had shot there to the press and invited guests. He spoke in favor of children's rights and development aid in developing countries.
In the aftermath of the 1968 assassination of Martin Luther King Jr., Brando made one of the strongest commitments to furthering King's work. Shortly after King's death, he announced that he was bowing out of the lead role of a major film, The Arrangement (1969), which was about to begin production, in order to devote himself to the civil rights movement. "I felt I'd better go find out where it is; what it is to be black in this country; what this rage is all about," Brando said on the late-night ABC-TV talk show The Joey Bishop Show. In A&E's Biography episode on Brando, actor and co-star Martin Sheen states, "I'll never forget the night that Reverend King was shot and I turned on the news and Marlon was walking through Harlem with Mayor Lindsay. And there were snipers and there was a lot of unrest and he kept walking and talking through those neighborhoods with Mayor Lindsay. It was one of the most incredible acts of courage I ever saw, and it meant a lot and did a lot."
Brando's participation in the civil rights movement actually began well before King's death. In the early 1960s, he contributed thousands of dollars to both the Southern Christian Leadership Conference (S.C.L.C.) and to a scholarship fund established for the children of slain Mississippi N.A.A.C.P. leader Medgar Evers. In 1964 Brando was arrested at a "fish-in" held to protest a broken treaty that had promised Native Americans fishing rights in Puget Sound. By this time, Brando was already involved in films that carried messages about human rights: Sayonara, which addressed interracial romance, and The Ugly American, depicting the conduct of U.S. officials abroad and the deleterious effect on the citizens of foreign countries. For a time, he was also donating money to the Black Panther Party and considered himself a friend of founder Bobby Seale. Brando ended his financial support for the group over his perception of its increasing radicalization, specifically a passage in a Panther pamphlet put out by Eldridge Cleaver advocating indiscriminate violence, "for the Revolution."
Brando was also a supporter of the American Indian Movement. At the 1973 Academy Awards ceremony, Brando refused to accept the Oscar for his career-reviving performance in The Godfather. Sacheen Littlefeather represented him at the ceremony. She appeared in full Apache attire and stated that owing to the "poor treatment of Native Americans in the film industry", Brando would not accept the award. This occurred while the standoff at Wounded Knee was ongoing. The event grabbed the attention of the US and the world media. This was considered a major event and victory for the movement by its supporters and participants.
Outside of his film work, Brando appeared before the California Assembly in support of a fair housing law and personally joined picket lines in demonstrations protesting discrimination in housing developments in 1963.
He was also an activist against apartheid. In 1964, he favored a boycott of his films in South Africa to prevent them from being shown to segregated audiences. He took part in a 1975 protest rally against American investments in South Africa and for the release of Nelson Mandela. In 1989, Brando also starred in the film A Dry White Season, based upon André Brink's novel of the same name.
Comments on Jews and Hollywood
In an interview in Playboy magazine in January 1979, Brando said: "You've seen every single race besmirched, but you never saw an image of the kike because the Jews were ever so watchful for that—and rightly so. They never allowed it to be shown on screen. The Jews have done so much for the world that, I suppose, you get extra disappointed because they didn't pay attention to that."
Brando made a similar comment on Larry King Live in April 1996, saying:
Larry King, who was Jewish, replied: "When you say—when you say something like that, you are playing right in, though, to anti-Semitic people who say the Jews are—" Brando interrupted: "No, no, because I will be the first one who will appraise the Jews honestly and say 'Thank God for the Jews'."
Jay Kanter, Brando's agent, producer, and friend, defended him in Daily Variety: "Marlon has spoken to me for hours about his fondness for the Jewish people, and he is a well-known supporter of Israel." Similarly, Louie Kemp, in his article for Jewish Journal, wrote: "You might remember him as Don Vito Corleone, Stanley Kowalski or the eerie Col. Walter E. Kurtz in 'Apocalypse Now', but I remember Marlon Brando as a mensch and a personal friend of the Jewish people when they needed it most."
Legacy
Brando was one of the most respected actors of the post-war era. He is listed by the American Film Institute as the fourth greatest male star whose screen debut occurred before or during 1950 (his debut was in 1950). He earned respect among critics for his memorable performances and charismatic screen presence, and he helped popularize 'method acting'. He is regarded as one of the greatest cinema actors of the 20th century. Encyclopædia Britannica describes him as "the most celebrated of the method actors, and his slurred, mumbling delivery marked his rejection of classical dramatic training. His true and passionate performances proved him one of the greatest actors of his generation". It also notes the apparent paradox of his talent: "He is regarded as the most influential actor of his generation, yet his open disdain for the acting profession ... often manifested itself in the form of questionable choices and uninspired performances. Nevertheless, he remains a riveting screen presence with a vast emotional range and an endless array of compulsively watchable idiosyncrasies."
Cultural influence
Marlon Brando is a cultural icon with enduring popularity. His rise to national attention in the 1950s had a profound effect on American culture. According to film critic Pauline Kael, "Brando represented a reaction against the post-war mania for security. As a protagonist, the Brando of the early fifties had no code, only his instincts. He was a development from the gangster leader and the outlaw. He was antisocial because he knew society was crap; he was a hero to youth because he was strong enough not to take the crap ... Brando represented a contemporary version of the free American ... Brando is still the most exciting American actor on the screen." Sociologist Dr. Suzanne McDonald-Walker states: "Marlon Brando, sporting leather jacket, jeans, and moody glare, became a cultural icon summing up 'the road' in all its maverick glory." His portrayal of the gang leader Johnny Strabler in The Wild One has become an iconic image, used both as a symbol of rebelliousness and a fashion accessory that includes a Perfecto style motorcycle jacket, a tilted cap, jeans and sunglasses. Johnny's haircut inspired a craze for sideburns, followed by James Dean and Elvis Presley, among others. Dean copied Brando's acting style extensively and Presley used Brando's image as a model for his role in Jailhouse Rock. The "I coulda been a contender" scene from On the Waterfront, according to the author of Brooklyn Boomer, Martin H. Levinson, is "one of the most famous scenes in motion picture history, and the line itself has become part of America's cultural lexicon." An example of the endurance of Brando's popular "Wild One" image was the 2009 release of replicas of the leather jacket worn by Brando's Johnny Strabler character. The jackets were marketed by Triumph, the manufacturer of the Triumph Thunderbird motorcycles featured in The Wild One, and were officially licensed by Brando's estate.
Brando was also considered a male sex symbol. Linda Williams writes: "Marlon Brando [was] the quintessential American male sex symbol of the late fifties and early sixties". Brando was an early lesbian icon who, along with James Dean, influenced the butch look and self-image in the 1950s and after.
Brando has also been immortalized in music. He is mentioned in the lyrics of Bruce Springsteen's "It's Hard to Be a Saint in the City", in which one of the opening lines reads "I could walk like Brando right in to the sun", and in Neil Young's "Pocahontas", a tribute to his lifelong support of Native Americans, in which he is depicted sitting by a fire with Neil and Pocahontas. He is also mentioned in "Vogue" by Madonna, "Is This What You Wanted" by Leonard Cohen on the album New Skin for the Old Ceremony, "Eyeless" by Slipknot on their self-titled album, and in the song "Marlon Brando" from the Australian singer Alex Cameron's 2017 album Forced Witness. Bob Dylan's 2020 song "My Own Version of You" references one of his most famous performances in the line, "I'll take the Scarface Pacino and the Godfather Brando / Mix 'em up in a tank and get a robot commando".
He is also one of the many faces on the cover of the Beatles' album Sgt. Pepper's Lonely Hearts Club Band, directly above the wax model of Ringo Starr.
Brando's films, along with those of James Dean, prompted Honda to run its "You Meet the Nicest People on a Honda" ads, in order to counter the negative association motorcycles had acquired with rebels and outlaws.
Views on acting
In his autobiography Songs My Mother Taught Me, Brando observed:
He also confessed that, while having great admiration for the theater, he did not return to it after his initial success primarily because the work left him drained emotionally:
Brando repeatedly credited Stella Adler and her understanding of the Stanislavski acting technique for bringing realism to American cinema, but also added:
In the 2015 documentary Listen to Me Marlon, Brando shared his thoughts on playing a death scene, stating, "That's a tough scene to play. You have to make 'em believe that you are dying ... Try to think of the most intimate moment you've ever had in your life." His favorite actors were Spencer Tracy, John Barrymore, Fredric March, James Cagney and Paul Muni. He also showed admiration for Sean Penn, Jack Nicholson, Johnny Depp and Daniel Day-Lewis.
Financial legacy
On his death in 2004, Brando left an estate valued at $21.6 million. According to Forbes, his estate still earned about $9 million in 2005, and that year the magazine named him as one of the top-earning deceased celebrities in the world.
In December 2019, the Rolex GMT Master Ref. 1675 worn by Brando in Francis Ford Coppola's Vietnam War epic Apocalypse Now was announced to be sold at an auction, with an expected price tag of up to $1 million.
Filmography
Awards and honors
Brando was named the fourth greatest male star whose screen debut occurred before or during 1950 by the American Film Institute, and was included in Time magazine's Time 100: The Most Important People of the Century. He was also named one of the top 10 "Icons of the Century" by Variety magazine.
See also
List of actors who have appeared in multiple Best Picture Academy Award winners
List of oldest and youngest Academy Award winners and nominees
List of actors with Academy Award nominations
List of actors with two or more Academy Award nominations in acting categories
List of actors with two or more Academy Awards in acting categories
List of LGBT Academy Award winners and nominees
References
Notes
Citations
Bibliography
Bain, David Haward. The Old Iron Road: An Epic of Rails, Roads, and the Urge to Go West. New York: Penguin Books, 2004. .
Brando, Marlon and Donald Cammell. Fan-Tan. New York: Knopf, 2005. .
Englund, George. The Way It's Never Been Done Before: My Friendship With Marlon Brando. New York: Harper Collins Publishers, 2004. .
Grobel, Lawrence. Conversations with Brando. New York: Hyperion, 1990; Cooper Square Press, 1999; Rat Press, 2009.
Judge, Bernard. Waltzing With Brando: Planning a Paradise in Tahiti. New York: ORO Editions, 2011.
McDonough, Jimmy. Big Bosoms and Square Jaws: The Biography of Russ Meyer, King of the Sex Film. New York: Crown, 2005. .
Pendergast, Tom and Sara. St. James Encyclopedia of Popular Culture, Volume 1. Detroit, Michigan: St. James Press, 2000. .
Petkovich, Anthony. "Burn, Brando, Burn!". UK: Headpress 19: World Without End (1999), pp. 91–112.
Schoell, William. The Sundance Kid: A Biography of Robert Redford. Boulder, CO: Taylor Trade Publishing, 2006.
External links
Vanity Fair: "The King Who Would Be Man" by Budd Schulberg
The New Yorker: "The Duke in His Domain" – Truman Capote's influential 1957 interview.
Excess after success: Marlon Brando
1924 births
2004 deaths
20th-century American male actors
21st-century American male actors
Amateur radio people
American male film actors
American male stage actors
American male television actors
American people of Irish descent
Best Actor Academy Award winners
Best Drama Actor Golden Globe (film) winners
Best Foreign Actor BAFTA Award winners
Cannes Film Festival Award for Best Actor winners
David di Donatello winners
Deaths from pulmonary fibrosis
Deaths from respiratory failure
Donaldson Award winners
Film directors from Illinois
Former Christian Scientists
LGBT actors from the United States
Bisexual male actors
LGBT people from Illinois
LGBT people from Nebraska
Male actors from Evanston, Illinois
Male actors from Omaha, Nebraska
Method actors
Native Americans' rights activists
Outstanding Performance by a Supporting Actor in a Miniseries or Movie Primetime Emmy Award winners
People from Libertyville, Illinois
People from Sayville, New York
Stella Adler Studio of Acting alumni
Brando family
American people of German descent
American people of Dutch descent
American people of English descent
American bisexual actors
Golden Raspberry Award winners |
19904 | https://en.wikipedia.org/wiki/Meteorology | Meteorology | Meteorology is a branch of the atmospheric sciences (which include atmospheric chemistry and atmospheric physics), with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in the field did not begin until the 18th century. The 19th century saw modest progress after weather observation networks were formed across broad regions; earlier attempts at prediction depended on historical data. Significant breakthroughs in weather forecasting were not achieved until the latter half of the 20th century, after the elucidation of the laws of physics and, more particularly, the development of the computer, which allowed the automated solution of the great many equations that model the weather. An important branch of weather forecasting is marine weather forecasting, which relates to maritime and coastal safety and in which weather effects also include atmospheric interactions with large bodies of water.
Meteorological phenomena are observable weather events that are explained by the science of meteorology. Meteorological phenomena are described and quantified by the variables of Earth's atmosphere: temperature, air pressure, water vapour, mass flow, and the variations and interactions of these variables, and how they change over time. Different spatial scales are used to describe and predict weather on local, regional, and global levels.
Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in many diverse fields such as the military, energy production, transport, agriculture, and construction.
The word meteorology is from the Ancient Greek μετέωρος metéōros ("lofty; high in the air") and -λογία -logia (-(o)logy), meaning "the study of things high in the air."
History
The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos.
Ancient Indian Upanishads contain mentions of clouds and seasons. The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides evidence of weather observation.
In 350 BC, Aristotle wrote Meteorology, and he is considered the founder of the field. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle.
The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted:
If the flashing body is set on fire and rushes violently to the Earth it is called a thunderbolt; if it is only half of fire, but violent also and massive, it is called a meteor; if it is entirely free from fire, it is called a smoking bolt. They are all called 'swooping bolts' because they swoop down upon the Earth. Lightning is sometimes smoky, and is then called 'smoldering lightning'; sometimes it darts quickly along, and is then said to be vivid. At other times, it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called 'swooping lightning'.
The Greek scientist Theophrastus compiled a book on weather forecasting, called the Book of Signs. The work of Theophrastus remained a dominant influence in the study of weather and in weather forecasting for nearly 2,000 years. In 25 AD, Pomponius Mela, a geographer for the Roman Empire, formalized the climatic zone system. According to Toufic Fahd, around the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the sun and moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes.
Early attempts at predicting weather were often related to prophecy and divining, and were sometimes based on astrological ideas. Admiral FitzRoy tried to separate scientific approaches from prophetic ones.
Research of visual atmospheric phenomena
Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. In 1021, Alhazen showed that atmospheric refraction is also responsible for twilight; he estimated that twilight begins when the sun is 19 degrees below the horizon, and also used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passuum (about 49 miles, or 79 km).
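Alhazen's estimate follows from simple tangent-ray geometry: sunlight grazing the Earth still illuminates the atmosphere at a height h an angular distance of half the depression angle from the observer. The sketch below reproduces the calculation with a modern value for Earth's radius (an assumption of this sketch; Alhazen's own, smaller radius estimate yields a figure near his 79 km):

```python
import math

R = 6371.0    # Earth's mean radius in km (modern value; Alhazen used a smaller figure)
theta = 19.0  # solar depression angle at the end of twilight, in degrees

# Tangent-ray geometry: the last illuminated air lies at height
# h = R * (sec(theta/2) - 1) above the surface.
h = R * (1.0 / math.cos(math.radians(theta / 2.0)) - 1.0)
print(f"Estimated maximum height of the atmosphere: {h:.0f} km")
```

With the modern radius this gives roughly 89 km, the same order as Alhazen's result.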
St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theodoric went further and also explained the secondary rainbow. In 1716, Edmond Halley suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines.
Instruments and classification scales
In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, which was known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modification of Clouds, in which he assigned cloud types Latin names. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age where weather information became available globally.
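Beaufort's system originally classified winds by their effect on sails and sea state; a widely cited later empirical fit, v ≈ 0.836 · B^(3/2) m/s, lets the Beaufort number be read off from a measured wind speed. A minimal sketch with the relation inverted:

```python
def beaufort_number(wind_speed_ms):
    """Approximate Beaufort number for a wind speed in m/s, using the
    empirical relation v = 0.836 * B**1.5 (inverted here); capped at 12."""
    return min(12, round((wind_speed_ms / 0.836) ** (2.0 / 3.0)))

print(beaufort_number(10.0))   # force 5, a fresh breeze
print(beaufort_number(35.0))   # force 12, hurricane force
```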
Atmospheric composition research
In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, and the two explained their results in terms of the phlogiston theory. In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. In 1783, in Lavoisier's essay "Reflexions sur le phlogistique," he deprecates the phlogiston theory and proposes a caloric theory. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics.
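Pascal's observation that pressure falls with height is captured quantitatively by the barometric formula, which follows from hydrostatic balance and the ideal gas law. A sketch assuming an isothermal atmosphere; the altitude used is roughly that of the Puy de Dôme, the mountain climbed in the 1648 Pascal–Périer experiment:

```python
import math

def pressure_at_height(h_m, p0=101325.0, temp_k=288.15):
    """Isothermal barometric formula: p(h) = p0 * exp(-M*g*h / (R*T))."""
    M = 0.0289644   # molar mass of dry air, kg/mol
    g = 9.80665     # gravitational acceleration, m/s^2
    R = 8.31446     # universal gas constant, J/(mol*K)
    return p0 * math.exp(-M * g * h_m / (R * temp_k))

# Pressure near the summit of the Puy de Dome (~1465 m) vs. sea level:
print(f"{pressure_at_height(1465.0):.0f} Pa")   # roughly 85 kPa
```

The exponential decay means pressure drops by about 1 hPa for every 8 m near sea level, the gradient Périer's barometer readings revealed.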
Research into cyclones and air flow
In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmond Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, George Hadley proposed an idealized explanation of the global circulation through his study of the trade winds. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. Understanding of the kinematics of how exactly the rotation of the Earth affects airflow was at first only partial. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force to create the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes.
Observation networks and weather forecasting
In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer, barometer, hydrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de' Medici established the first weather observing network, which consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health.
During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equator air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area.
These data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key to the early understanding of cirrus clouds and jet streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas, read Ley's papers after his death and carried on the early study of weather systems.
Nineteenth-century researchers in meteorology were drawn from military or medical backgrounds rather than trained as dedicated scientists. In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria, founded in 1851, is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected.
Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and monsoons. The Finnish Meteorological Central Office (1881) was formed from part of Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services.
Numerical weather prediction
In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws.
It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction by Numerical Process," after finding notes and derivations he had worked on as an ambulance driver in World War I. He described how small terms in the prognostic fluid-dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results; numerical analysis later showed that this was due to numerical instability.
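The instability that ruined Richardson's calculation is now well understood: explicit finite-difference schemes stay stable only when the time step is small enough relative to the grid spacing and wind speed (the Courant–Friedrichs–Lewy condition). A minimal illustration using one-dimensional advection on a periodic grid (a toy scheme for demonstration, not an actual forecast model):

```python
import math

def advect(courant, steps=200, n=100):
    """Advance a sine wave with a first-order upwind scheme and return the
    largest amplitude after `steps` time steps.  The scheme is stable only
    when the Courant number c = u*dt/dx satisfies c <= 1."""
    u = [math.sin(2.0 * math.pi * i / n) for i in range(n)]
    for _ in range(steps):
        u = [u[i] - courant * (u[i] - u[i - 1]) for i in range(n)]  # periodic grid
    return max(abs(v) for v in u)

print(advect(0.5))   # stays bounded: a stable choice of time step
print(advect(1.5))   # grows without bound: time step too large for the grid
```

Richardson's grid and time step violated this kind of constraint, so small errors amplified at every step until the forecast was swamped.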
Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury.
In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, which takes into account the uncertainty arising from the chaotic nature of the atmosphere. Mathematical models used to predict the long-term weather of the Earth (climate models) have been developed; their resolution today is as coarse as that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases.
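Lorenz's discovery, that tiny differences in initial conditions grow until forecasts diverge completely, can be reproduced with a few lines of numerical integration of his 1963 three-variable system (forward Euler with a small step is used here for brevity; production models use higher-order integrators):

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)     # a second run, perturbed by one part in 10**8
for _ in range(3000):           # integrate both for 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = math.dist(a, b)    # grows from 1e-8 to the size of the attractor
print(f"separation after 30 time units: {separation:.3g}")
```

Ensemble forecasting exploits exactly this behaviour: many runs with slightly perturbed initial conditions map out the spread of plausible outcomes.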
Meteorologists
Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary. Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018.
Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements but this is not mandatory to be hired by the media.
Equipment
Each science has its own unique sets of laboratory equipment, and in the atmosphere there are many qualities that can be measured. Rain, which can be observed anywhere and anytime, was one of the first atmospheric qualities measured historically. Two other accurately measured qualities are wind and humidity; neither can be seen, but both can be felt. The devices to measure these three sprang up in the mid-15th century: respectively, the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure atmospheric variables, but many instruments were faulty in some way or simply unreliable. Even Aristotle noted the difficulty of measuring the air in some of his work.
Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at one single location and are usually at a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure, wind measurements, and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor, lightning sensor, microphone (explosions, sonic booms, thunder), pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), rain gauge/snow gauge, scintillation counter (background radiation, fallout, radon), seismometer (earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging. Upper air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes. Supplementing the radiosondes a network of aircraft collection is organized by the World Meteorological Organization.
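Quantities that stations do not measure directly are often derived from the ones they do. For example, the dew point can be computed from the thermometer and hygrometer readings via the Magnus approximation (one common coefficient set is used in this sketch; several are in circulation):

```python
import math

def dew_point(temp_c, rel_humidity_pct):
    """Dew point in deg C from air temperature (deg C) and relative
    humidity (%), using the Magnus approximation."""
    a, b = 17.62, 243.12   # Magnus coefficients for water, roughly -45..60 degC
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

print(f"{dew_point(20.0, 50.0):.1f} deg C")   # about 9.3 deg C
```

At 100% relative humidity the formula returns the air temperature itself, as expected, since saturated air is already at its dew point.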
Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are Radar, Lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and Lidar are not passive because both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño.
Spatial scales
The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology. The geospatial size of each of these three scales relates directly to its timescale.
Other subclassifications are used to describe the unique, local, or broad effects within those subclasses.
Microscale
Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 km (0.6 mi) or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale.
Mesoscale
Mesoscale meteorology is the study of atmospheric phenomena that have horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause, and the lower section of the stratosphere. Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms, squall lines, fronts, precipitation bands in tropical and extratropical cyclones, and topographically generated weather systems such as mountain waves and sea and land breezes.
Synoptic scale
Synoptic scale meteorology predicts atmospheric changes at scales up to 1000 km and 10⁵ seconds (about 28 hours) in time and space. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones, and to some extent jet streams. All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations.
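The dominance of the Coriolis acceleration at synoptic scales can be quantified with the Rossby number, Ro = U / (fL), the ratio of inertial to Coriolis accelerations; values much less than one indicate rotation-dominated flow. A sketch using representative mid-latitude figures:

```python
import math

def rossby_number(wind_speed, length_scale, latitude_deg):
    """Ro = U / (f * L), with the Coriolis parameter f = 2*Omega*sin(lat)."""
    omega = 7.2921e-5   # Earth's rotation rate, rad/s
    f = 2.0 * omega * math.sin(math.radians(latitude_deg))
    return wind_speed / (f * length_scale)

# Synoptic scale: U ~ 10 m/s over L ~ 1000 km at 45 deg latitude
print(f"{rossby_number(10.0, 1.0e6, 45.0):.2f}")   # ~0.1: Coriolis dominates
# Microscale: the same wind over 1 km
print(f"{rossby_number(10.0, 1.0e3, 45.0):.0f}")   # ~100: Coriolis negligible
```

The small synoptic-scale value is why mid-latitude winds blow nearly along isobars (geostrophic balance), while microscale flows can ignore the Earth's rotation entirely.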
Global scale
Global scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles. Very large scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation, or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation. Global scale meteorology pushes into the range of climatology. The traditional definition of climate is pushed into larger timescales and with the understanding of the longer time scale global oscillations, their effect on climate and weather disturbances can be included in the synoptic and mesoscale timescales predictions.
Numerical Weather Prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes. The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at Fleet Numerical Meteorology and Oceanography Center for the United States Military. Many other global atmospheric models are run by national meteorological agencies.
Some meteorological principles
Boundary layer meteorology
Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Significant movement of heat, matter, or momentum on time scales of less than a day is caused by turbulent motions (Garratt, J. R., The Atmospheric Boundary Layer, Cambridge University Press, 1992). Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land and non-urban land for the study of meteorology.
Dynamic meteorology
Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as a point in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum.
Applications
Weather forecasting
Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve.
Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, forecast models are now used to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome.
There are a variety of end uses to weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, people use weather forecasts to determine what to wear. Since outdoor activities are severely curtailed by heavy rain, snow, and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.
Aviation meteorology
Aviation meteorology deals with the impact of weather on air traffic management. It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual:
The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage.
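The quoted effect on stall speed follows from the lift equation L = ½ρV²SC_L: at stall the aircraft's weight must be carried at the maximum lift coefficient, so V_stall = sqrt(2W / (ρ S C_Lmax)), and halving the lifting power raises the stall speed by a factor of sqrt(2). A sketch of that scaling, using illustrative fractions rather than data for any particular aircraft:

```python
import math

def stall_speed_ratio(lift_coeff_fraction, weight_fraction=1.0):
    """Ratio of iced to clean stall speed.  From V_stall proportional to
    sqrt(W / CLmax): ratio = sqrt(weight_fraction / lift_coeff_fraction)."""
    return math.sqrt(weight_fraction / lift_coeff_fraction)

print(f"{stall_speed_ratio(0.5):.2f}")        # CLmax halved: ~1.41, a 41% faster stall
print(f"{stall_speed_ratio(0.5, 1.05):.2f}")  # plus 5% extra ice weight: ~1.45
```

The square-root dependence is why even a modest loss of lifting power translates into a substantial rise in stall speed.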
Agricultural meteorology
Meteorologists, soil scientists, agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield, water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. Conversely, they are interested in the role of vegetation in climate and weather.
Hydrometeorology
Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle, the water budget, and the rainfall statistics of storms. A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge that is required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences.
The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hard- and software platforms and use different data formats. There are some initiatives – such as the DRIHM project – that are trying to address this issue.
Nuclear meteorology
Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere.
Maritime meteorology
Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center, Honolulu National Weather Service forecast office, United Kingdom Met Office, and JMA prepare high seas forecasts for the world's oceans.
Military meteorology
Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps while the United States Air Force's Air Force Weather Agency is responsible for the Air Force and Army.
Environmental meteorology
Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically based on meteorological parameters such as temperature, humidity, wind, and various weather conditions.
Renewable energy
Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy.
See also
References
Further reading
Byers, Horace. General Meteorology. New York: McGraw-Hill, 1994.
Dictionaries and encyclopedias
External links
Please see weather forecasting for weather forecast sites.
Air Quality Meteorology – Online course that introduces the basic concepts of meteorology and air quality necessary to understand meteorological computer models. Written at a bachelor's degree level.
The GLOBE Program – (Global Learning and Observations to Benefit the Environment) An international environmental science and education program that links students, teachers, and the scientific research community in an effort to learn more about the environment through student data collection and observation.
Glossary of Meteorology – From the American Meteorological Society, an excellent reference of nomenclature, equations, and concepts for the more advanced reader.
JetStream – An Online School for Weather – National Weather Service
Learn About Meteorology – Australian Bureau of Meteorology
The Weather Guide – Weather Tutorials and News at About.com
Meteorology Education and Training (MetEd) – The COMET Program
NOAA Central Library – National Oceanic & Atmospheric Administration
The World Weather 2010 Project – The University of Illinois at Urbana–Champaign
Ogimet – online data from meteorological stations of the world, obtained through NOAA free services
National Center for Atmospheric Research Archives, documents the history of meteorology
Weather forecasting and Climate science – United Kingdom Meteorological Office
Meteorology, BBC Radio 4 discussion with Vladimir Janković, Richard Hamblyn and Liba Taub (In Our Time, 6 March 2003)
Virtual exhibition about meteorology on the digital library of Paris Observatory
Applied and interdisciplinary physics
Oceanography
Physical geography
Greek words and phrases
Mount
Mount is often used as part of the name of specific mountains, e.g. Mount Everest.
Mount or Mounts may also refer to:
Places
Mounts, Indiana, a community in Gibson County, Indiana, United States
People
Mount (surname)
William L. Mounts (1862–1929), American lawyer and politician
Computing and software
Mount (computing), the process of making a file system accessible
Mount (Unix), the utility in Unix-like operating systems which mounts file systems
Displays and equipment
Mount, a fixed point for attaching equipment, such as a hardpoint on an airframe
Mount, a hanging scroll for mounting paintings
Mount, to display an item on a heavy backing such as foamcore, e.g.:
To attach a picture or a painting to a support, followed by framing it
To pin a biological specimen, on a heavy backing in a stretched stable position for ease of dissection or display
To prepare dead animals for display in taxidermy
Lens mount, an interface used to fix a lens to a camera
Mounting, placing a cover slip on a specimen on a microscopic slide
Telescope mount, a device used to support a telescope
Weapon mount, equipment used to secure an armament
Picture mount
Sports
Mount (grappling), a grappling position
Mount, to board an apparatus used for gymnastics, such as a balance beam
Other uses
Mount, in copulation, the union of the sex organs in mating
Mount, a riding animal
Mount, or Vahana, an animal or mythical entity closely associated with a particular deity in Hindu mythology
Mount, to add butter to a sauce in order to thicken it, as with beurre monté
See also
The Mount (disambiguation)
Mountain (disambiguation)
Massif (disambiguation)
Hill (disambiguation)
Meitnerium
Meitnerium is a synthetic chemical element with the symbol Mt and atomic number 109. It is an extremely radioactive synthetic element (an element not found in nature, but one that can be created in a laboratory). The most stable known isotope, meitnerium-278, has a half-life of 4.5 seconds, although the unconfirmed meitnerium-282 may have a longer half-life of 67 seconds. The GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany, first created this element in 1982. It is named after Lise Meitner.
In the periodic table, meitnerium is a d-block transactinide element. It is a member of the 7th period and is placed in the group 9 elements, although no chemical experiments have yet been carried out to confirm that it behaves as the heavier homologue to iridium in group 9 as the seventh member of the 6d series of transition metals. Meitnerium is calculated to have similar properties to its lighter homologues, cobalt, rhodium, and iridium.
Introduction
History
Discovery
Meitnerium was first synthesized on August 29, 1982, by a German research team led by Peter Armbruster and Gottfried Münzenberg at the Institute for Heavy Ion Research (Gesellschaft für Schwerionenforschung) in Darmstadt. The team bombarded a target of bismuth-209 with accelerated nuclei of iron-58 and detected a single atom of the isotope meitnerium-266:
209Bi + 58Fe → 266Mt + n
This work was confirmed three years later at the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union).
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, meitnerium should be known as eka-iridium. In 1979, during the Transfermium Wars (but before the synthesis of meitnerium), IUPAC published recommendations according to which the element was to be called unnilennium (with the corresponding symbol of Une), a systematic element name as a placeholder, until the element was discovered (and the discovery then confirmed) and a permanent name was decided on. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who either called it "element 109", with the symbol of E109, (109) or even simply 109, or used the proposed name "meitnerium".
The naming of meitnerium was discussed in the element naming controversy regarding the names of elements 104 to 109, but meitnerium was the only proposal and thus was never disputed. The name meitnerium (Mt) was suggested by the GSI team in September 1992 in honor of the Austrian physicist Lise Meitner, a co-discoverer of protactinium (with Otto Hahn), and one of the discoverers of nuclear fission. In 1994 the name was recommended by IUPAC, and was officially adopted in 1997. It is thus the only element named specifically after a non-mythological woman (curium being named for both Pierre and Marie Curie).
Isotopes
Meitnerium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Eight different isotopes of meitnerium have been reported with atomic masses 266, 268, 270, and 274–278, two of which, meitnerium-268 and meitnerium-270, have known but unconfirmed metastable states. A ninth isotope with atomic mass 282 is unconfirmed. Most of these decay predominantly through alpha decay, although some undergo spontaneous fission.
Stability and half-lives
All meitnerium isotopes are extremely unstable and radioactive; in general, heavier isotopes are more stable than the lighter. The most stable known meitnerium isotope, 278Mt, is also the heaviest known; it has a half-life of 4.5 seconds. The unconfirmed 282Mt is even heavier and appears to have a longer half-life of 67 seconds. The isotopes 276Mt and 274Mt have half-lives of 0.45 and 0.44 seconds respectively. The remaining five isotopes have half-lives between 1 and 20 milliseconds.
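As an illustrative aside (not part of the source material), the half-lives quoted above can be converted into surviving fractions with the standard radioactive-decay law N(t) = N0 · 2^(−t/T½). The sketch below uses the 4.5-second figure for 278Mt from the text; the function name is purely illustrative.

```python
def surviving_fraction(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of an initial nuclide population remaining after time t,
    from the standard decay law N(t) = N0 * 2**(-t / T_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# 278Mt has a half-life of 4.5 s (figure from the text above).
print(surviving_fraction(4.5, 4.5))   # 0.5: half the nuclei remain after one half-life
print(surviving_fraction(45.0, 4.5))  # ten half-lives: 2**-10, under 0.1% remains
```

The rapid decay this implies is why, as discussed below, chemical experiments on meitnerium must detect single atoms within seconds of their production.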
The isotope 277Mt, created as the final decay product of 293Ts for the first time in 2012, was observed to undergo spontaneous fission with a half-life of 5 milliseconds. Preliminary data analysis considered the possibility of this fission event instead originating from 277Hs, for it also has a half-life of a few milliseconds, and could be populated following undetected electron capture somewhere along the decay chain. This possibility was later deemed very unlikely based on observed decay energies of 281Ds and 281Rg and the short half-life of 277Mt, although there is still some uncertainty of the assignment. Regardless, the rapid fission of 277Mt and 277Hs is strongly suggestive of a region of instability for superheavy nuclei with N = 168–170. The existence of this region, characterized by a decrease in fission barrier height between the deformed shell closure at N = 162 and spherical shell closure at N = 184, is consistent with theoretical models.
Predicted properties
Other than nuclear properties, no properties of meitnerium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that meitnerium and its parents decay very quickly. Properties of meitnerium metal remain unknown and only predictions are available.
Chemical
Meitnerium is the seventh member of the 6d series of transition metals, and should be much like the platinum group metals. Calculations on its ionization potentials and atomic and ionic radii are similar to that of its lighter homologue iridium, thus implying that meitnerium's basic properties will resemble those of the other group 9 elements, cobalt, rhodium, and iridium.
Prediction of the probable chemical properties of meitnerium has not received much attention recently. Meitnerium is expected to be a noble metal. The standard electrode potential for the Mt3+/Mt couple is expected to be 0.8 V. Based on the most stable oxidation states of the lighter group 9 elements, the most stable oxidation states of meitnerium are predicted to be the +6, +3, and +1 states, with the +3 state being the most stable in aqueous solutions. In comparison, rhodium and iridium show a maximum oxidation state of +6, while the most stable states are +4 and +3 for iridium and +3 for rhodium. The oxidation state +9, represented only by iridium in [IrO4]+, might be possible for its congener meitnerium in the nonafluoride (MtF9) and the [MtO4]+ cation, although [IrO4]+ is expected to be more stable than these meitnerium compounds. The tetrahalides of meitnerium have also been predicted to have similar stabilities to those of iridium, thus also allowing a stable +4 state. It is further expected that the maximum oxidation states of elements from bohrium (element 107) to darmstadtium (element 110) may be stable in the gas phase but not in aqueous solution.
Physical and atomic
Meitnerium is expected to be a solid under normal conditions and assume a face-centered cubic crystal structure, similarly to its lighter congener iridium. It should be a very heavy metal with a density of around 27–28 g/cm3, which would be among the highest of any of the 118 known elements. Meitnerium is also predicted to be paramagnetic.
Theoreticians have predicted the covalent radius of meitnerium to be 6 to 10 pm larger than that of iridium. The atomic radius of meitnerium is expected to be around 128 pm.
Experimental chemistry
Meitnerium is the first element on the periodic table whose chemistry has not yet been investigated. Unambiguous determination of the chemical characteristics of meitnerium has yet to be established due to the short half-lives of meitnerium isotopes and a limited number of likely volatile compounds that could be studied on a very small scale. One of the few meitnerium compounds that are likely to be sufficiently volatile is meitnerium hexafluoride (MtF6), as its lighter homologue iridium hexafluoride (IrF6) is volatile above 60 °C and therefore the analogous compound of meitnerium might also be sufficiently volatile; a volatile octafluoride (MtF8) might also be possible. For chemical studies to be carried out on a transactinide, at least four atoms must be produced, the half-life of the isotope used must be at least 1 second, and the rate of production must be at least one atom per week. Even though the half-life of 278Mt, the most stable confirmed meitnerium isotope, is 4.5 seconds, long enough to perform chemical studies, another obstacle is the need to increase the rate of production of meitnerium isotopes and allow experiments to carry on for weeks or months so that statistically significant results can be obtained. Separation and detection must be carried out continuously to separate out the meitnerium isotopes and have automated systems experiment on the gas-phase and solution chemistry of meitnerium, as the yields for heavier elements are predicted to be smaller than those for lighter elements; some of the separation techniques used for bohrium and hassium could be reused. However, the experimental chemistry of meitnerium has not received as much attention as that of the heavier elements from copernicium to livermorium.
The Lawrence Berkeley National Laboratory attempted to synthesize the isotope 271Mt in 2002–2003 for a possible chemical investigation of meitnerium because it was expected that it might be more stable than the isotopes around it as it has 162 neutrons, a magic number for deformed nuclei; its half-life was predicted to be a few seconds, long enough for a chemical investigation. However, no atoms of 271Mt were detected, and this isotope of meitnerium is currently unknown.
An experiment determining the chemical properties of a transactinide would need to compare a compound of that transactinide with analogous compounds of some of its lighter homologues: for example, in the chemical characterization of hassium, hassium tetroxide (HsO4) was compared with the analogous osmium compound, osmium tetroxide (OsO4). In a preliminary step towards determining the chemical properties of meitnerium, the GSI attempted sublimation of the rhodium compounds rhodium(III) oxide (Rh2O3) and rhodium(III) chloride (RhCl3). However, macroscopic amounts of the oxide would not sublimate until 1000 °C and the chloride would not until 780 °C, and then only in the presence of carbon aerosol particles: these temperatures are far too high for such procedures to be used on meitnerium, as most of the current methods used for the investigation of the chemistry of superheavy elements do not work above 500 °C.
Following the 2014 successful synthesis of seaborgium hexacarbonyl, Sg(CO)6, studies were conducted with the stable transition metals of groups 7 through 9, suggesting that carbonyl formation could be extended to further probe the chemistries of the early 6d transition metals from rutherfordium to meitnerium inclusive. Nevertheless, the challenges of low half-lives and difficult production reactions make meitnerium difficult to access for radiochemists, though the isotopes 278Mt and 276Mt are long-lived enough for chemical research and may be produced in the decay chains of 294Ts and 288Mc respectively. 276Mt is likely more suitable, since producing tennessine requires a rare and rather short-lived berkelium target. The isotope 270Mt, observed in the decay chain of 278Nh with a half-life of 0.69 seconds, may also be sufficiently long-lived for chemical investigations, though a direct synthesis route leading to this isotope and more precise measurements of its decay properties would be required.
Notes
References
Bibliography
External links
Meitnerium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Transition metals
Synthetic elements
Chemical elements with face-centered cubic structure
Megabyte
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1,000,000 (10^6) in the International System of Units (SI). Therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities.
However, in the computer and information technology fields, two other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes (2^20 B), a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes, in which this quantity is designated by the unit mebibyte (MiB). In one context, the megabyte has been used to mean 1000×1024 (1,024,000) bytes.
Definitions
The megabyte is commonly used to measure either 1000^2 bytes or 1024^2 bytes. The interpretation of using base 1024 originated as technical jargon for the byte multiples that needed to be expressed by the powers of 2 but lacked a convenient name. As 1024 (2^10) approximates 1000 (10^3), roughly corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to strictly denote 1000^2 bytes and mebibyte to denote 1024^2 bytes. By the end of 2009, the IEC Standard had been adopted by the IEEE, EU, ISO and NIST. Nevertheless, the term megabyte continues to be widely used with different meanings:
Base 10
1 MB = 1,000,000 bytes (= 1000^2 B = 10^6 B) is the definition recommended for the International System of Units (SI) and by the International Electrotechnical Commission (IEC). This definition is used in networking contexts and most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software. Since Snow Leopard, file sizes are reported in decimal units.
In this convention, one thousand megabytes (1000 MB) is equal to one gigabyte (1 GB), where 1 GB is one billion bytes.
Base 2
1 MB = 1,048,576 bytes (= 1024^2 B = 2^20 B) is the definition used by Microsoft Windows in reference to computer memory, such as RAM. This definition is synonymous with the unambiguous binary prefix mebibyte.
In this convention, one thousand and twenty-four megabytes (1024 MB) is equal to one gigabyte (1 GB), where 1 GB is 1024^3 bytes (i.e., 1 GiB).
Mixed
1 MB = 1,024,000 bytes (= 1000×1024 B) is the definition used to describe the formatted capacity of the 1.44 MB HD floppy disk, which actually has a capacity of 1,474,560 bytes (1.44 × 1,024,000).
Randomly addressable semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive is the product of the sector size, number of sectors per track, number of tracks per side, and the number of disk platters in the drive. Changes in any of these factors would not usually double the size.
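The three conventions above differ only in arithmetic, so a minimal sketch makes the byte counts concrete (the constant names here are purely illustrative):

```python
# The three "megabyte" conventions discussed above, in bytes.
MB_SI = 1000 ** 2        # decimal / SI definition: 1,000,000 bytes
MB_BINARY = 1024 ** 2    # binary definition (the mebibyte, MiB): 1,048,576 bytes
MB_MIXED = 1000 * 1024   # mixed definition, as on the "1.44 MB" floppy: 1,024,000 bytes

print(MB_SI, MB_BINARY, MB_MIXED)   # 1000000 1048576 1024000
print(round(1.44 * MB_MIXED))       # 1474560: the floppy's actual byte capacity
```

The roughly 4.9% gap between MB_SI and MB_BINARY is the source of the familiar discrepancy between a drive's advertised and reported capacity.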
Examples of use
Depending on compression methods and file format, a megabyte of data can roughly be:
a 1 megapixel bitmap image (e.g. ~1152 × 864) with 256 colors (8 bits/pixel color depth) stored without any compression.
a 4 megapixel JPEG image (e.g. ~2560 × 1600) with normal compression.
6 seconds of 44.1 kHz/16 bit uncompressed CD audio.
1 minute of 128 kbit/s MP3 lossy compressed audio.
a typical English book volume in plain text format (500 pages × 2000 characters per page).
The human genome consists of DNA representing 800 MB of data. The parts that differentiate one person from another can be compressed to 4 MB.
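The audio and text examples above follow from simple arithmetic. The sketch below reproduces them; note it assumes stereo CD audio and one byte per text character, which the source does not state explicitly:

```python
# Rough byte counts behind the examples above (assumptions: stereo CD audio,
# 1 byte per plain-text character; all figures are approximate).
cd_bytes = 44100 * 2 * 2 * 6      # 44.1 kHz x 2 bytes (16 bit) x 2 channels x 6 s
mp3_bytes = 128_000 * 60 // 8     # 128 kbit/s for 1 minute, 8 bits per byte
book_bytes = 500 * 2000           # 500 pages x 2000 characters per page

print(cd_bytes, mp3_bytes, book_bytes)  # 1058400 960000 1000000
```

Each result lands within about 6% of one decimal megabyte, which is why these serve as convenient mental yardsticks.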
See also
Timeline of binary prefixes
References
External links
Historical Notes About The Cost Of Hard Drive Storage Space
the megabyte (established definition in Networking and Storage industries; from whatis.com)
International Electrotechnical Commission definitions
IEC prefixes and symbols for binary multiples
How Many MB in a GB
Units of information
Monosaccharide
Monosaccharides (from Greek monos: single, sacchar: sugar), also called simple sugars, are the simplest form of sugar and the most basic units (monomers) of carbohydrates. The general formula is CnH2nOn, or [Cn(H2O)n] or (CH2O)n, albeit not all molecules fitting this formula (e.g. acetic acid) are carbohydrates. They are usually colorless, water-soluble, and crystalline solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste.
Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose. Monosaccharides are the building blocks of disaccharides (such as sucrose and lactose) and polysaccharides (such as cellulose and starch). Each carbon atom that supports a hydroxyl group is chiral, except those at the end of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses, but have different physical structures and chemical properties.
The monosaccharide glucose plays a pivotal role in metabolism, where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms. Some other monosaccharides can be converted in the living organism to glucose.
Structure and nomenclature
With few exceptions (e.g., deoxyribose), monosaccharides have this chemical formula: (CH2O)x, where conventionally x ≥ 3. Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on.
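The (CH2O)x pattern can be spelled out per class; the sketch below is illustrative only, with the function name being an assumption (class names and the formula pattern are taken from the text above):

```python
def monosaccharide_formula(x: int) -> str:
    """Elemental formula (CH2O)x for a simple monosaccharide, written out as CxH2xOx."""
    return f"C{x}H{2 * x}O{x}"

classes = {3: "triose", 4: "tetrose", 5: "pentose", 6: "hexose", 7: "heptose"}
for x, name in classes.items():
    print(name, monosaccharide_formula(x))
# e.g. hexose -> C6H12O6, the familiar formula of glucose
```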
Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose. Ribose and deoxyribose (in RNA and DNA respectively) are pentose sugars. Examples of heptoses include the ketoses, mannoheptulose and sedoheptulose. Monosaccharides with eight or more carbons are rarely observed as they are quite unstable. In aqueous solutions monosaccharides exist as rings if they have more than four carbons.
Linear-chain monosaccharides
Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group, and one hydroxyl (OH) group on each of the remaining carbon atoms. Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH)n(C=O)(CHOH)mH, where n + 1 + m = x; so that its elemental formula is CxH2xOx.
By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group. Monosaccharides are the simplest units of carbohydrates and the simplest form of sugar.
If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde. In that case, the compound is termed an aldose. Otherwise, the molecule has a ketone group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2.
The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose".
A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.) with the suffixes "-ose" for aldoses and "-ulose" for ketoses. In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH)4H is pentose, H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose.
Open-chain stereoisomers
Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have any of two configurations in space distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group.
For example, the triketose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon — the central one, number 2 — which is bonded to groups −H, −OH, −C(OH)H2, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2c, where c is the total number of chiral carbons.
The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons).
Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature.
While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH)2(CO)(CHOH)2H, and the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons.
Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name. For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern. These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose.
Generally, a monosaccharide with n asymmetrical carbons has 2^n stereoisomers. The number of open chain stereoisomers for an aldose monosaccharide is larger by one than that of a ketose monosaccharide of the same length. Every ketose will have 2^(n−3) stereoisomers where n > 2 is the number of carbons. Every aldose will have 2^(n−2) stereoisomers where n > 2 is the number of carbons.
These are also referred to as epimers which have the different arrangement of −OH and −H groups at the asymmetric or chiral carbon atoms (this does not apply to those carbons having the carbonyl functional group).
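The counting rules above can be sketched directly; the function names here are illustrative, and note these are upper bounds that ignore the symmetric (meso-type) cases discussed earlier, such as the 3-ketopentoses:

```python
def aldose_stereoisomers(n: int) -> int:
    """Open-chain aldose with n carbons: n - 2 chiral centres, so 2**(n-2) stereoisomers."""
    assert n > 2
    return 2 ** (n - 2)

def ketose_stereoisomers(n: int) -> int:
    """Open-chain 2-ketose with n carbons: n - 3 chiral centres, so 2**(n-3) stereoisomers."""
    assert n > 2
    return 2 ** (n - 3)

print(aldose_stereoisomers(6))  # 16, matching the 16 aldohexoses cited for glucose's family
print(ketose_stereoisomers(3))  # 1: dihydroxyacetone has no stereogenic centre
```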
Configuration of monosaccharides
Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through it, even in solution. The two stereoisomers are identified with the prefixes D- and L-, according to the sense of rotation: D-glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L-glyceraldehyde is levorotatory (rotates it counterclockwise).
The D- and L- prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −C(OH)H, and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D-glyceraldehyde's C2, then the isomer receives the D- prefix. Otherwise, it receives the L- prefix.
In the Fischer projection, the D- and L- prefixes specify the configuration at the carbon atom that is second from bottom: D- if the hydroxyl is on the right side, and L- if it is on the left side.
Note that the D- and L- prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount. See also the D/L system.
Cyclisation of monosaccharides
A monosaccharide often switches from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyls of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom. The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form.
In these cyclic forms, the ring usually has five or six atoms. These forms are called furanoses and pyranoses, respectively — by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde group on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a seven-atom ring (the same ring as oxepane), rarely encountered, are called septanoses.
For many monosaccharides (including glucose), the cyclic forms predominate, in the solid state and in solutions, and therefore the same name commonly is used for the open- and closed-chain isomers. Thus, for example, the term "glucose" may signify glucofuranose, glucopyranose, the open-chain form, or a mixture of the three.
Cyclization creates a new stereogenic center at the carbonyl-bearing carbon. The −OH group that replaces the carbonyl's oxygen may end up in two distinct positions relative to the ring's midplane. Thus each open-chain monosaccharide yields two cyclic isomers (anomers), denoted by the prefixes α- and β-. The molecule can change between these two forms by a process called mutarotation, that consists in a reversal of the ring-forming reaction followed by another ring formation.
Haworth projection
The stereochemical structure of a cyclic monosaccharide can be represented in a Haworth projection. In this diagram, the α-isomer for the pyranose form of a D-aldohexose has the −OH of the anomeric carbon below the plane of the carbon atoms, while the β-isomer has the −OH of the anomeric carbon above the plane. Pyranoses typically adopt a chair conformation, similar to that of cyclohexane. In this conformation, the α-isomer has the −OH of the anomeric carbon in an axial position, whereas the β-isomer has the −OH of the anomeric carbon in an equatorial position (considering D-aldohexose sugars).
Derivatives
A large number of biologically important modified monosaccharides exist:
Amino sugars such as:
galactosamine
glucosamine
sialic acid
N-acetylglucosamine
Sulfosugars such as:
sulfoquinovose
Others such as:
ascorbic acid
mannitol
glucuronic acid
See also
Monosaccharide nomenclature
Reducing sugar
Sugar acid
Sugar alcohol
Disaccharide
Notes
References
McMurry, John. Organic Chemistry. 7th ed. Belmont, CA: Thomson Brooks/Cole, 2008. Print.
External links
Nomenclature of Carbohydrates
Carbohydrate chemistry
Microscopium
Microscopium ("the Microscope") is a minor constellation in the southern celestial hemisphere, one of twelve created in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. The name is a Latinised form of the Greek word for microscope. Its stars are faint and hardly visible from most of the non-tropical Northern Hemisphere.
The constellation's brightest star is Gamma Microscopii of apparent magnitude 4.68, a yellow giant 2.5 times the Sun's mass located 223 ± 8 light-years distant. It passed between 1.14 and 3.45 light-years of the Sun some 3.9 million years ago, possibly disturbing the outer Solar System. Two star systems—WASP-7 and HD 205739—have been determined to have planets, while two others—the young red dwarf star AU Microscopii and the sunlike HD 202628—have debris disks. AU Microscopii and the binary red dwarf system AT Microscopii are probably a wide triple system and members of the Beta Pictoris moving group. Nicknamed "Speedy Mic", BO Microscopii is a star with an extremely fast rotation period of 9 hours, 7 minutes.
Characteristics
Microscopium is a small constellation bordered by Capricornus to the north, Piscis Austrinus and Grus to the east, Sagittarius to the west, and Indus to the south, touching on Telescopium to the southwest. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Mic". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −27.45° and −45.09°. The whole constellation is visible to observers south of latitude 45°N. Given that its brightest stars are of fifth magnitude, the constellation is invisible to the naked eye in areas with light polluted skies.
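The visibility limit quoted above follows directly from the constellation's southern boundary: a point at declination δ rises above the horizon for northern observers only up to geographic latitude 90° + δ. A short sketch of the arithmetic (variable names are illustrative):

```python
# Microscopium's southernmost boundary lies at declination -45.09 degrees.
# An object at declination d clears the horizon for northern observers
# only up to geographic latitude (90 + d) degrees.
southern_limit = -45.09            # degrees of declination
max_latitude = 90 + southern_limit  # northernmost latitude of full visibility
print(round(max_latitude, 2))       # about 44.91, i.e. roughly latitude 45 N
```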
Features
Stars
French astronomer Nicolas-Louis de Lacaille charted and designated ten stars with the Bayer designations Alpha through to Iota in 1756. A star in neighbouring Indus that Lacaille had labelled Nu Indi turned out to be in Microscopium, so American astronomer Benjamin Gould renamed it Nu Microscopii. Francis Baily considered Gamma and Epsilon Microscopii to belong to the neighbouring constellation Piscis Austrinus, but subsequent cartographers did not follow this. In his 1725 Catalogus Britannicus, John Flamsteed labelled the stars 1, 2, 3 and 4 Piscis Austrini, which became Gamma Microscopii, HR 8076, HR 8110 and Epsilon Microscopii respectively. Within the constellation's borders, there are 43 stars brighter than or equal to apparent magnitude 6.5.
Depicting the eyepiece of the microscope is Gamma Microscopii, which—at magnitude 4.68—is the brightest star in the constellation. Having spent much of its 620-million-year lifespan as a blue-white main sequence star, it has swollen and cooled to become a yellow giant of spectral type G6III, with a diameter ten times that of the Sun. Measurement of its parallax yields a distance of 223 ± 8 light-years from Earth. It likely passed between 1.14 and 3.45 light-years of the Sun some 3.9 million years ago; at around 2.5 times the mass of the Sun, it is possibly massive enough and close enough to have disturbed the Oort cloud. Alpha Microscopii is also an ageing yellow giant star of spectral type G7III with an apparent magnitude of 4.90. Located 400 ± 30 light-years away from Earth, it has swollen to 17.5 times the diameter of the Sun. Alpha has a 10th magnitude companion, visible in 7.5 cm telescopes, though this is a coincidental closeness rather than a true binary system. Epsilon Microscopii lies 166 ± 5 light-years away and is a white star of apparent magnitude 4.7 and spectral type A1V. Theta1 and Theta2 Microscopii make up a wide double whose components are splittable to the naked eye. Both are white A-class magnetic spectrum variable stars with strong metallic lines, similar to Cor Caroli. They mark the constellation's specimen slide.
Many notable objects are too faint to be seen with the naked eye. AX Microscopii, better known as Lacaille 8760, is a red dwarf which lies only 12.9 light-years from the Solar System. At magnitude 6.68, it is the brightest red dwarf in the sky. BO Microscopii is a rapidly rotating star that has 80% the diameter of the Sun. Nicknamed "Speedy Mic", it has a rotation period of 9 hours 7 minutes. An active star, it has prominent stellar flares that average 100 times stronger than those of the Sun, emitting energy mainly in the X-ray and ultraviolet bands of the spectrum. It lies 218 ± 4 light-years away from the Sun. AT Microscopii is a binary star system, both members of which are flare star red dwarfs. The system lies close to and may form a very wide triple system with AU Microscopii, a young star which appears to be a planetary system in the making with a debris disk. The three stars are candidate members of the Beta Pictoris moving group, one of the nearest associations of stars that share a common motion through space.
The Astronomical Society of Southern Africa in 2003 reported that observations of four of the Mira variables in Microscopium were very urgently needed as data on their light curves was incomplete. Two of them—R and S Microscopii—are challenging stars for novice amateur astronomers, and the other two, U and RY Microscopii, are more difficult still. Another red giant, T Microscopii, is a semiregular variable that ranges between magnitudes 7.7 and 9.6 over 344 days. Of apparent magnitude 11, DD Microscopii is a symbiotic star system composed of an orange giant of spectral type K2III and white dwarf in close orbit, with the smaller star ionizing the stellar wind of the larger star. The system has a low metallicity. Combined with its high galactic latitude, this indicates that the star system has its origin in the galactic halo of the Milky Way.
HD 205739 is a yellow-white main sequence star of spectral type F7V that is around 1.22 times as massive and 2.3 times as luminous as the Sun. It has a Jupiter-sized planet with an orbital period of 280 days that was discovered by the radial velocity method. WASP-7 is a star of spectral type F5V with an apparent magnitude of 9.54, about 1.28 times as massive as the Sun. Its hot Jupiter planet—WASP-7b—was discovered by transit method and found to orbit the star every 4.95 days. HD 202628 is a sunlike star of spectral type G2V with a debris disk that ranges from 158 to 220 AU distant. Its inner edge is sharply defined, indicating a probable planet orbiting between 86 and 158 AU from the star.
Deep sky objects
Describing Microscopium as "totally unremarkable", astronomer Patrick Moore concluded there was nothing of interest for amateur observers. NGC 6925 is a barred spiral galaxy of apparent magnitude 11.3 which is lens-shaped, as it lies almost edge-on to observers on Earth, 3.7 degrees west-northwest of Alpha Microscopii. SN 2011ei, a Type II Supernova in NGC 6925, was discovered by Stu Parker in New Zealand in July 2011. NGC 6923 lies nearby and is a magnitude fainter still. The Microscopium Void is a roughly rectangular region of relatively empty space, bounded by incomplete sheets of galaxies from other voids. The Microscopium Supercluster is an overdensity of galaxy clusters that was first noticed in the early 1990s. The component Abell clusters 3695 and 3696 are likely to be gravitationally bound, while the relations of Abell clusters 3693 and 3705 in the same field are unclear.
Meteor showers
The Microscopids are a minor meteor shower that appears from June to mid-July.
History
The stars that comprise Microscopium are in a region previously considered the hind feet of Sagittarius, a neighbouring constellation. John Ellard Gore wrote that al-Sufi seems to have reported that Ptolemy had seen the stars, but he (al-Sufi) did not pinpoint their positions. Microscopium itself was introduced in 1751–52 by Lacaille with the French name le Microscope, after he had observed and catalogued 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Commemorating the compound microscope, the Microscope's name was Latinised by Lacaille to Microscopium by 1763.
See also
Microscopium (Chinese astronomy)
Notes
References
Citations
Cited texts
External links
The Deep Photographic Guide to the Constellations: Microscopium
The clickable Microscopium
Southern constellations
Constellations listed by Lacaille
IC 342/Maffei Group

The IC 342/Maffei Group (also known as the IC 342 Group or the Maffei 1 Group) is the nearest group of galaxies to the Local Group. The group can be described as a binary group; the member galaxies are mostly concentrated around either IC 342 or Maffei 1, the two brightest galaxies within the group. The group is part of the Virgo Supercluster.
Members
The table below lists galaxies that have been identified as associated with the IC 342/Maffei Group by I. D. Karachentsev. Note that Karachentsev divides this group into two subgroups centered around IC 342 and Maffei 1.
Additionally, KKH 37 is listed as possibly being a member of the IC 342 Subgroup, and KKH 6 is listed as possibly being a member of the Maffei 1 Subgroup.
Foreground dust obscuration
As seen from Earth, the group lies near the plane of the Milky Way (a region sometimes called the Zone of Avoidance). Consequently, the light from many of the galaxies is severely affected by dust obscuration within the Milky Way. This complicates observational studies of the group, as uncertainties in the dust obscuration also affect measurements of the galaxies' luminosities and distances as well as other related quantities.
Moreover, the galaxies within the group have historically been difficult to identify. Many galaxies have only been discovered using late 20th century astronomical instrumentation. For example, while many fainter, more distant galaxies, such as the galaxies in the New General Catalogue, were already identified visually by the end of the nineteenth century, Maffei 1 and Maffei 2 were only discovered in 1968 using infrared photographic images of the region. Furthermore, it is difficult to determine whether some objects near IC 342 or Maffei 1 are galaxies associated with the IC 342/Maffei Group or diffuse foreground objects within the Milky Way that merely look like galaxies. For example, the objects MB 2 and Camelopardalis C were once thought to be dwarf galaxies in the IC 342/Maffei Group but are now known to be objects within the Milky Way.
Group formation and possible interactions with the Local Group
Since the IC 342/Maffei Group and the Local Group are located physically close to each other, the two groups may have influenced each other's evolution during the early stages of galaxy formation. An analysis of the velocities and distances to the IC 342/Maffei Group as measured by M. J. Valtonen and collaborators suggested that IC 342 and Maffei 1 were moving faster than what could be accounted for in the expansion of the universe. They therefore suggested that IC 342 and Maffei 1 were ejected from the Local Group after a violent gravitational interaction with the Andromeda Galaxy during the early stages of the formation of the two groups.
However, this interpretation is dependent on the distances measured to the galaxies in the group, which in turn is dependent on accurately measuring the degree to which interstellar dust in the Milky Way obscures the group. More recent observations have demonstrated that the dust obscuration may have been previously overestimated, so the distances may have been underestimated. If these new distance measurements are correct, then the galaxies in the IC 342/Maffei Group appear to be moving at the rate expected from the expansion of the universe, and the scenario of a collision between the IC 342/Maffei Group and the Local Group would be implausible.
References
External links
Virgo Supercluster
M81 Group

The M81 Group is a galaxy group in the constellations Ursa Major and Camelopardalis that includes the galaxies Messier 81 and Messier 82, as well as several other galaxies with high apparent brightnesses. The approximate center of the group is located at a distance of 3.6 Mpc, making it one of the nearest groups to the Local Group. The group is estimated to have a total mass of (1.03 ± 0.17).
The M81 Group, the Local Group, and other nearby groups all lie within the Virgo Supercluster (i.e. the Local Supercluster).
Members
The table below lists galaxies that have been identified as associated with the M81 Group by I. D. Karachentsev.
Note that the object names used in the above table differ from the names used by Karachentsev. NGC, IC, UGC, and PGC numbers have been used in many cases to allow for easier referencing.
Interactions within the group
Messier 81, Messier 82, and NGC 3077 are all strongly interacting with each other. Observations of the 21-centimeter hydrogen line indicate how the galaxies are connected.
The gravitational interactions have stripped some hydrogen gas away from all three galaxies, leading to the formation of filamentary gas structures within the group. Bridges of neutral hydrogen have been shown to connect M81 with M82 and NGC 3077. Moreover, the interactions have also caused some interstellar gas to fall into the centers of Messier 82 and NGC 3077, which has led to strong starburst activity (or the formation of many stars) within the centers of these two galaxies. Computer simulations of tidal interactions have been used to show how the current structure of the group could have been created.
Gallery
See also
UGC 5497
References
External links
M81 Group @ SEDS
M81 Group from An Atlas of The Universe
Virgo Supercluster
Galaxy clusters
Mensa

Mensa may refer to:
Mensa International, an organization for people with a high intelligence quotient (IQ)
Mensa (name), a name and list of people with the given name or surname
Mensa (constellation), a constellation in the southern sky
Mensa (ecclesiastical), a portion of church property that is appropriated to defray the expenses of either the prelate or the community that serves the church
Mensa (geology), an extraterrestrial area of raised land
Metre (poetry)

In poetry, metre (Commonwealth spelling) or meter (American spelling; see spelling differences) is the basic rhythmic structure of a verse or lines in verse. Many traditional verse forms prescribe a specific verse metre, or a certain set of metres alternating in a particular order. The study and the actual use of metres and forms of versification are both known as prosody. (Within linguistics, "prosody" is used in a more general sense that includes not only poetic metre but also the rhythmic aspects of prose, whether formal or informal, that vary from language to language, and sometimes between poetic traditions.)
Characteristics
An assortment of features can be identified when classifying poetry and its metre.
Qualitative versus quantitative metre
The metre of most poetry of the Western world and elsewhere is based on patterns of syllables of particular types. The familiar type of metre in English-language poetry is called qualitative metre, with stressed syllables coming at regular intervals (e.g. in iambic pentameters, usually every even-numbered syllable). Many Romance languages use a scheme that is somewhat similar but where the position of only one particular stressed syllable (e.g. the last) needs to be fixed. The metre of the old Germanic poetry of languages such as Old Norse and Old English was radically different, but was still based on stress patterns.
Some classical languages, in contrast, used a different scheme known as quantitative metre, where patterns were based on syllable weight rather than stress. In the dactylic hexameters of Classical Latin and Classical Greek, for example, each of the six feet making up the line was either a dactyl (long-short-short) or a spondee (long-long): a "long syllable" was literally one that took longer to pronounce than a short syllable: specifically, a syllable consisting of a long vowel or diphthong or followed by two consonants. The stress pattern of the words made no difference to the metre. A number of other ancient languages also used quantitative metre, such as Sanskrit, Persian and Classical Arabic (but not Biblical Hebrew).
Finally, non-stressed languages that have little or no differentiation of syllable length, such as French or Chinese, base their verses on the number of syllables only. The most common form in French is the alexandrin, with twelve syllables per verse, and in classical Chinese five characters, and thus five syllables. But since each Chinese character is pronounced using one syllable in a certain tone, classical Chinese poetry also had more strictly defined rules, such as thematic parallelism or tonal antithesis between lines.
Feet
In many Western classical poetic traditions, the metre of a verse can be described as a sequence of feet, each foot being a specific sequence of syllable types – such as relatively unstressed/stressed (the norm for English poetry) or long/short (as in most classical Latin and Greek poetry).
Iambic pentameter, a common metre in English poetry, is based on a sequence of five iambic feet or iambs, each consisting of a relatively unstressed syllable (here represented with "˘" above the syllable) followed by a relatively stressed one (here represented with "/" above the syllable) –
˘ / ˘ / ˘ / ˘ / ˘ /
So long as men can breathe, or eyes can see,
˘ / ˘ / ˘ / ˘ / ˘ /
So long lives this, and this gives life to thee.
This approach to analyzing and classifying metres originates from Ancient Greek tragedians and poets such as Homer, Pindar, Hesiod, and Sappho.
However, some metres have an overall rhythmic pattern to the line that cannot easily be described using feet. This occurs in Sanskrit poetry; see Vedic metre and Sanskrit metre. (Although this poetry is in fact specified using feet, each "foot" is more or less equivalent to an entire line.) It also occurs in some Western metres, such as the hendecasyllable favoured by Catullus and Martial, which can be described as:
x x — ∪ ∪ — ∪ — ∪ — —
(where "—" = long, "∪" = short, and "x x" can be realized as "— ∪" or "— —" or "∪ —")
Disyllables
Macron and breve notation: ¯ = stressed/long syllable, ˘ = unstressed/short syllable
Trisyllables
If the line has only one foot, it is called a monometer; two feet, dimeter; three is trimeter; four is tetrameter; five is pentameter; six is hexameter; seven is heptameter and eight is octameter. For example, if the feet are iambs, and if there are five feet to a line, then it is called an iambic pentameter. If the feet are primarily dactyls and there are six to a line, then it is a dactylic hexameter.
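The naming scheme just described is purely combinatorial: a foot type plus a count of feet per line. A minimal illustrative sketch (the function and table names are invented for the example):

```python
# Naming a metre from its foot type and the number of feet per line,
# following the monometer-through-octameter scheme described above.
LENGTH_NAMES = {1: "monometer", 2: "dimeter", 3: "trimeter", 4: "tetrameter",
                5: "pentameter", 6: "hexameter", 7: "heptameter", 8: "octameter"}

def metre_name(foot, feet_per_line):
    """Combine a foot adjective with the traditional line-length name."""
    return f"{foot} {LENGTH_NAMES[feet_per_line]}"

print(metre_name("iambic", 5))    # iambic pentameter
print(metre_name("dactylic", 6))  # dactylic hexameter
```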
Caesura
Sometimes a natural pause occurs in the middle of a line rather than at a line-break. This is a caesura (cut). A good example is from The Winter's Tale by William Shakespeare; the caesurae are indicated by '/':
It is for you we speak, / not for ourselves:
You are abused / and by some putter-on
That will be damn'd for't; / would I knew the villain,
I would land-damn him. / Be she honour-flaw'd,
I have three daughters; / the eldest is eleven
In Latin and Greek poetry, a caesura is a break within a foot caused by the end of a word.
Each line of traditional Germanic alliterative verse is divided into two half-lines by a caesura. This can be seen in Piers Plowman:
A fair feeld ful of folk / fond I ther bitwene—
Of alle manere of men / the meene and the riche,
Werchynge and wandrynge / as the world asketh.
Somme putten hem to the plough / pleiden ful selde,
In settynge and sowynge / swonken ful harde,
And wonnen that thise wastours / with glotonye destruyeth.
Enjambment
By contrast with caesura, enjambment is incomplete syntax at the end of a line; the meaning runs over from one poetic line to the next, without terminal punctuation. Also from Shakespeare's The Winter's Tale:
I am not prone to weeping, as our sex
Commonly are; the want of which vain dew
Perchance shall dry your pities; but I have
That honourable grief lodged here which burns
Worse than tears drown.
Metric variations
Poems with a well-defined overall metric pattern often have a few lines that violate that pattern. A common variation is the inversion of a foot, which turns an iamb ("da-DUM") into a trochee ("DUM-da"). A second variation is a headless verse, which lacks the first syllable of the first foot. A third variation is catalexis, where the end of a line is shortened by a foot, or two or part thereof – an example of this is at the end of each verse in Keats' "La Belle Dame sans Merci":
And on thy cheeks a fading rose (4 feet)
Fast withereth too (2 feet)
Modern English
Most English metre is classified according to the same system as Classical metre with an important difference. English is an accentual language, and therefore beats and offbeats (stressed and unstressed syllables) take the place of the long and short syllables of classical systems. In most English verse, the metre can be considered as a sort of back beat, against which natural speech rhythms vary expressively. The most common characteristic feet of English verse are the iamb in two syllables and the anapest in three. (See Foot (prosody) for a complete list of the metrical feet and their names.)
Metrical systems
The number of metrical systems in English is not agreed upon. The four major types are: accentual verse, accentual-syllabic verse, syllabic verse and quantitative verse. The alliterative verse of Old English could also be added to this list, or included as a special type of accentual verse. Accentual verse focuses on the number of stresses in a line, while ignoring the number of offbeats and syllables; accentual-syllabic verse focuses on regulating both the number of stresses and the total number of syllables in a line; syllabic verse only counts the number of syllables in a line; quantitative verse regulates the patterns of long and short syllables (this sort of verse is often considered alien to English). The use of foreign metres in English remains exceptional.
Frequently used metres
The most frequently encountered metre of English verse is the iambic pentameter, in which the metrical norm is five iambic feet per line, though metrical substitution is common and rhythmic variations are practically inexhaustible. John Milton's Paradise Lost, most sonnets, and much else besides in English are written in iambic pentameter. Lines of unrhymed iambic pentameter are commonly known as blank verse. Blank verse in the English language is most famously represented in the plays of William Shakespeare and the great works of Milton, though Tennyson (Ulysses, The Princess) and Wordsworth (The Prelude) also make notable use of it.
A rhymed pair of lines of iambic pentameter make a heroic couplet, a verse form which was used so often in the 18th century that it is now used mostly for humorous effect (although see Pale Fire for a non-trivial case). The most famous writers of heroic couplets are Dryden and Pope.
Another important metre in English is the ballad metre, also called the "common metre", which is a four-line stanza, with two pairs of a line of iambic tetrameter followed by a line of iambic trimeter; the rhymes usually fall on the lines of trimeter, although in many instances the tetrameter also rhymes. This is the metre of most of the Border and Scots or English ballads. In hymnody it is called the "common metre", as it is the most common of the named hymn metres used to pair many hymn lyrics with melodies, such as Amazing Grace:
Amazing Grace! how sweet the sound
That saved a wretch like me;
I once was lost, but now am found;
Was blind, but now I see.
Emily Dickinson is famous for her frequent use of ballad metre:
Great streets of silence led away
To neighborhoods of pause —
Here was no notice — no dissent —
No universe — no laws.
Other languages
Sanskrit
Versification in Classical Sanskrit poetry is of three kinds.
Syllabic () metres depend on the number of syllables in a verse, with relative freedom in the distribution of light and heavy syllables. This style is derived from older Vedic forms. An example is the Anuṣṭubh metre found in the great epics, the Mahabharata and the Ramayana, which has exactly eight syllables in each line, of which only some are specified as to length.
Syllabo-quantitative () metres depend on syllable count, but the light-heavy patterns are fixed. An example is the Mandākrāntā metre, in which each line has 17 syllables in a fixed pattern.
Quantitative () metres depend on duration, where each line has a fixed number of morae, grouped in feet with usually 4 morae in each foot. An example is the Arya metre, in which each verse has four lines of 12, 18, 12, and 15 morae respectively. In each 4-mora foot there can be two long syllables, four short syllables, or one long and two short in any order.
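The 4-mora foot rule for the Arya metre can be checked by enumeration: with a short syllable worth one mora and a long syllable two, exactly five fillings of a 4-mora foot exist, matching the three cases listed above (four shorts; one long and two shorts in any of three orders; two longs). A small sketch, with an illustrative function name:

```python
# Enumerate every way a foot of a given mora count can be filled with
# short syllables (1 mora) and long syllables (2 morae).
def foot_patterns(morae=4):
    """Return all tuples of syllable weights (1=short, 2=long) summing to `morae`."""
    if morae == 0:
        return [()]
    patterns = []
    for weight in (1, 2):
        if weight <= morae:
            for rest in foot_patterns(morae - weight):
                patterns.append((weight,) + rest)
    return patterns

for p in foot_patterns(4):
    print(p)  # (1,1,1,1), (1,1,2), (1,2,1), (2,1,1), (2,2)
```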
Standard traditional works on metre are Pingala's and Kedāra's . The most exhaustive compilations, such as the modern ones by Patwardhan and Velankar contain over 600 metres. This is a substantially larger repertoire than in any other metrical tradition.
Greek and Latin
The metrical "feet" in the classical languages were based on the length of time taken to pronounce each syllable, which were categorized according to their weight as either "long" syllables or "short" syllables (indicated as dum and di below). These are also called "heavy" and "light" syllables, respectively, to distinguish from long and short vowels. The foot is often compared to a musical measure and the long and short syllables to whole notes and half notes. In English poetry, feet are determined by emphasis rather than length, with stressed and unstressed syllables serving the same function as long and short syllables in classical metre.
The basic unit in Greek and Latin prosody is a mora, which is defined as a single short syllable. A long syllable is equivalent to two morae. A long syllable contains either a long vowel, a diphthong, or a short vowel followed by two or more consonants. Various rules of elision sometimes prevent a grammatical syllable from making a full syllable, and certain other lengthening and shortening rules (such as correption) can create long or short syllables in contexts where one would expect the opposite.
The most important Classical metre is the dactylic hexameter, the metre of Homer and Virgil. This form uses verses of six feet. The word dactyl comes from the Greek word daktylos meaning finger, since there is one long part followed by two short stretches. The first four feet are dactyls (daa-duh-duh), but can be spondees (daa-daa). The fifth foot is almost always a dactyl. The sixth foot is either a spondee or a trochee (daa-duh). The initial syllable of either foot is called the ictus, the basic "beat" of the verse. There is usually a caesura after the ictus of the third foot. The opening line of the Aeneid is a typical line of dactylic hexameter:
Armă vĭ | rumquĕ că | nō, Troi | ae quī | prīmŭs ăb | ōrīs
("I sing of arms and the man, who first from the shores of Troy...")
In this example, the first and second feet are dactyls; their first syllables, "Ar" and "rum" respectively, contain short vowels, but count as long because the vowels are both followed by two consonants. The third and fourth feet are spondees, the first of which is divided by the main caesura of the verse. The fifth foot is a dactyl, as is nearly always the case. The final foot is a spondee.
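The foot-level rules described above (feet one to four free between dactyl and spondee, a dactylic fifth foot, a spondaic or trochaic sixth) can be expressed as a simple validity check. This is a deliberately simplified sketch that treats the fifth foot as always a dactyl, ignoring the rare spondaic fifth foot; the function name is invented for the example:

```python
# Check a sequence of foot labels against the simplified hexameter rules:
# "D" = dactyl, "S" = spondee, "T" = trochee.
def is_hexameter(feet):
    if len(feet) != 6:
        return False
    return (all(f in ("D", "S") for f in feet[:4])  # feet 1-4: dactyl or spondee
            and feet[4] == "D"                      # foot 5: dactyl
            and feet[5] in ("S", "T"))              # foot 6: spondee or trochee

# The Aeneid's opening line scans dactyl, dactyl, spondee, spondee, dactyl, spondee:
print(is_hexameter(["D", "D", "S", "S", "D", "S"]))  # True
print(is_hexameter(["D", "D", "S", "S", "S", "S"]))  # False: fifth foot not a dactyl
```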
The dactylic hexameter was imitated in English by Henry Wadsworth Longfellow in his poem Evangeline:
This is the forest primeval. The murmuring pines and the hemlocks,
Bearded with moss, and in garments green, indistinct in the twilight,
Stand like Druids of old, with voices sad and prophetic,
Stand like harpers hoar, with beards that rest on their bosoms.
Notice how the first line:
This is the | for-est pri | me-val. The | mur-muring | pines and the | hem-locks
Follows this pattern:
dum diddy | dum diddy | dum diddy | dum diddy | dum diddy | dum dum
Also important in Greek and Latin poetry is the dactylic pentameter. This was a line of verse, made up of two equal parts, each of which contains two dactyls followed by a long syllable, which counts as a half foot. In this way, the number of feet amounts to five in total. Spondees can take the place of the dactyls in the first half, but never in the second. The long syllable at the close of the first half of the verse always ends a word, giving rise to a caesura.
Dactylic pentameter is never used in isolation. Rather, a line of dactylic pentameter follows a line of dactylic hexameter in the elegiac distich or elegiac couplet, a form of verse that was used for the composition of elegies and other tragic and solemn verse in the Greek and Latin world, as well as love poetry that was sometimes light and cheerful. An example from Ovid's Tristia:
Vergĭlĭ | um vī | dī tan | tum, nĕc ă | māră Tĭ | bullō
Tempŭs ă | mīcĭtĭ | ae || fātă dĕ | dērĕ mĕ | ae.
("Virgil I merely saw, and the harsh Fates gave Tibullus no time for my friendship.")
The Greeks and Romans also used a number of lyric metres, which were typically used for shorter poems than elegiacs or hexameter. In Aeolic verse, one important line was called the hendecasyllabic, a line of eleven syllables. This metre was used most often in the Sapphic stanza, named after the Greek poet Sappho, who wrote many of her poems in the form. A hendecasyllabic is a line with a never-varying structure: two trochees, followed by a dactyl, then two more trochees. In the Sapphic stanza, three hendecasyllabics are followed by an "Adonic" line, made up of a dactyl and a trochee. This is the form of Catullus 51 (itself an homage to Sappho 31):
Illĕ mī pār essĕ dĕō vĭdētur;
illĕ, sī fās est, sŭpĕrārĕ dīvōs,
quī sĕdēns adversŭs ĭdentĭdem tē
spectăt ĕt audit
("He seems to me to be like a god; if it is permitted, he seems above the gods, who sitting across from you gazes at you and hears you again and again.")
The Sapphic stanza was imitated in English by Algernon Charles Swinburne in a poem he simply called Sapphics:
Saw the white implacable Aphrodite,
Saw the hair unbound and the feet unsandalled
Shine as fire of sunset on western waters;
Saw the reluctant...
Classical Arabic
The metrical system of Classical Arabic poetry, like those of classical Greek and Latin, is based on the weight of syllables classified as either "long" or "short". The basic principles of Arabic poetic metre Arūḍ or Arud ( ) Science of Poetry ( ), were put forward by Al-Farahidi (718–786 CE), who did so after noticing that poems consisted of repeated syllables in each verse. In his first book, Al-Ard ( ), he described 15 types of verse. Al-Akhfash described one extra, the 16th.
A short syllable contains a short vowel with no following consonants. For example, the word kataba, which syllabifies as ka-ta-ba, contains three short vowels and is made up of three short syllables. A long syllable contains either a long vowel or a short vowel followed by a consonant as is the case in the word maktūbun which syllabifies as mak-tū-bun. These are the only syllable types possible in Classical Arabic phonology which, by and large, does not allow a syllable to end in more than one consonant or a consonant to occur in the same syllable after a long vowel. In other words, syllables of the type -āk- or -akr- are not found in classical Arabic.
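Given a word already split into syllables, the weight rules above (CV is short; CVV or CVC is long) reduce to a small classifier. A toy sketch for romanized input; the vowel inventories and the function name are assumptions of the example, not a full treatment of Arabic phonology:

```python
# Classify the weight of a pre-syllabified, romanized Classical Arabic syllable.
SHORT_VOWELS = "aiu"              # short vowels in the romanization used above
LONG_VOWELS = ("ā", "ī", "ū")     # long vowels carry a macron

def syllable_weight(syl):
    """Return 'long' for CVV or CVC syllables, 'short' for CV syllables."""
    if any(v in syl for v in LONG_VOWELS):
        return "long"                          # CVV: contains a long vowel
    # CV if the syllable ends in its vowel; CVC if a consonant coda follows
    return "short" if syl[-1] in SHORT_VOWELS else "long"

print([syllable_weight(s) for s in ["ka", "ta", "ba"]])    # kataba: all short
print([syllable_weight(s) for s in ["mak", "tū", "bun"]])  # maktūbun: all long
```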
Each verse consists of a certain number of metrical feet (tafāʿīl or ʾaǧzāʾ) and a certain combination of possible feet constitutes a metre (baḥr).
The traditional Arabic practice for writing out a poem's metre is to use a concatenation of various derivations of the verbal root F-ʿ-L (فعل). Thus, the following hemistich
قفا نبك من ذكرى حبيبٍ ومنزلِ
Would be traditionally scanned as:
فعولن مفاعيلن فعولن مفاعلن
That is, Romanized and with traditional Western scansion:
Western: ⏑ – – ⏑ – – – ⏑ – – ⏑ – ⏑ –
Verse: Qifā nabki min ḏikrā ḥabībin wa-manzili
Mnemonic: fa`ūlun mafā`īlun fa`ūlun mafā`ilun
Al-Kʰalīl b. ˀAḫmad al-Farāhīdī's contribution to the study of Arabic prosody is undeniably significant: he was the first scholar to subject Arabic poetry to a meticulous, painstaking metrical analysis. Unfortunately, he fell short of producing a coherent theory; instead, he was content to merely gather, classify, and categorize the primary data—a first step which, though insufficient, represents no mean accomplishment. Therefore, al-Kʰalīl has left a formulation of utmost complexity and difficulty which requires immense effort to master; even the accomplished scholar cannot utilize and apply it with ease and total confidence. Dr. ˀIbrāhīm ˀAnīs, one of the most distinguished and celebrated pillars of Arabic literature and the Arabic language in the 20th century, states the issue clearly in his book Mūsīqā al-Sʰiˁr:
“I am aware of no [other] branch of Arabic studies which embodies as many [technical] terms as does [al-Kʰalīl’s] prosody, few and distinct as the meters are: al-Kʰalīl’s disciples employed a large number of infrequent items, assigning to those items certain technical denotations which—invariably—require definition and explanation. …. As to the rules of metric variation, they are numerous to the extent that they defy memory and impose a taxing course of study. …. In learning them, a student faces severe hardship which obscures all connection with an artistic genre—indeed, the most artistic of all—namely, poetry. ………. It is in this fashion that [various] authors dealt with the subject under discussion over a period of eleven centuries: none of them attempted to introduce a new approach or to simplify the rules. ………. Is it not time for a new, simple presentation which avoids contrivance, displays close affinity to [the art of] poetry, and perhaps renders the science of prosody palatable as well as manageable?”
In the 20th and the 21st centuries, numerous scholars have endeavored to supplement al-Kʰalīl's contribution.
The Arabic metres
Classical Arabic has sixteen established metres. Though each of them allows for a certain amount of variation, their basic patterns are as follows, using:
"–" for 1 long syllable
"⏑" for 1 short syllable
"x" for a position that can contain 1 long or 1 short
"o" for a position that can contain 1 long or 2 shorts
"S" for a position that can contain 1 long, 2 shorts, or 1 long + 1 short
Classical Persian
The terminology for the metrical system used in classical and classical-style Persian poetry is the same as that of Classical Arabic, even though the two systems are quite different in both origin and structure. This has led to serious confusion among prosodists, both ancient and modern, as to the true source and nature of the Persian metres, the most obvious error being the assumption that they were copied from Arabic.
Persian poetry is quantitative, and its metrical patterns are made of long and short syllables, much as in Classical Greek, Latin and Arabic. Anceps positions in the line, however (that is, places where either a long or short syllable can be used, marked "x" in the schemes below), are not found in Persian verse except in some metres at the beginning of a line.
Persian poetry is written in couplets, with each half-line (hemistich) being 10-14 syllables long. Except in the ruba'i (quatrain), where either of two very similar metres may be used, the same metre is used for every line in the poem. Rhyme is always used, sometimes with double rhyme or internal rhymes in addition. In some poems, known as masnavi, the two halves of each couplet rhyme, with a scheme aa, bb, cc and so on. In lyric poetry, the same rhyme is used throughout the poem at the end of each couplet, but except in the opening couplet, the two halves of each couplet do not rhyme; hence the scheme is aa, ba, ca, da. A ruba'i (quatrain) also usually has the rhyme aa, ba.
A particular feature of classical Persian prosody, not found in Latin, Greek or Arabic, is that instead of two lengths of syllables (long and short), there are three lengths (short, long, and overlong). Overlong syllables can be used anywhere in the line in place of a long + a short, or in the final position in a line or half line. When a metre has a pair of short syllables (⏑ ⏑), it is common for a long syllable to be substituted, especially at the end of a line or half-line.
About 30 different metres are commonly used in Persian. 70% of lyric poems are written in one of the following seven metres:
⏑ – ⏑ – ⏑ ⏑ – – ⏑ – ⏑ – ⏑ ⏑ –
– – ⏑ – ⏑ – ⏑ ⏑ – – ⏑ – ⏑ –
– ⏑ – – – ⏑ – – – ⏑ – – – ⏑ –
x ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ –
x ⏑ – – ⏑ – ⏑ – ⏑ ⏑ –
⏑ – – – ⏑ – – – ⏑ – – – ⏑ – – –
– – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ – –
Masnavi poems (that is, long poems in rhyming couplets) are always written in one of the shorter metres of 10 or 11 syllables (traditionally seven in number), such as the following:
⏑ – – ⏑ – – ⏑ – – ⏑ – (e.g. Ferdowsi's Shahnameh)
⏑ – – – ⏑ – – – ⏑ – – (e.g. Gorgani's Vis o Ramin)
– ⏑ – – – ⏑ – – – ⏑ – (e.g. Rumi's Masnavi-e Ma'navi)
– – ⏑ ⏑ – ⏑ – ⏑ – – (e.g. Nezami's Leyli o Majnun)
The two metres used for ruba'iyat (quatrains), which are only used for this, are the following, of which the second is a variant of the first:
– – ⏑ ⏑ – – ⏑ ⏑ – – ⏑ ⏑ –
– – ⏑ ⏑ – ⏑ – ⏑ – – ⏑ ⏑ –
Classical Chinese
Classical Chinese poetic metre may be divided into fixed and variable length line types, although the actual scansion is complicated by various factors, including linguistic changes and variations encountered in a tradition extending over a geographically extensive area for a continuous period of more than two and a half millennia. Beginning with the earliest recorded forms: the Classic of Poetry tends toward couplets of four-character lines, grouped in rhymed quatrains; and the Chuci follows this to some extent, but moves toward variations in line length. Han Dynasty poetry tended towards the variable line-length forms of the folk ballads and the Music Bureau yuefu. Jian'an poetry, Six Dynasties poetry, and Tang Dynasty poetry tend towards metres based on fixed-length lines of five, seven, or (more rarely) six characters/verbal units, generally in couplet/quatrain-based forms of various total verse lengths. Song poetry is especially known for its use of the ci, whose variable line lengths follow the specific pattern of a certain musical song's lyrics; ci are thus sometimes referred to as "fixed-rhythm" forms. Yuan poetry metres continued this practice with the qu forms, similarly fixed-rhythm forms based on now obscure or perhaps completely lost original examples (or ur-types). Classical Chinese poetry never lost the use of the shi forms, whose metrical patterns are found in "old style poetry" (gushi) and in the regulated verse forms (lüshi or jintishi). The regulated verse forms also prescribed patterns based upon linguistic tonality. The use of caesura is important in the metrical analysis of Classical Chinese poetry forms.
Old English
The metric system of Old English poetry was different from that of modern English, and related more to the verse forms of most of the older Germanic languages such as Old Norse. It used alliterative verse, a metrical pattern involving varied numbers of syllables but a fixed number (usually four) of strong stresses in each line. The unstressed syllables were relatively unimportant, but the caesurae (breaks between the half-lines) played a major role in Old English poetry.
In place of using feet, alliterative verse divided each line into two half-lines. Each half-line had to follow one of five or so patterns, each of which defined a sequence of stressed and unstressed syllables, typically with two stressed syllables per half line. Unlike typical Western poetry, however, the number of unstressed syllables could vary somewhat. For example, the common pattern "DUM-da-DUM-da" could allow between one and five unstressed syllables between the two stresses.
The following is a famous example, taken from The Battle of Maldon, a poem written shortly after the date of that battle (AD 991):
Hige sceal þe heardra, || heorte þe cēnre,
mōd sceal þe māre, || swā ūre mægen lȳtlað
("Will must be the harder, courage the bolder,
spirit must be the more, as our might lessens.")
In the quoted section, the stressed syllables have been underlined. (Normally, the stressed syllable must be long if followed by another syllable in a word. However, by a rule known as syllable resolution, two short syllables in a single word are considered equal to a single long syllable. Hence, sometimes two syllables have been underlined, as in hige and mægen.) The German philologist Eduard Sievers (died 1932) identified five different patterns of half-line in Anglo-Saxon alliterative poetry. The first three half-lines have the type A pattern "DUM-da-(da-)DUM-da", while the last one has the type C pattern "da-(da-da-)DUM-DUM-da", with parentheses indicating optional unstressed syllables that have been inserted. Note also the pervasive pattern of alliteration, where the first and/or second stressed syllables alliterate with the third, but not with the fourth.
French
In French poetry, metre is determined solely by the number of syllables in a line. A silent 'e' counts as a syllable before a consonant, but is elided before a vowel (where h aspiré counts as a consonant). At the end of a line, the "e" remains unelided but is hypermetrical (outside the count of syllables, like a feminine ending in English verse); in that case, the rhyme is also called "feminine", whereas it is called "masculine" in the other cases.
The most frequently encountered metre in Classical French poetry is the alexandrine, composed of two hemistiches of six syllables each. Two famous alexandrines are
La fille de Minos et de Pasiphaë
(Jean Racine)
(the daughter of Minos and of Pasiphaë), and
Waterloo ! Waterloo ! Waterloo ! Morne plaine!
(Victor Hugo)
(Waterloo! Waterloo! Waterloo! Gloomy plain!)
Classical French poetry also had a complex set of rules for rhymes that go beyond how words merely sound. These are usually taken into account when describing the metre of a poem.
Spanish
In Spanish poetry the metre is determined by the number of syllables in the verse. Still, it is the phonetic accent of the last word in the verse that decides the final count of the line. If the accent of the final word falls on the last syllable, then the poetic rule states that one syllable is added to the actual count of syllables in the line, giving a higher number of poetic syllables than grammatical syllables. If the accent falls on the second-to-last syllable of the last word, then the final count of poetic syllables is the same as the grammatical count. If the accent falls on the third-to-last syllable, then one syllable is subtracted from the actual count, giving fewer poetic syllables than grammatical syllables.
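The adjustment just described is simple arithmetic on the grammatical syllable count. The following Python sketch makes it explicit; the function name and the stress-type labels (oxytone for final-syllable stress, paroxytone for second-to-last, proparoxytone for third-to-last) are illustrative assumptions.

```python
def poetic_syllables(grammatical_count: int, final_stress: str) -> int:
    """Apply the Spanish counting rule for the final word of a verse.

    oxytone       (stress on last syllable)          -> add one
    paroxytone    (stress on second-to-last syllable) -> no change
    proparoxytone (stress on third-to-last syllable)  -> subtract one
    """
    adjustment = {"oxytone": +1, "paroxytone": 0, "proparoxytone": -1}
    return grammatical_count + adjustment[final_stress]

# A seven-syllable line ending in an oxytone word counts as eight:
print(poetic_syllables(7, "oxytone"))        # 8
# A nine-syllable line ending in a proparoxytone word also counts as eight:
print(poetic_syllables(9, "proparoxytone"))  # 8
```

The two example calls show how lines of different grammatical lengths can have the same poetic length, which is what makes the rule matter for classifying verses.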
Spanish poetry uses poetic licenses, unique to Romance languages, to change the number of syllables by manipulating mainly the vowels in the line.
Regarding these poetic licenses, one must consider three kinds of phenomena: (1) syneresis, (2) dieresis and (3) hiatus.
There are many types of licenses, used either to add or subtract syllables, which may be applied when needed after taking into consideration the poetic rules of the last word. Yet all have in common that they manipulate only vowels that are close to each other and not interrupted by consonants.
Some common metres in Spanish verse are:
Septenary: A line with seven poetic syllables
Octosyllable: A line with eight poetic syllables. This metre is commonly used in romances, narrative poems similar to English ballads, and in most proverbs.
Hendecasyllable: A line with eleven poetic syllables. This metre plays a similar role to pentameter in English verse. It is commonly used in sonnets, among other things.
Alexandrine: A line consisting of fourteen syllables, commonly separated into two hemistichs of seven syllables each (In most languages, this term denotes a line of twelve or sometimes thirteen syllables, but not in Spanish).
Italian
In Italian poetry, metre is determined solely by the position of the last accent in a line, though the position of the other accents is nevertheless important for verse equilibrium. Syllables are enumerated with respect to a verse which ends with a paroxytone, so that a Septenary (having seven syllables) is defined as a verse whose last accent falls on the sixth syllable: it may thus contain eight syllables (Ei fu. Siccome immobile) or just six (la terra al nunzio sta). Moreover, when a word ends with a vowel and the next one starts with a vowel, they are considered to be in the same syllable (synalepha): so Gli anni e i giorni consists of only four syllables ("Gli an" "ni e i" "gior" "ni"). Even-syllabic verses have a fixed stress pattern. Because of the mostly trochaic nature of the Italian language, verses with an even number of syllables are far easier to compose, and the Novenary is usually regarded as the most difficult verse.
Some common metres in Italian verse are:
Sexenary: A line whose last stressed syllable is on the fifth, with a fixed stress on the second one as well (Al Re Travicello / Piovuto ai ranocchi, Giusti)
Septenary: A line whose last stressed syllable is the sixth one.
Octosyllable: A line whose last accent falls on the seventh syllable. More often than not, the secondary accents fall on the first, third and fifth syllable, especially in nursery rhymes for which this metre is particularly well-suited.
Hendecasyllable: A line whose last accent falls on the tenth syllable. It therefore usually consists of eleven syllables; there are various kinds of possible accentuations. It is used in sonnets, in ottava rima, and in many other types of poetry. The Divine Comedy, in particular, is composed entirely of hendecasyllables, whose main stress pattern is on the 4th and 10th syllable.
Turkish
Apart from Ottoman poetry, which was heavily influenced by Persian traditions and created a unique Ottoman style, traditional Turkish poetry features a system in which the number of syllables in each verse must be the same, most frequently 7, 8, 11, or 14 syllables. These verses are then divided into syllable groups depending on the total number of syllables in the verse: 4+3 for 7 syllables; 4+4 or 5+3 for 8 syllables; 4+4+3 or 6+5 for 11 syllables. The end of each group in a verse is called a durak ("stop"), and must coincide with the last syllable of a word.
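The durak constraint can be checked mechanically: every group boundary of the chosen pattern must land on the end of a word. The sketch below assumes syllable counts per word are supplied directly; the helper names and input convention are illustrative, not part of any established tool.

```python
def durak_positions(pattern):
    """Cumulative stop positions of a pattern, e.g. (6, 5) -> {6, 11}."""
    positions, total = set(), 0
    for group in pattern:
        total += group
        positions.add(total)
    return positions

def fits_pattern(word_syllables, pattern):
    """True if a line's word boundaries satisfy the given durak pattern.

    word_syllables: syllable count of each word in the line, in order.
    """
    boundaries, total = set(), 0
    for n in word_syllables:
        total += n
        boundaries.add(total)
    # Every stop must coincide with the end of a word, and the line
    # must have the right total number of syllables.
    return durak_positions(pattern) <= boundaries and total == sum(pattern)

# An 11-syllable line whose words break 3+3 | 2+3 fits the 6+5 pattern:
print(fits_pattern([3, 3, 2, 3], (6, 5)))  # True
# Here no word ends on the sixth syllable, so the pattern is violated:
print(fits_pattern([4, 3, 4], (6, 5)))     # False
```

The subset test (`<=`) is what encodes the rule that a stop may only fall where a word ends; extra word boundaries between stops are allowed.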
The following example is by Faruk Nafiz Çamlıbel (died 1973), one of the most devoted users of traditional Turkish metre:
In this poem the 6+5 metre is used, so that there is a word-break (durak = "stop" or caesura) after the sixth syllable of every line, as well as at the end of each line.
Ottoman Turkish
In the Ottoman Turkish language, the structures of the poetic foot (تفعل tef'ile) and of poetic metre (وزن vezin) were imitated from Persian poetry. About twelve of the most common Persian metres were used for writing Turkish poetry. As was the case with Persian, no use at all was made of the commonest metres of Arabic poetry (the tawīl, basīt, kāmil, and wāfir). However, the terminology used to describe the metres was indirectly borrowed from the Arabic poetic tradition through the medium of the Persian language.
As a result, Ottoman poetry, also known as Dîvân poetry, was generally written in quantitative, mora-timed metre. The moras, or syllables, are divided into three basic types:
Open, or light, syllables (açık hece) consist of either a short vowel alone, or a consonant followed by a short vowel.
Examples: a-dam ("man"); zir-ve ("summit, peak")
Closed, or heavy, syllables (kapalı hece) consist of either a long vowel alone, a consonant followed by a long vowel, or a short vowel followed by a consonant.
Examples: Â-dem ("Adam"); kâ-fir ("non-Muslim"); at ("horse")
Lengthened, or superheavy, syllables (meddli hece) count as one closed plus one open syllable and consist of a vowel followed by a consonant cluster, or a long vowel followed by a consonant.
Examples: kürk ("fur"); âb ("water")
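The three syllable types above can be distinguished from just two features: whether the vowel is long and how many consonants close the syllable. The following Python sketch encodes that classification; the function name and input encoding are assumptions for illustration.

```python
def hece_type(vowel_long: bool, final_consonants: int) -> str:
    """Classify an Ottoman Turkish syllable by the rules above.

    meddli (superheavy): consonant cluster after the vowel, or a long
                         vowel followed by a consonant (kürk, âb)
    kapalı (heavy)     : long vowel alone or with onset, or a short
                         vowel closed by one consonant (Â-, kâ-, at)
    açık  (light)      : short vowel with no following consonant (a-)
    """
    if final_consonants >= 2 or (vowel_long and final_consonants >= 1):
        return "meddli"
    if vowel_long or final_consonants == 1:
        return "kapalı"
    return "açık"

print(hece_type(False, 0))  # açık   (the 'a' of a-dam)
print(hece_type(True, 0))   # kapalı (the 'Â' of Â-dem)
print(hece_type(True, 1))   # meddli (âb)
print(hece_type(False, 2))  # meddli (kürk)
```

Since a meddli syllable counts as one closed plus one open syllable, a scansion routine built on this classifier would emit "– ." for it rather than a single symbol.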
In writing out a poem's poetic metre, open syllables are symbolized by "." and closed syllables are symbolized by "–". From the different syllable types, a total of sixteen different types of poetic foot—the majority of which are either three or four syllables in length—are constructed, which are named and scanned as follows:
These individual poetic feet are then combined in a number of different ways, most often with four feet per line, so as to give the poetic metre for a line of verse. Some of the most commonly used metres are the following:
me fâ’ î lün / me fâ’ î lün / me fâ’ î lün / me fâ’ î lün
. – – – / . – – – / . – – – / . – – –
—Bâkî (1526–1600)
me fâ i lün / fe i lâ tün / me fâ i lün / fe i lün
. – . – / . . – – / . – . – / . . –
—Şeyh Gâlib (1757–1799)
fâ i lâ tün / fâ i lâ tün / fâ i lâ tün / fâ i lün
– . – – / – . – – / – . – – / – . –
—Nedîm (1681?–1730)
fe i lâ tün / fe i lâ tün / fe i lâ tün / fe i lün
. . – – / . . – – / . . – – / . . –
—Fuzûlî (1483?–1556)
mef’ û lü / me fâ î lü / me fâ î lü / fâ û lün
– – . / . – – . / . – – . / – – .
—Neşâtî (?–1674)
Portuguese
Portuguese poetry uses a syllabic metre in which the verse is classified according to the last stressed syllable. The Portuguese system is quite similar to those of Spanish and Italian, as they are closely related languages. The most commonly used verses are:
Redondilha menor: composed of 5 syllables.
Redondilha maior: composed of 7 syllables.
Decasyllable (decassílabo): composed of 10 syllables. Mostly used in Parnassian sonnets. It is equivalent to the Italian hendecasyllable.
Heroic (heróico): stresses on the sixth and tenth syllables.
Sapphic (sáfico): stresses on the fourth, eighth and tenth syllables.
Martelo: stresses on the third, sixth and tenth syllables.
Gaita galega or moinheira: stresses on the fourth, seventh and tenth syllables.
Dodecasyllable (dodecassílabo): composed of 12 syllables.
Alexandrine (alexandrino): divided into two hemistiches, the sixth and the twelfth syllables are stressed.
Barbarian (bárbaro): composed of 13 or more syllables.
Lucasian (lucasiano): composed of 16 syllables, divided into two hemistiches of 8 syllables each.
Welsh
There is a continuing tradition of strict metre poetry in the Welsh language that can be traced back to at least the sixth century. At the annual National Eisteddfod of Wales a bardic chair is awarded to the best awdl, a long poem that follows the conventions of cynghanedd regarding stress, alliteration and rhyme.
History
Metrical texts are first attested in early Indo-European languages. The earliest known unambiguously metrical texts, and at the same time the only metrical texts with a claim of dating to the Late Bronze Age, are the hymns of the Rigveda. That the texts of the Ancient Near East (Sumerian, Egyptian or Semitic) should not exhibit metre is surprising, and may be partly due to the nature of Bronze Age writing. There were, in fact, attempts to reconstruct metrical qualities of the poetic portions of the Hebrew Bible, e.g. by Gustav Bickell or Julius Ley, but they remained inconclusive (see Biblical poetry). Early Iron Age metrical poetry is found in the Iranian Avesta and in the Greek works attributed to Homer and Hesiod.
Latin verse survives from the Old Latin period (c. 2nd century BC), in the Saturnian metre. Persian poetry arises in the Sassanid era. Tamil poetry of the early centuries AD may be the earliest known non-Indo-European metrical poetry.
Medieval poetry was metrical without exception, spanning traditions as diverse as European Minnesang, Trouvère or Bardic poetry, Classical Persian and Sanskrit poetry, Tang dynasty Chinese poetry or the Japanese Nara period Man'yōshū. Renaissance and Early Modern poetry in Europe is characterized by a return to templates of Classical Antiquity, a tradition begun by Petrarca's generation and continued into the time of Shakespeare and Milton.
Dissent
Not all poets accept the idea that metre is a fundamental part of poetry. 20th-century American poets Marianne Moore, William Carlos Williams and Robinson Jeffers believed that metre was an artificial construct imposed upon poetry rather than being innate to poetry. In an essay titled "Robinson Jeffers, & The Metric Fallacy" Dan Schneider echoes Jeffers' sentiments: "What if someone actually said to you that all music was composed of just 2 notes? Or if someone claimed that there were just 2 colors in creation? Now, ponder if such a thing were true. Imagine the clunkiness & mechanicality of such music. Think of the visual arts devoid of not just color, but sepia tones, & even shades of gray." Jeffers called his technique "rolling stresses".
Moore went further than Jeffers, openly declaring her poetry was written in syllabic form, and wholly denying metre. These syllabic lines from her famous poem "Poetry" illustrate her contempt for metre and other poetic tools. Even the syllabic pattern of this poem does not remain perfectly consistent:
nor is it valid
to discriminate against "business documents and
school-books": all these phenomena are important. One must make a distinction
however: when dragged into prominence by half poets, the result is not poetry
Williams tried to form poetry whose subject matter was centered on the lives of common people. He came up with the concept of the variable foot. Williams spurned traditional metre in most of his poems, preferring what he called "colloquial idioms." Another poet who turned his back on traditional concepts of metre was Britain's Gerard Manley Hopkins. Hopkins' major innovation was what he called sprung rhythm. He claimed most poetry was written in this older rhythmic structure inherited from the Norman side of the English literary heritage, based on repeating groups of two or three syllables, with the stressed syllable falling in the same place on each repetition. Sprung rhythm is structured around feet with a variable number of syllables, generally between one and four syllables per foot, with the stress always falling on the first syllable in a foot.
See also
Anisometric verse
Foot (prosody)
Generative metrics
Line (poetry)
List of classical metres
Metre (hymn)
Metre (music)
Scansion
References
Citations
Sources
Abdel-Malek, Zaki N. (2019), Towards a New Theory of Arabic Prosody, 5th edition (Revised), Posed online with free access.
metre (poetry)
Phonology |
19932 | https://en.wikipedia.org/wiki/Majed%20Moqed | Majed Moqed | Majed Mashaan Ghanem Moqed (also transliterated as Moqued; June 18, 1977 – September 11, 2001) was one of five terrorist hijackers of American Airlines Flight 77 as part of the September 11 attacks.
A Saudi, Moqed was studying law at a university in Saudi Arabia before joining Al-Qaeda in 1999 and being chosen to participate in the 9/11 attacks. He arrived in the United States in May 2001 and helped with the planning of how the attacks would be carried out.
On September 11, 2001, Moqed boarded American Airlines Flight 77 and assisted in the hijacking of the plane so that it could be crashed into the Pentagon.
Early life and activities
Moqed was a law student from the small town of Al-Nakhil, Saudi Arabia (west of Medina), studying at King Fahd University's Faculty of Administration and Economics. Before he dropped out, he was apparently recruited into al-Qaeda in 1999 along with friend Satam al-Suqami, with whom he had earlier shared a college room.
The two trained at Khalden, a large training facility near Kabul that was run by Ibn al-Shaykh al-Libi. A friend in Saudi Arabia claimed he was last seen there in 2000, before leaving to study English in the United States. In November 2000, Moqed and Suqami flew into Iran from Bahrain together.
Some time late in 2000, Moqed traveled to the United Arab Emirates, where he purchased traveler's cheques presumed to have been paid for by 9/11 financier Mustafa Ahmed al-Hawsawi. Five other hijackers also passed through the UAE and purchased travellers cheques, including Wail al-Shehri, Saeed al-Ghamdi, Hamza al-Ghamdi, Ahmed al-Haznawi and Ahmed al-Nami.
Known as al-Ahlaf during the preparations, Moqed then moved in with hijackers Salem al-Hazmi, Abdulaziz al-Omari and Khalid al-Mihdhar in an apartment in Paterson, New Jersey.
2001
According to the FBI, Moqed first arrived in the United States on May 2, 2001.
In March 2001 (a date that conflicts with the FBI's reported arrival date of May 2001), Moqed, Hani Hanjour, Hazmi and Ahmed al-Ghamdi rented a minivan and travelled to Fairfield, Connecticut. There they met a contact in the parking lot of a local convenience store who provided them with false IDs. (This was possibly Eyad Alrababah, a Jordanian charged with document fraud.)
Moqed was one of the five hijackers who asked for a state identity card on August 2, 2001. On August 24, both Mihdhar and Moqed tried to purchase flight tickets from the American Airlines online ticket-merchant, but had technical difficulties resolving their address and gave up.
Employees at Advance Travel Service in Totowa, New Jersey later claimed that Moqed and Hanjour had both purchased tickets there. They claimed that Hani Hanjour spoke very little English, and Moqed did most of the speaking. Hanjour requested a seat in the front row of the airplane. Their credit card failed to authorize, and after being told the agency did not accept personal cheques, the pair left to withdraw cash. They returned shortly afterwards and paid $1842.25 total in cash.
During this time, Moqed was staying in Room 343 of the Valencia Motel. On September 2, Moqed paid cash for a $30 weekly membership at Gold's Gym in Greenbelt, Maryland.
Three days later he was seen on an ATM camera with Hani Hanjour. After the attacks, employees at an adult video store, Adult Lingerie Center, in Beltsville claimed that Moqed had been in the store three times, although no transaction slips confirmed this.
Attacks
On September 11, 2001, Moqed arrived at Washington Dulles International Airport.
According to the 9/11 Commission Report, Moqed set off the metal detector at the airport and was screened with a hand-wand. He passed the cursory inspection, and was able to board his flight at 7:50. He was seated in 12A, adjacent to Mihdhar who was in 12B. Moqed helped to hijack the plane and assisted Hani Hanjour in crashing the plane into the Pentagon at 9:37 A.M., killing 189 people (64 on the plane and 125 on the ground).
The flight was scheduled to depart at 08:10, but ended up departing 10 minutes late from Gate D26 at Dulles. The last normal radio communications from the aircraft to air traffic control occurred at 08:50:51. At 08:54, Flight 77 began to deviate from its normal, assigned flight path and turned south, and then hijackers set the flight's autopilot heading for Washington, D.C. Passenger Barbara Olson called her husband, United States Solicitor General Theodore Olson, and reported that the plane had been hijacked and that the assailants had box cutters and knives. At 09:37, American Airlines Flight 77 crashed into the west facade of the Pentagon, killing all 64 aboard (including the hijackers), along with 125 on the ground in the Pentagon. In the recovery process at the Pentagon, remains of all five Flight 77 hijackers were identified through a process of elimination, as not matching any DNA samples for the victims, and put into custody of the FBI.
After the attacks his family told Arab News that Moqed had been a fan of sports, and enjoyed travelling. Additionally, the U.S. announced it had found a "Kingdom of Saudi Arabia Student Identity Card" bearing Moqed's name in the rubble surrounding the Pentagon. They also stated that it appeared to have been a forgery.
See also
PENTTBOM
Hijackers in the September 11 attacks
References
External links
The Final 9/11 Commission Report
Photo gallery
2001 deaths
American Airlines Flight 77
Participants in the September 11 attacks
Saudi Arabian al-Qaeda members
1977 births
Saudi Arabian mass murderers
Saudi Arabian murderers of children
Suicides in Virginia |
19933 | https://en.wikipedia.org/wiki/Matthew%20Perry%20%28disambiguation%29 | Matthew Perry (disambiguation) | Matthew Perry (born 1969) is a Canadian-American television and film actor.
Matthew Perry or Matt Perry may also refer to:
Matthew C. Perry (1794–1858), American naval officer who forcibly opened Japan to trade with the West
Matthew Perry Monument (Newport, Rhode Island)
USNS Matthew Perry (T-AKE-9)
Matthew J. Perry (1921–2011), South Carolina's first African American U.S. District Court judge
Matt Perry (rugby union) (born 1977), English rugby union footballer
See also
Matt Parry (born 1994), British racing driver
Matthew Parry (1885–1931), Irish cricketer |
19935 | https://en.wikipedia.org/wiki/Mimeograph | Mimeograph | A mimeograph machine (often abbreviated to mimeo, sometimes called a stencil duplicator) is a low-cost duplicating machine that works by forcing ink through a stencil onto paper. The process is called mimeography, and a copy made by the process is a mimeograph.
Mimeographs, along with spirit duplicators and hectographs, were common technologies for printing small quantities of a document, as in office work, classroom materials, and church bulletins. Early fanzines were printed by mimeograph because the machines and supplies were widely available and inexpensive. Beginning in the late 1960s and continuing into the 1970s, photocopying gradually displaced mimeographs, spirit duplicators, and hectographs.
For even smaller quantities, up to about five, a typist would use carbon paper.
Origins
Use of stencils is an ancient art, but, through chemistry, papers, and presses, techniques advanced rapidly in the late nineteenth century:
Papyrograph
A description of the Papyrograph method of duplication was published by David Owen:
A major beneficiary of the invention of synthetic dyes was a document reproduction technique known as stencil duplicating. Its earliest form was invented in 1874 by Eugenio de Zuccato, a young Italian studying law in London, who called his device the Papyrograph. Zuccato’s system involved writing on a sheet of varnished paper with caustic ink, which ate through the varnish and paper fibers, leaving holes where the writing had been. This sheet – which had now become a stencil – was placed on a blank sheet of paper, and ink rolled over it so that the ink oozed through the holes, creating a duplicate on the second sheet.
The process was commercialized, and in 1895 Zuccato applied for a patent covering stencils prepared by typewriting.
Electric pen
Thomas Edison received US patent 180,857 for Autographic Printing on August 8, 1876. The patent covered the electric pen, used for making the stencil, and the flatbed duplicating press. In 1880 Edison obtained a further patent, US 224,665: "Method of Preparing Autographic Stencils for Printing," which covered the making of stencils using a file plate, a grooved metal plate on which the stencil was placed which perforated the stencil when written on with a blunt metal stylus.
The word mimeograph was first used by Albert Blake Dick when he licensed Edison's patents in 1887.
Dick received Trademark Registration no. 0356815 for the term "Mimeograph" in the US Patent Office. It is currently listed as a dead entry, but shows the A.B. Dick Company of Chicago as the owner of the name.
Over time, the term became generic and is now an example of a genericized trademark. ("Roneograph," also "Roneo machine," was another trademark used for mimeograph machines, the name being a contraction of Rotary Neostyle.)
Cyclostyle
In 1891, David Gestetner patented his Automatic Cyclostyle. This was one of the first rotary machines that retained the flatbed, which passed back and forth under inked rollers. This invention provided for more automated, faster reproductions since the pages were produced and moved by rollers instead of pressing one single sheet at a time.
By 1900, two primary types of mimeographs had come into use: a single-drum machine and a dual-drum machine. The single-drum machine used a single drum for ink transfer to the stencil, while the dual-drum machine used two drums and silk-screens to transfer the ink to the stencils. The single-drum machine (the Roneo, for example) could easily be used for multi-color work by changing the drum, each of which contained ink of a different color. This provided spot color, typically for mastheads; colors could not be mixed.
The mimeograph became popular because it was much cheaper than traditional printing: there was neither typesetting nor skilled labor involved. One individual with a typewriter and the necessary equipment became, in effect, a printing factory, allowing for greater circulation of printed material.
Mimeography process
The image transfer medium was originally a stencil made from waxed mulberry paper. Later this became an immersion-coated long-fibre paper, with the coating being a plasticized nitrocellulose. This flexible waxed or coated sheet is backed by a sheet of stiff card stock, with the two sheets bound at the top.
Once prepared, the stencil is wrapped around the ink-filled drum of the rotary machine. When a blank sheet of paper is drawn between the rotating drum and a pressure roller, ink is forced through the holes on the stencil onto the paper. Early flatbed machines used a kind of squeegee.
The ink originally had a lanolin base and later became an oil in water emulsion. This emulsion commonly uses Turkey-Red Oil (Sulfated Castor oil) which gives it a distinctive and heavy scent.
Preparing stencils
One uses a regular typewriter, with a stencil setting, to create a stencil. The operator loads a stencil assemblage into the typewriter like paper and uses a switch on the typewriter to put it in stencil mode. In this mode, the part of the mechanism which lifts the ribbon between the type element and the paper is disabled so that the bare, sharp type element strikes the stencil directly. The impact of the type element displaces the coating, making the tissue paper permeable to the oil-based ink. This is called "cutting a stencil".
A variety of specialized styluses were used on the stencil to render lettering, illustrations, or other artistic features by hand against a textured plastic backing plate.
Mistakes were corrected by brushing them out with a specially formulated correction fluid and retyping once it had dried. ("Obliterine" was a popular brand of correction fluid in Australia and the United Kingdom.)
Stencils were also made with a thermal process, an infrared method similar to that used by early photocopiers. The common machine was a Thermofax.
Another device, called an electrostencil machine, sometimes was used to make mimeo stencils from a typed or printed original. It worked by scanning the original on a rotating drum with a moving optical head and burning through the blank stencil with an electric spark in the places where the optical head detected ink. It was slow and produced ozone. Text from electrostencils had lower resolution than that from typed stencils, although the process was good for reproducing illustrations. A skilled mimeo operator using an electrostencil and a very coarse halftone screen could make acceptable printed copies of a photograph.
During the declining years of the mimeograph, some people made stencils with early computers and dot-matrix impact printers.
Limitations
Unlike spirit duplicators (where the only ink available is depleted from the master image), mimeograph technology works by forcing a replenishable supply of ink through the stencil master. In theory, the mimeography process could be continued indefinitely, especially if a durable stencil master were used (e.g. a thin metal foil). In practice, most low-cost mimeo stencils gradually wear out over the course of producing several hundred copies. Typically the stencil deteriorates gradually, producing a characteristic degraded image quality until the stencil tears, abruptly ending the print run. If further copies are desired at this point, another stencil must be made.
Often, the stencil material covering the interiors of closed letterforms (e.g. "a", "b", "d", "e", "g", etc.) would fall away during continued printing, causing ink-filled letters in the copies. The stencil would gradually stretch, starting near the top where the mechanical forces were greatest, causing a characteristic "mid-line sag" in the textual lines of the copies, that would progress until the stencil failed completely.
The Gestetner Company (and others) devised various methods to make mimeo stencils more durable.
Compared to spirit duplication, mimeography produced a darker, more legible image. Spirit duplicated images were usually tinted a light purple or lavender, which gradually became lighter over the course of some dozens of copies. Mimeography was often considered "the next step up" in quality, capable of producing hundreds of copies. Print runs beyond that level were usually produced by professional printers or, as the technology became available, xerographic copiers.
Durability
Mimeographed images generally have much better durability than spirit-duplicated images, since the inks are more resistant to ultraviolet light. The primary preservation challenge is the low-quality paper often used, which would yellow and degrade due to residual acid in the treated pulp from which the paper was made. In the worst case, old copies can crumble into small particles when handled. Mimeographed copies have moderate durability when acid-free paper is used.
Contemporary use
Gestetner, Risograph, and other companies still make and sell highly automated mimeograph-like machines that are externally similar to photocopiers. The modern version of a mimeograph, called a digital duplicator, or copyprinter, contains a scanner, a thermal head for stencil cutting, and a large roll of stencil material entirely inside the unit. The stencil material consists of a very thin polymer film laminated to a long-fibre non-woven tissue. It makes the stencils and mounts and unmounts them from the print drum automatically, making it almost as easy to operate as a photocopier. The Risograph is the best known of these machines.
Although mimeographs remain more economical and energy-efficient in mid-range quantities, easier-to-use photocopying and offset printing have replaced mimeography almost entirely in developed countries. Mimeography continues to be used in developing countries because it is a simple, cheap, and robust technology. Many mimeographs can be hand-cranked, requiring no electricity.
Uses and art
Mimeographs and the closely related but distinctly different spirit duplicator process were both used extensively in schools to copy homework assignments and tests. They were also commonly used for low-budget amateur publishing, including club newsletters and church bulletins. They were especially popular with science fiction fans, who used them extensively in the production of fanzines in the middle 20th century, before photocopying became inexpensive.
Letters and typographical symbols were sometimes used to create illustrations, in a precursor to ASCII art. Because changing ink color in a mimeograph could be a laborious process, involving extensively cleaning the machine or, on newer models, replacing the drum or rollers, and then running the paper through the machine a second time, some fanzine publishers experimented with techniques for painting several colors on the pad.
In addition, mimeographs were used by many resistance groups during World War II as a way to print illegal newspapers and publications in countries such as Belgium.
See also
Duplicating machines
Gocco
List of duplicating processes
Mimeoscope
Mimeo Revolution
Spirit duplicator (also known as a "Rexograph" or "Ditto machine" in the US or a "Banda machine" in the UK)
References
Further reading
Hutchison, Howard. Mimeograph: Operation Maintenance and Repair. Blue Ridge Summit: Tab Books, 1979.
External links
Obsolete technologies
Office equipment
Printing devices
Copying
Meteorite

A meteorite is a solid piece of debris from an object, such as a comet, asteroid, or meteoroid, that originates in outer space and survives its passage through the atmosphere to reach the surface of a planet or moon. When the original object enters the atmosphere, various factors such as friction, pressure, and chemical interactions with the atmospheric gases cause it to heat up and radiate energy. It then becomes a meteor and forms a fireball, also known as a shooting star or falling star; astronomers call the brightest examples "bolides". Once it settles on the larger body's surface, the meteor becomes a meteorite. Meteorites vary greatly in size. For geologists, a bolide is a meteorite large enough to create an impact crater.
Meteorites that are recovered after being observed as they transit the atmosphere and impact the Earth are called meteorite falls. All others are known as meteorite finds. There are about 1,412 witnessed falls with specimens in the world's collections, and more than 65,780 well-documented meteorite finds.
Meteorites have traditionally been divided into three broad categories: stony meteorites that are rocks, mainly composed of silicate minerals; iron meteorites that are largely composed of ferronickel; and stony-iron meteorites that contain large amounts of both metallic and rocky material. Modern classification schemes divide meteorites into groups according to their structure, chemical and isotopic composition and mineralogy. Meteorites smaller than 2 mm are classified as micrometeorites. Extraterrestrial meteorites have been found on the Moon and on Mars.
Fall phenomena
Most meteoroids disintegrate when entering the Earth's atmosphere. Usually, five to ten a year are observed to fall and are subsequently recovered and made known to scientists. Few meteorites are large enough to create large impact craters. Instead, they typically arrive at the surface at their terminal velocity and, at most, create a small pit.
Large meteoroids may strike the earth with a significant fraction of their escape velocity (second cosmic velocity), leaving behind a hypervelocity impact crater. The kind of crater will depend on the size, composition, degree of fragmentation, and incoming angle of the impactor. The force of such collisions has the potential to cause widespread destruction. The most frequent hypervelocity cratering events on the Earth are caused by iron meteoroids, which are most easily able to transit the atmosphere intact. Examples of craters caused by iron meteoroids include Barringer Meteor Crater, Odessa Meteor Crater, Wabar craters, and Wolfe Creek crater; iron meteorites are found in association with all of these craters. In contrast, even relatively large stony or icy bodies like small comets or asteroids, up to millions of tons, are disrupted in the atmosphere, and do not make impact craters. Although such disruption events are uncommon, they can cause a considerable concussion to occur; the famed Tunguska event probably resulted from such an incident. Very large stony objects, hundreds of meters in diameter or more, weighing tens of millions of tons or more, can reach the surface and cause large craters but are very rare. Such events are generally so energetic that the impactor is completely destroyed, leaving no meteorites. (The very first example of a stony meteorite found in association with a large impact crater, the Morokweng crater in South Africa, was reported in May 2006.)
Several phenomena are well documented during witnessed meteorite falls too small to produce hypervelocity craters. The fireball that occurs as the meteoroid passes through the atmosphere can appear to be very bright, rivaling the sun in intensity, although most are far dimmer and may not even be noticed during the daytime. Various colors have been reported, including yellow, green, and red. Flashes and bursts of light can occur as the object breaks up. Explosions, detonations, and rumblings are often heard during meteorite falls, which can be caused by sonic booms as well as shock waves resulting from major fragmentation events. These sounds can be heard over wide areas, with a radius of a hundred or more kilometers. Whistling and hissing sounds are also sometimes heard but are poorly understood. Following the passage of the fireball, it is not unusual for a dust trail to linger in the atmosphere for several minutes.
As meteoroids are heated during atmospheric entry, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in shallow thumbprint-like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical "nose cone" or "heat shield" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light-colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat to a greater depth below the surface. Reports vary; some meteorites are reported to be "burning hot to the touch" upon landing, while others are alleged to have been cold enough to condense water and form a frost.
Meteoroids that disintegrate in the atmosphere may fall as meteorite showers, which can range from only a few up to thousands of separate individuals. The area over which a meteorite shower falls is known as its strewn field. Strewn fields are commonly elliptical in shape, with the major axis parallel to the direction of flight. In most cases, the largest meteorites in a shower are found farthest down-range in the strewn field.
Classification
Most meteorites are stony meteorites, classed as chondrites and achondrites. Only about 6% of meteorites are iron meteorites or a blend of rock and metal, the stony-iron meteorites. Modern classification of meteorites is complex. The review paper of Krot et al. (2007) summarizes modern meteorite taxonomy.
About 86% of the meteorites are chondrites, which are named for the small, round particles they contain. These particles, or chondrules, are composed mostly of silicate minerals that appear to have been melted while they were free-floating objects in space. Certain types of chondrites also contain small amounts of organic matter, including amino acids, and presolar grains. Chondrites are typically about 4.55 billion years old and are thought to represent material from the asteroid belt that never coalesced into large bodies. Like comets, chondritic asteroids are some of the oldest and most primitive materials in the Solar System. Chondrites are often considered to be "the building blocks of the planets".
About 8% of the meteorites are achondrites (meaning they do not contain chondrules), some of which are similar to terrestrial igneous rocks. Most achondrites are also ancient rocks, and are thought to represent crustal material of differentiated planetesimals. One large family of achondrites (the HED meteorites) may have originated on the parent body of the Vesta Family, although this claim is disputed. Others derive from unidentified asteroids. Two small groups of achondrites are special, as they are younger and do not appear to come from the asteroid belt. One of these groups comes from the Moon, and includes rocks similar to those brought back to Earth by the Apollo and Luna programs. The other group is almost certainly from Mars and constitutes the only materials from other planets ever recovered by humans.
About 5% of meteorites that have been seen to fall are iron meteorites composed of iron-nickel alloys, such as kamacite and/or taenite. Most iron meteorites are thought to come from the cores of planetesimals that were once molten. As with the Earth, the denser metal separated from silicate material and sank toward the center of the planetesimal, forming its core. After the planetesimal solidified, it broke up in a collision with another planetesimal. Due to the low abundance of iron meteorites in collection areas such as Antarctica, where most of the meteoric material that has fallen can be recovered, it is possible that the percentage of iron-meteorite falls is lower than 5%. This would be explained by a recovery bias; laypeople are more likely to notice and recover solid masses of metal than most other meteorite types. The abundance of iron meteorites relative to total Antarctic finds is 0.4%.
Stony-iron meteorites constitute the remaining 1%. They are a mixture of iron-nickel metal and silicate minerals. One type, called pallasites, is thought to have originated in the boundary zone above the core regions where iron meteorites originated. The other major type of stony-iron meteorites is the mesosiderites.
Tektites (from Greek tektos, molten) are not themselves meteorites, but are rather natural glass objects up to a few centimeters in size that were formed—according to most scientists—by the impacts of large meteorites on Earth's surface. A few researchers have favored tektites originating from the Moon as volcanic ejecta, but this theory has lost much of its support over the last few decades.
Chemistry
In March 2015, NASA scientists reported that complex organic compounds found in DNA and RNA, including uracil, cytosine, and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine and polycyclic aromatic hydrocarbons (PAHs) may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists.
In January 2018, researchers found that 4.5 billion-year-old meteorites found on Earth contained liquid water along with prebiotic complex organic substances that may be ingredients for life.
In November 2019, scientists reported detecting sugar molecules in meteorites for the first time, including ribose, suggesting that chemical processes on asteroids can produce some organic compounds fundamental to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth.
Weathering
Most meteorites date from the early Solar System and are by far the oldest extant material on Earth. Analysis of terrestrial weathering due to water, salt, oxygen, etc. is used to quantify the degree of alteration that a meteorite has experienced. Several qualitative weathering indices have been applied to Antarctic and desertic samples.
The most commonly employed weathering scale, used for ordinary chondrites, ranges from W0 (pristine state) to W6 (heavy alteration).
Fossil examples
"Fossil" meteorites are sometimes discovered by geologists. They represent the highly weathered remains of meteorites that fell to Earth in the remote past and were preserved in sedimentary deposits sufficiently well that they can be recognized through mineralogical and geochemical studies. One limestone quarry in Sweden has produced an anomalously large number — exceeding one hundred — fossil meteorites from the Ordovician, nearly all of which are highly weathered L-chondrites that still resemble the original meteorite under a petrographic microscope, but which have had their original material almost entirely replaced by terrestrial secondary mineralization. The extraterrestrial provenance was demonstrated in part through isotopic analysis of relict spinel grains, a mineral that is common in meteorites, is insoluble in water, and is able to persist chemically unchanged in the terrestrial weathering environment. One of these fossil meteorites, dubbed Österplana 065, appears to represent a distinct type of meteorite that is "extinct" in the sense that it is no longer falling to Earth, the parent body having already been completely depleted from the reservoir of near-Earth objects.
Collection
A "meteorite fall", also called an "observed fall", is a meteorite collected after its arrival was observed by people or automated devices. Any other meteorite is called a "meteorite find". There are more than 1,100 documented falls listed in widely used databases, most of which have specimens in modern collections. The Meteoritical Bulletin Database lists 1,180 confirmed falls.
Falls
Most meteorite falls are collected on the basis of eyewitness accounts of the fireball or the impact of the object on the ground, or both. Therefore, despite the fact that meteorites fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with higher human population densities such as Europe, Japan, and northern India.
A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first of these was the Přibram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite.
Following the Přibram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US. This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree, in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Přibram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002.
NASA has an automated system that detects meteors and calculates the orbit, magnitude, ground track, and other parameters over the southeast USA, which often detects a number of events each night.
Finds
Until the twentieth century, only a few hundred meteorite finds had ever been discovered. More than 80% of these were iron and stony-iron meteorites, which are easily distinguished from local rocks. To this day, few stony meteorites are reported each year that can be considered to be "accidental" finds. The reason there are now more than 30,000 meteorite finds in the world's collections started with the discovery by Harvey H. Nininger that meteorites are much more common on the surface of the Earth than was previously thought.
United States
Nininger's strategy was to search for meteorites in the Great Plains of the United States, where the land was largely cultivated and the soil contained few rocks. Between the late 1920s and the 1950s, he traveled across the region, educating local people about what meteorites looked like and what to do if they thought they had found one, for example, in the course of clearing a field. The result was the discovery of over 200 new meteorites, mostly stony types.
In the late 1960s, Roosevelt County, New Mexico was found to be a particularly good place to find meteorites. After the discovery of a few meteorites in 1967, a public awareness campaign resulted in the finding of nearly 100 new specimens in the next few years, with many found by a single person, Ivan Wilson. In total, nearly 140 meteorites have been found in the region since 1967. In the area of the finds, the ground was originally covered by a shallow, loose soil sitting atop a hardpan layer. During the Dust Bowl era, the loose soil was blown off, leaving any rocks and meteorites present stranded on the exposed surface.
Beginning in the mid-1960s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. To date, thousands of meteorites have been recovered from the Mojave, Sonoran, Great Basin, and Chihuahuan Deserts, with many being recovered on dry lake beds. Significant finds include the three-tonne Old Woman meteorite, currently on display at the Desert Discovery Center in Barstow, California, and the Franconia and Gold Basin meteorite strewn fields; hundreds of kilograms of meteorites have been recovered from each.
A number of finds from the American Southwest have been submitted with false find locations, as many finders think it is unwise to publicly share that information for fear of confiscation by the federal government and competition with other hunters at published find sites.
Several of the meteorites found recently are currently on display in the Griffith Observatory in Los Angeles, and at UCLA's Meteorite Gallery.
Antarctica
A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains. With this discovery came the realization that movement of ice sheets might act to concentrate meteorites in certain areas. After a dozen other specimens were found in the same place in 1973, a Japanese expedition was launched in 1974 dedicated to the search for meteorites. This team recovered nearly 700 meteorites.
Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the Antarctic Search for Meteorites (ANSMET) program. European teams, starting with a consortium called "EUROMET" in the 1990/91 season, and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide have also conducted systematic searches for Antarctic meteorites.
The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003).
Australia
At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia. Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia. Systematic searches between about 1971 and the present recovered more than 500 others, ~300 of which are currently well characterized. The meteorites can be found in this region because the land presents a flat, featureless plain covered by limestone. In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks.
The Sahara
In 1986–87, a German team installing a network of seismic stations while prospecting for oil discovered about 65 meteorites on a flat desert plain southeast of Dirj (Daraj), Libya. A few years later, a desert enthusiast saw photographs of meteorites being recovered by scientists in Antarctica, and thought that he had seen similar occurrences in northern Africa. In 1989, he recovered about 100 meteorites from several distinct locations in Libya and Algeria. Over the next several years, he and others who followed found at least 400 more meteorites. The find locations were generally in regions known as regs or hamadas: flat, featureless areas covered only by small pebbles and minor amounts of sand. Dark-colored meteorites can be easily spotted in these places. In the case of several meteorite fields, such as Dar al Gani, Dhofar, and others, favorable light-colored geology consisting of basic rocks (clays, dolomites, and limestones) makes meteorites particularly easy to identify.
Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s, most meteorites were deposited in or purchased by museums and similar institutions where they were exhibited and made available for scientific research. The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica), led to a rapid rise in commercial collection of meteorites. This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to more than 500.
Northwest Africa
Meteorite markets came into existence in the late 1990s, especially in Morocco. This trade was driven by Western commercialization and an increasing number of collectors. The meteorites were supplied by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. When they get classified, they are named "Northwest Africa" (abbreviated NWA) followed by a number. It is generally accepted that NWA meteorites originate in Morocco, Algeria, Western Sahara, Mali, and possibly even further afield. Nearly all of these meteorites leave Africa through Morocco. Scores of important meteorites, including Lunar and Martian ones, have been discovered and made available to science via this route. A few of the more notable meteorites recovered include Tissint and Northwest Africa 7034. Tissint was the first witnessed Martian meteorite fall in over fifty years; NWA 7034 is the oldest meteorite known to come from Mars, and is a unique water-bearing regolith breccia.
Arabian Peninsula
In 1999, meteorite hunters discovered that the deserts in southern and central Oman were also favorable for the collection of many specimens. The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali, had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area both for scientists and collectors. Early expeditions to Oman were mainly done by commercial meteorite dealers; however, international teams of Omani and European scientists have also now collected specimens.
The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed national treasures. This new law provoked a small international incident, as its implementation preceded any public notification of such a law, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia, but also including members from the US and several European countries.
In human affairs
Meteorites have figured into human culture since their earliest discovery as ceremonial or religious objects, as the subject of writing about events occurring in the sky and as a source of peril. The oldest known iron artifacts are nine small beads hammered from meteoritic iron. They were found in northern Egypt and have been securely dated to 3200 BC.
Ceremonial or religious use
Although the use of the metal found in meteorites is also recorded in myths of many countries and cultures where the celestial source was often acknowledged, scientific documentation only began in the last few centuries.
Meteorite falls may have been the source of cultish worship. The cult in the Temple of Artemis at Ephesus, one of the Seven Wonders of the Ancient World, possibly originated with the observation and recovery of a meteorite that was understood by contemporaries to have fallen to the earth from Jupiter, the principal Roman deity.
There are reports that a sacred stone was enshrined at the temple that may have been a meteorite.
The Black Stone set into the wall of the Kaaba has often been presumed to be a meteorite, but the little available evidence for this is inconclusive.
Some Native Americans treated meteorites as ceremonial objects. In 1915, an iron meteorite was found in a Sinagua (c. 1100–1200 AD) burial cyst near Camp Verde, Arizona, respectfully wrapped in a feather cloth. A small pallasite was found in a pottery jar in an old burial at Pojoaque Pueblo, New Mexico. Nininger reports several other such instances, in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron in Hopewell burial mounds, and the discovery of the Winona meteorite in a Native American stone-walled crypt.
Historical writings
In medieval China during the Song dynasty, a meteorite strike was recorded by Shen Kuo in 1064 AD near Changzhou. He reported that "a loud noise that sounded like thunder was heard in the sky; a giant star, almost like the moon, appeared in the southeast", and that the crater, with the still-hot meteorite inside, was later found nearby.
Two of the oldest recorded meteorite falls in Europe are the Elbogen (1400) and Ensisheim (1492) meteorites. The German physicist Ernst Florens Chladni was the first to publish (in 1794) the idea that meteorites might be rocks that originated not from Earth, but from space. His booklet was "On the Origin of the Iron Masses Found by Pallas and Others Similar to It, and on Some Associated Natural Phenomena". In it he compiled all available data on several meteorite finds and falls and concluded that they must have their origins in outer space. The scientific community of the time responded with resistance and mockery. It took nearly ten years before a general acceptance of the origin of meteorites was achieved through the work of the French scientist Jean-Baptiste Biot and the British chemist Edward Howard. Biot's study, initiated by the French Academy of Sciences, was prompted by a fall of thousands of meteorites on 26 April 1803 from the skies of L'Aigle, France.
Striking people or property
Throughout history, many first- and second-hand reports speak of meteorites killing humans and other animals. One example, from 1490 AD in China, purportedly killed thousands of people. John Lewis has compiled some of these reports, and summarizes: "No one in recorded history has ever been killed by a meteorite in the presence of a meteoriticist and a medical doctor" and "reviewers who make sweeping negative conclusions usually do not cite any of the primary publications in which the eyewitnesses describe their experiences, and give no evidence of having read them".
Modern reports of meteorite strikes include:
In 1954, in Sylacauga, Alabama, a stony chondrite, the Hodges meteorite (or Sylacauga meteorite), crashed through a roof and injured an occupant.
A fragment of the Mbale meteorite fall in Uganda struck a youth, causing no injury.
In October 2021, a meteorite penetrated the roof of a house in Golden, British Columbia, landing on an occupant's bed.
Notable examples
Naming
Meteorites are always named for the places they were found, where practical, usually a nearby town or geographic feature. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). The name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors.
Terrestrial
Allende – largest known carbonaceous chondrite (Chihuahua, Mexico, 1969).
Allan Hills A81005 – First meteorite determined to be of lunar origin.
Allan Hills 84001 – Mars meteorite that was claimed to prove the existence of life on Mars.
The Bacubirito Meteorite (Meteorito de Bacubirito) – A meteorite estimated to weigh .
Campo del Cielo – a group of iron meteorites associated with a crater field (of the same name) of at least 26 craters in West Chaco Province, Argentina. The total weight of meteorites recovered exceeds 100 tonnes.
Canyon Diablo – Associated with Meteor Crater in Arizona.
Cape York – One of the largest meteorites in the world. A 34-ton fragment, called "Ahnighito", is exhibited at the American Museum of Natural History; it is the largest meteorite on exhibit in any museum.
Gibeon – A large iron meteorite in Namibia that created the largest known strewn field.
Hoba – The largest known intact meteorite.
Kaidun – An unusual carbonaceous chondrite.
Mbozi meteorite – A 16-metric-ton ungrouped iron meteorite in Tanzania.
Murchison – A carbonaceous chondrite found to contain nucleobases – building blocks of life.
Nōgata – The oldest meteorite whose fall can be dated precisely (to 19 May 861, at Nōgata)
Orgueil – A famous meteorite due to its especially primitive nature and high presolar grain content.
Sikhote-Alin – Massive iron meteorite impact event that occurred on 12 February 1947.
Tucson Ring – Ring-shaped meteorite, used by a blacksmith as an anvil, in Tucson, Arizona. Currently at the Smithsonian.
Willamette – The largest meteorite ever found in the United States.
2007 Carancas impact event – On 15 September 2007, a stony meteorite that may have weighed as much as 4000 kilograms created a crater 13 meters in diameter near the village of Carancas, Peru.
2013 Russian meteor event – a 17-metre-diameter, 10,000-tonne asteroid hit the atmosphere above Chelyabinsk, Russia, at 18 km/s around 09:20 local time (03:20 UTC) on 15 February 2013, producing a very bright fireball in the morning sky. A number of small meteorite fragments have since been found nearby.
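As a back-of-the-envelope illustration (a sketch, not from the source), the kinetic energy implied by the quoted Chelyabinsk figures of roughly 10,000 tonnes at 18 km/s can be computed directly:

```python
# Kinetic energy E = 1/2 m v^2, using the mass and speed quoted
# in the entry above (both values are rough estimates).
mass_kg = 10_000 * 1_000       # 10,000 tonnes in kg
speed_m_s = 18_000             # 18 km/s in m/s

energy_j = 0.5 * mass_kg * speed_m_s ** 2
megatons_tnt = energy_j / 4.184e15   # conventional: 1 Mt TNT = 4.184e15 J

print(f"{energy_j:.3g} J")           # ~1.62e+15 J
print(f"{megatons_tnt:.2f} Mt TNT")  # ~0.39
```

Published energy estimates for the event are of the same order, though somewhat higher, since the quoted mass and speed are themselves rough.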
Extraterrestrial
Bench Crater meteorite (Apollo 12, 1969) and the Hadley Rille meteorite (Apollo 15, 1971) – Fragments of asteroids found among the samples collected on the Moon.
Block Island meteorite and Heat Shield Rock – Discovered on Mars by the Opportunity rover among four other iron meteorites. Two nickel-iron meteorites were identified by the Spirit rover. (See also: Mars rocks)
Large impact craters
Acraman crater in South Australia ( diameter)
Ames crater in Major County, Oklahoma ( diameter)
Brent crater in northern Ontario ( diameter)
Chesapeake Bay impact crater ( diameter)
Chicxulub Crater off the coast of Yucatán Peninsula ( diameter)
Clearwater Lakes a double crater impact in Québec, Canada ( in diameter)
Lonar crater in India ( diameter)
Lumparn in Åland, in the Baltic Sea ( diameter)
Manicouagan Reservoir in Québec, Canada ( diameter)
Manson crater in Iowa ( crater is buried)
Meteor Crater in Arizona, also known as "Barringer Crater", the first confirmed terrestrial impact crater. ( diameter)
Mjølnir impact crater in the Barents Sea ( diameter)
Nördlinger Ries crater in Bavaria, Germany ( diameter)
Popigai crater in Russia ( diameter)
Siljan (lake) in Sweden, largest crater in Europe ( diameter)
Sudbury Basin in Ontario, Canada ( diameter).
Ungava Bay in Québec, Canada ()
Vredefort Crater in South Africa, the largest known impact crater on Earth ( diameter from an estimated wide meteorite).
Disintegrating meteoroids
Tunguska event in Siberia 1908 (no crater)
Vitim event in Siberia 2002 (no crater)
Chelyabinsk event in Russia 2013 (no known crater)
See also
Atmospheric focusing
Glossary of meteoritics
List of impact craters on Earth
List of Martian meteorites
List of meteorite minerals
List of rocks on Mars
List of possible impact structures on Earth
Meteor shower
Meteorite find
Meteoroid
Micrometeorite
Panspermia
References
External links
Current meteorite news articles
The British and Irish Meteorite Society
The Natural History Museum's meteorite catalogue database
Meteoritical Society
Earth Impact Database
Every Recorded Meteorite Impact on Earth from Tableau Software
Meteor Impact Craters Around the World
Geophysics

19938 | https://en.wikipedia.org/wiki/Mega- | Mega-

Mega is a unit prefix in metric systems of units denoting a factor of one million (10⁶ or 1,000,000). It has the unit symbol M. It was confirmed for use in the International System of Units (SI) in 1960. Mega comes from the Ancient Greek μέγας (mégas), meaning "great".
Common examples of usage
Megapixel: 1 million pixels in a digital camera
One megatonne of TNT equivalent amounts to approx. 4 petajoules and is the approximate energy released on igniting one million tonnes of TNT. The unit is often used in measuring the explosive power of nuclear weapons.
Megahertz: frequency of electromagnetic radiation for radio and television broadcasting, GSM, etc. 1 MHz = 1,000,000 Hz.
Megabyte: unit of information equal to one million bytes (SI standard).
Megawatt: equal to one million watts of power. It is commonly used to measure the output of power plants, as well as the power consumption of electric locomotives, data centers, and other entities that heavily consume electricity.
Megadeath: (or megacorpse) is one million human deaths, usually used in reference to projected number of deaths from a nuclear explosion. The term was used by scientists and thinkers who strategized likely outcomes of all-out nuclear warfare.
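The megatonne-of-TNT figure above can be checked numerically (a minimal sketch; the conventional definition of 4.184×10⁹ joules per tonne of TNT equivalent is assumed):

```python
# One megatonne of TNT equivalent in joules, using the
# conventional definition of 4.184e9 J per tonne of TNT.
J_PER_TONNE_TNT = 4.184e9
mega = 10 ** 6                     # the SI "mega" factor

joules = mega * J_PER_TONNE_TNT    # ~4.184e15 J
petajoules = joules / 10 ** 15
print(petajoules)                  # 4.184 -- i.e. "approx. 4 petajoules"
```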
Exponentiation
When units occur in exponentiation, such as in square and cubic forms, any multiples-prefix is considered part of the unit, and thus included in the exponentiation.
1 Mm² means one square megametre, the area of a square measuring 1 Mm by 1 Mm (1,000 km by 1,000 km), which is 10¹² m², and not 10⁶ m².
1 Mm³ means one cubic megametre, the volume of a cube measuring 1 Mm by 1 Mm by 1 Mm, which is 10¹⁸ m³, and not 10⁶ m³.
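The rule above can be verified with integer arithmetic (a minimal sketch, using 1 Mm = 10⁶ m):

```python
# The prefix is part of the unit, so exponentiation applies to the
# prefixed unit: 1 Mm^2 = (10**6 m)**2, not 10**6 m^2.
m_per_Mm = 10 ** 6                  # metres in one megametre

square_megametre = m_per_Mm ** 2    # 10**12 m^2
cubic_megametre = m_per_Mm ** 3     # 10**18 m^3

print(square_megametre)             # 1000000000000
print(cubic_megametre)              # 1000000000000000000
```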
Computing
In some fields of computing, mega may denote 1,048,576 (2²⁰) information units, for example a megabyte or a megaword, but it denotes 1,000,000 (10⁶) units of other quantities, for example transfer rates: 1 megabit per second = 1,000,000 bits per second. The prefix mebi- has been suggested as a prefix for 2²⁰ to avoid ambiguity.
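The decimal/binary ambiguity described above amounts to a difference of roughly 5% at the mega scale, as a quick computation shows (a sketch):

```python
# "Megabyte" under the two conventions mentioned above.
si_megabyte = 10 ** 6       # SI mega: 1,000,000 bytes
mebibyte = 2 ** 20          # binary "mebi": 1,048,576 bytes

difference = mebibyte - si_megabyte
percent_larger = (mebibyte / si_megabyte - 1) * 100

print(difference)                 # 48576 bytes
print(round(percent_larger, 2))   # 4.86
```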
See also
Binary prefix
Mebibyte
Order of magnitude
RKM code
References
External links
BIPM website
SI prefixes
19940 | https://en.wikipedia.org/wiki/Maciej%20P%C5%82a%C5%BCy%C5%84ski | Maciej Płażyński

Maciej Płażyński (10 February 1958 – 10 April 2010) was a Polish liberal-conservative politician.
Biography
Płażyński was born in Młynary. He began his political career in 1980–1981 as one of the leaders of the Students' Solidarity; he was governor of the Gdańsk Voivodship from August 1990 to July 1996, and was elected to the Sejm (the lower house of the Polish parliament) in September 1997. To date he is the longest-serving Marshal of the Sejm of the Third Republic of Poland.
In January 2001, he founded the Civic Platform political party with Donald Tusk and Andrzej Olechowski. He left Civic Platform for personal reasons and at the time of his death was an independent MP. He was a member of the Kashubian-Pomeranian Association. He was later chosen as chairman of the Association "Polish Community".
Maciej Płażyński was married to Elżbieta Płażyńska and together they had three children: Jakub, Katarzyna, and Kacper.
He was killed on 10 April 2010 when the Tupolev Tu-154 of the 36th Special Aviation Regiment, which was also carrying the President of Poland, Lech Kaczyński, crashed while landing at Smolensk-North airport near Smolensk, Russia, killing all aboard.
Honours and awards
In 2000, Płażyński was awarded the Order of Merit of the Italian Republic, First Class. He received the titles of honorary citizen of Młynary, Puck, Pionki and Lidzbark Warmiński.
On 16 April 2010 he was posthumously awarded the Grand Cross of the Order of Polonia Restituta. He was also awarded a Gold Medal of Gloria Artis.
See also
Solidarity
References
External links
Official site
1958 births
2010 deaths
People from Młynary
Polish lawyers
Marshals of the Sejm of the Third Polish Republic
Members of the Polish Sejm 1997–2001
Members of the Polish Sejm 2001–2005
Members of the Senate of Poland 2005–2007
Victims of the Smolensk air disaster
University of Gdańsk alumni
Grand Crosses of the Order of Polonia Restituta
Knights Grand Cross of the Order of Merit of the Italian Republic
Recipients of the Gold Medal for Merit to Culture – Gloria Artis
Polish Roman Catholics
Members of the Polish Sejm 2007–2011
Political party founders

19941 | https://en.wikipedia.org/wiki/Mark%20Bingham | Mark Bingham

Mark Kendall Bingham (May 22, 1970 – September 11, 2001) was an American public relations executive who founded his own company, the Bingham Group. During the September 11 attacks in 2001, he was a passenger on board United Airlines Flight 93. Bingham was among the passengers who, along with Todd Beamer, Tom Burnett and Jeremy Glick, formed the plan to retake the plane from the hijackers, and led the effort that resulted in the crash of the plane into a field near Shanksville, Pennsylvania, thwarting the hijackers' plan to crash the plane into a building in Washington, D.C., most likely either the U.S. Capitol Building or the White House.
His heroic efforts on United Flight 93, as well as his athletic physique, were noted for having prompted a reassessment of gay stereotypes.
Early life
Mark Bingham was born in 1970, the only child of Alice Hoagland and Gerald Bingham. When Mark was two years old, his parents divorced. Raised by his mother and her family, Mark grew up in Miami, Florida, and Southern California before moving to the San Jose area in 1983. Bingham was an aspiring filmmaker, and as a teenager he began using a video camera as a personal diary to document his life and those of his family and friends. He graduated from Los Gatos High School as a two-year captain of his rugby team in 1988. As an undergraduate at the University of California, Berkeley, Bingham played on two of Coach Jack Clark's national-championship-winning rugby teams in the early 1990s. He also joined the Chi Psi fraternity, eventually becoming its president. Upon graduation at the age of twenty-one, Bingham came out as gay to his family and friends.
Rugby and business career
A large athlete, Bingham also played for the gay-inclusive rugby union team San Francisco Fog RFC. He played No. 8 in their first two friendly matches. He played in their first tournament, and taught his teammates his favorite rugby songs.
At the time of his death, Bingham had recently opened a satellite office of his public relations firm in New York City and was spending more time on the East Coast. He discussed plans with his friend Scott Glaessgen to form a New York City rugby team, the Gotham Knights.
September 11, 2001
On the morning of September 11, Bingham overslept and nearly missed his flight, on his way to San Francisco to be an usher in his fraternity brother Joseph Salama's wedding. He arrived at Terminal A at Newark International Airport at 7:40 am, ran to Gate 17, and was the last passenger to board United Airlines Flight 93, taking seat 4D, next to passenger Tom Burnett.
United Flight 93 was scheduled to depart at 8:00 am, but the Boeing 757 did not depart until 42 minutes later due to runway traffic delays. Four minutes later, American Airlines Flight 11 crashed into the World Trade Center's North Tower. Fifteen minutes later, at 9:03 am, as United Flight 175 crashed into the South Tower, United 93 climbed to cruising altitude, heading west over New Jersey and into Pennsylvania. At 9:25 am, Flight 93 was above eastern Ohio, and pilots Jason Dahl and LeRoy Homer received an alert, "Beware of cockpit intrusion," on the cockpit computer device ACARS (Aircraft Communications and Reporting System). Three minutes later, Cleveland controllers could hear screams over the cockpit's open microphone. Moments later, the hijackers, led by the Lebanese Ziad Samir Jarrah, took over the plane's controls and told passengers, "Keep remaining sitting. We have a bomb on board". Bingham and the other passengers were herded into the back of the plane. Within six minutes, the plane changed course and headed for Washington, D.C. Several of the passengers made phone calls to loved ones, who informed them about the two planes that had crashed into the World Trade Center.
After the hijackers veered the plane sharply south, the passengers decided to act. Bingham, along with Todd Beamer, Tom Burnett and Jeremy Glick, formed a plan to take the plane back from the hijackers. They relayed this plan to their loved ones and the authorities via telephone. Bingham got through to his aunt's home in California. Bingham stated, "This is Mark. I want to let you guys know that I love you, in case I don't see you again...I'm on United Airlines, Flight 93. It's being hijacked." According to The Week, Hoagland formed the impression that her son spoke "confidentially" with a fellow passenger, to form a plan to retake the plane. According to ABC News, the call cut off after about three minutes. Hoagland, after seeing news reports of the plane's hijacking, called him back and left two messages for him, calmly saying, "Mark, this is your mom. The news is that it's been hijacked by terrorists. They are planning to probably use the plane as a target to hit some site on the ground. I would say go ahead and do everything you can to overpower them, because they are hellbent. Try to call me back if you can." Bingham, Burnett, and Glick were each tall, well-built, and fit. As they made their decision to retake the plane, Glick related this over the phone to his wife, Lyz. Fellow passenger Todd Beamer, speaking to GTE-Verizon operator Lisa Jefferson and the FBI, related that he too was part of this group. They were joined by other passengers, including Lou Nacke, Rich Guadagno, Alan Beaven, Honor Elizabeth Wainio, Linda Gronlund, and William Cashman, along with flight attendants Sandra Bradshaw and Cee Cee Ross-Lyles, in discussing their options and voting on a course of action, ultimately deciding to storm the cockpit and take over the plane.
According to the 9/11 Commission Report, after the plane's voice data recorder was recovered, it revealed pounding and crashing sounds against the cockpit door and shouts and screams in English. "Let's get them!" a passenger cries. A hijacker shouts, "Allah akbar!" ("God is great"). Jarrah repeatedly pitched the plane to knock passengers off their feet, but the passengers apparently managed to invade the cockpit, where one was heard shouting, "In the cockpit. If we don't, we'll die." At 10:02 am, a hijacker ordered, "Pull it down! Pull it down!" The 9/11 Commission later reported that the plane's control wheel was turned hard to the right, causing it to roll on its back and plow into an empty field in Shanksville, Pennsylvania, killing everyone on board. The plane was 20 minutes of flying time away from its suspected target, the White House or the U.S. Capitol Building in Washington, D.C. According to Vice President Dick Cheney, President George W. Bush gave the order to shoot the plane down.
Legacy
Bingham is survived by his parents and the Hoagland family members who played a part in his upbringing, by his stepmother and various stepsiblings, and by his partner of six years, Paul Holm. Holm described Bingham as a brave, competitive man, saying, "He hated to lose—at anything." He was known to proudly display a scar he received after being gored at the Running of the Bulls in Pamplona, Spain. He is buried at Madronia Cemetery, Saratoga, California.
U.S. Senators John McCain and Barbara Boxer honored Bingham on September 17, 2001, in a ceremony for San Francisco Bay Area victims of the attacks, presenting a folded American flag to Paul Holm.
The Mark Kendall Bingham Memorial Tournament (referred to as the Bingham Cup), a biennial international rugby union competition predominantly for gay and bisexual men, was established in 2002 in his memory.
Bingham, along with the other passengers on Flight 93, was posthumously awarded the Arthur Ashe Courage Award in 2002.
The Eureka Valley Recreation Center's Gymnasium in San Francisco was renamed the Mark Bingham Gymnasium in August 2002.
Singer Melissa Etheridge dedicated the song "Tuesday Morning" in 2004 to his memory.
Beginning in 2005, the Mark Bingham Award for Excellence in Achievement has been awarded by the California Alumni Association of the University of California, Berkeley to a young alumnus or alumna at its annual Charter Gala.
At the National 9/11 Memorial, Bingham and other passengers from Flight 93 are memorialized at the South Pool, on Panel S-67.
At the Flight 93 National Memorial in Pennsylvania, Bingham's name is located on one of the 40 panels of polished granite that comprise the Memorial's Wall of Names.
The 2013 feature-length documentary The Rugby Player focuses on Bingham and the bond he had with his mother, Alice Hoagland, a former United Airlines flight attendant who, following his death, became an authority on airline safety and a champion of LGBT rights. Described by ESPN as "an insightful and stereotype-shattering exploration" of Bingham's life, the film, which is directed by Scott Gracheff, relies on the vast amount of video footage Bingham himself shot beginning in his teens until weeks before his death. The film's alternate title, With You, is a popular rugby term, and one of Bingham's favorite expressions. The film premiered on Australia's ABC2 on August 20, 2014.
References
Further reading
Barrett, Jon Hero of Flight 93: Mark Bingham, Advocate Books, 2002
"UNITED FLIGHT 93: On Doomed Flight, Passengers Vowed to Perish Fighting" The New York Times. September 13, 2001
External links
Mark Bingham: a Tribute to a Wonderful Man, a Great Friend, a Loving Brother, and an American Hero.
SFFOG.ORG Mark's rugby team, the S.F. Fog (Mark's Memorial page).
Advocate Magazine article on Bingham.
Daily Cal Article referencing Mark attacking Stanford tree.
Mark Bingham Scholarship Fund.
Official Website of The Bingham Cup.
Team Bingham - Mark Bingham Scholarship fundraising Organization.
After September 11: Farewell to a Hero (California Monthly tribute).
With You: The Mark Bingham Story
1970 births
2001 deaths
American rugby union players
American terrorism victims
Gay sportsmen
LGBT people from California
LGBT sportspeople from the United States
LGBT rugby union players
Male murder victims
People from Los Gatos, California
Sportspeople from Phoenix, Arizona
People murdered in Pennsylvania
Terrorism deaths in Pennsylvania
United Airlines Flight 93 victims
University of California, Berkeley alumni
American sportsmen
Burials in California
West Valley College alumni
American business executives
20th-century LGBT people

19942 | https://en.wikipedia.org/wiki/Manner%20of%20articulation | Manner of articulation

In articulatory phonetics, the manner of articulation is the configuration and interaction of the articulators (speech organs such as the tongue, lips, and palate) when making a speech sound. One parameter of manner is stricture, that is, how closely the speech organs approach one another. Others include those involved in the r-like sounds (taps and trills), and the sibilancy of fricatives.
The concept of manner is mainly used in the discussion of consonants, although the movement of the articulators will also greatly alter the resonant properties of the vocal tract, thereby changing the formant structure of speech sounds that is crucial for the identification of vowels. For consonants, the place of articulation and the degree of phonation or voicing are considered separately from manner, as being independent parameters. Homorganic consonants, which have the same place of articulation, may have different manners of articulation. Often nasality and laterality are included in manner, but some phoneticians, such as Peter Ladefoged, consider them to be independent.
Broad classifications
Manners of articulation with substantial obstruction of the airflow (stops, fricatives, affricates) are called obstruents. These are prototypically voiceless, but voiced obstruents are extremely common as well. Manners without such obstruction (nasals, liquids, approximants, and also vowels) are called sonorants because they are nearly always voiced. Voiceless sonorants are uncommon, but are found in Welsh and Classical Greek (the spelling "rh"), in Standard Tibetan (the "lh" of Lhasa), and the "wh" in those dialects of English that distinguish "which" from "witch".
Sonorants may also be called resonants, and some linguists prefer that term, restricting the word 'sonorant' to non-vocoid resonants (that is, nasals and liquids, but not vowels or semi-vowels). Another common distinction is between occlusives (stops, nasals and affricates) and continuants (all else).
Stricture
From greatest to least stricture, speech sounds may be classified along a cline as stop consonants (with occlusion, or blocked airflow), fricative consonants (with partially blocked and therefore strongly turbulent airflow), approximants (with only slight turbulence), tense vowels, and finally lax vowels (with full unimpeded airflow). Affricates often behave as if they were intermediate between stops and fricatives, but phonetically they are sequences of a stop and fricative.
Over time, sounds in a language may move along the cline toward less stricture in a process called lenition or towards more stricture in a process called fortition.
Other parameters
Sibilants are distinguished from other fricatives by the shape of the tongue and how the airflow is directed over the teeth. Fricatives at coronal places of articulation may be sibilant or non-sibilant, sibilants being the more common.
Flaps (also called taps) are similar to very brief stops. However, their articulation and behavior are distinct enough to be considered a separate manner, rather than just length. The main articulatory difference between flaps and stops is that, due to the greater length of stops compared to flaps, a build-up of air pressure occurs behind a stop which does not occur behind a flap. This means that when the stop is released, there is a burst of air as the pressure is relieved, while for flaps there is no such burst.
Trills involve the vibration of one of the speech organs. Since trilling is a separate parameter from stricture, the two may be combined. Increasing the stricture of a typical trill results in a trilled fricative. Trilled affricates are also known.
Nasal airflow may be added as an independent parameter to any speech sound. It is most commonly found in nasal occlusives and nasal vowels, but nasalized fricatives, taps, and approximants are also found. When a sound is not nasal, it is called oral.
Laterality is the release of airflow at the side of the tongue. This can be combined with other manners, resulting in lateral approximants (such as the pronunciation of the letter L in the English word "let"), lateral flaps, and lateral fricatives and affricates.
Individual manners
Stop, often called a plosive, is an oral occlusive, where there is occlusion (blocking) of the oral vocal tract, and no nasal air flow, so the air flow stops completely. Examples include English /p, t, k/ (voiceless) and /b, d, g/ (voiced). If the consonant is voiced, the voicing is the only sound made during occlusion; if it is voiceless, a stop is completely silent. What we hear as a /p/ or /k/ is the effect that the onset of the occlusion has on the preceding vowel, as well as the release burst and its effect on the following vowel. The shape and position of the tongue (the place of articulation) determine the resonant cavity that gives different stops their characteristic sounds. All languages have stops.
Nasal, a nasal occlusive, where there is occlusion of the oral tract, but air passes through the nose. The shape and position of the tongue determine the resonant cavity that gives different nasals their characteristic sounds. Examples include English /m/ and /n/. Nearly all languages have nasals, the only exceptions being in the area of Puget Sound and a single language on Bougainville Island.
Fricative, sometimes called spirant, where there is continuous frication (turbulent and noisy airflow) at the place of articulation. Examples include English /f, s/ (voiceless) and /v, z/ (voiced). Most languages have fricatives, though many have only an /h/. However, the Indigenous Australian languages are almost completely devoid of fricatives of any kind.
Sibilants are a type of fricative where the airflow is guided by a groove in the tongue toward the teeth, creating a high-pitched and very distinctive sound. These are by far the most common fricatives. Fricatives at coronal (front of tongue) places of articulation are usually, though not always, sibilants. English sibilants include /s/ and /z/.
Lateral fricatives are a rare type of fricative, where the frication occurs on one or both sides of the edge of the tongue. The "ll" of Welsh and the "hl" of Zulu are lateral fricatives.
Affricate, which begins like a stop, but this releases into a fricative rather than having a separate release of its own. The English letters "ch" and "j" represent affricates. Affricates are quite common around the world, though less common than fricatives.
Flap, often called a tap, is a momentary closure of the oral cavity. The "tt" of "utter" and the "dd" of "udder" are pronounced as a flap in North American and Australian English. Many linguists distinguish taps from flaps, but there is no consensus on what the difference might be. No language relies on such a difference. There are also lateral flaps.
Trill, in which the articulator (usually the tip of the tongue) is held in place, and the airstream causes it to vibrate. The double "r" of Spanish "perro" is a trill. Trills and flaps, where there are one or more brief occlusions, constitute a class of consonant called rhotics.
Approximant, where there is very little obstruction. Examples include English /w/ and /r/. In some languages, such as Spanish, there are sounds that seem to fall between fricative and approximant.
One use of the word semivowel, sometimes called a glide, is a type of approximant, pronounced like a vowel but with the tongue closer to the roof of the mouth, so that there is slight turbulence. In English, /w/ is the semivowel equivalent of the vowel /u/, and /j/ (spelled "y") is the semivowel equivalent of the vowel /i/ in this usage. Other descriptions use semivowel for vowel-like sounds that are not syllabic, but do not have the increased stricture of approximants. These are found as elements in diphthongs. The word may also be used to cover both concepts. The term glide is newer than semivowel, being used to indicate an essential quality of sounds such as /w/ and /j/, which is the movement (or glide) from their initial position (/u/ and /i/, respectively) to a following vowel.
Lateral approximants, usually shortened to lateral, are a type of approximant pronounced with the side of the tongue. English /l/ is a lateral. Together with the rhotics, which have similar behavior in many languages, these form a class of consonant called liquids.
Other airstream initiations
All of these manners of articulation are pronounced with an airstream mechanism called pulmonic egressive, meaning that the air flows outward, and is powered by the lungs (actually the ribs and diaphragm). Other airstream mechanisms are possible. Sounds that rely on some of these include:
Ejectives, which are glottalic egressive. That is, the airstream is powered by an upward movement of the glottis rather than by the lungs or diaphragm. Stops, affricates, and occasionally fricatives may occur as ejectives. All ejectives are voiceless, or at least transition from voiced to voiceless.
Implosives, which are glottalic ingressive. Here the glottis moves downward, but the lungs may be used simultaneously (to provide voicing), and in some languages no air may actually flow into the mouth. Implosive stops are not uncommon, but implosive affricates and fricatives are rare. Voiceless implosives are also rare.
Clicks, which are lingual ingressive. Here the back of the tongue is used to create a vacuum in the mouth, causing air to rush in when the forward occlusion (tongue or lips) is released. Clicks may be oral or nasal, stop or affricate, central or lateral, voiced or voiceless. They are extremely rare in normal words outside Southern Africa. However, English has a click in its "tsk tsk" (or "tut tut") sound, and another is often used to say "giddy up" to a horse.
Combinations of these, in some analyses, in a single consonant: linguo-pulmonic and linguo-glottalic (ejective) consonants, which are clicks released into either a pulmonic or ejective stop/fricative.
See also
Index of phonetics articles
Articulatory phonetics
Place of articulation
Basis of articulation
Diction
Phonation
Airstream mechanism
Relative articulation
Nonexplosive stop
Vocal tract
Human voice
Source-filter model of speech production
Bibliography
External links
Movie clip showing the human articulators in action
Interactive place and manner of articulation
Interactive Flash website for American English, Spanish and German sounds
Phonetics
Articulatory phonetics
Articles containing video clips |
19943 | https://en.wikipedia.org/wiki/Mostaganem%20Province | Mostaganem Province | Mostaganem () is a province (wilaya) of Algeria. Its capital is Mostaganem.
Geography
The land relief in Mostaganem Province can be divided into four regions: the Dahra Range to the east, the Mostaganem Plateau to the south, the Chelif River valley which separates the two highland regions, and the plains on the province's southern border which lie next to the marshes of the Macta.
The Mostaganem Plateau covers eleven municipalities in the southern part of the province: Mostaganem, Ain Tedles, Sour, Bouguirat, Sirat, Souaflia, Mesra, Ain Sidi Cherif, Mansourah, Touahria and Sayada. It is a semi-arid and sandy plateau, in the shape of a triangle and bounded to the north by the Chelif River. It receives 350 mm of rainfall per year.
During French colonization, viticulture was introduced on the plateau. After the country's independence, it was replaced by irrigated market gardening and the cultivation of citrus fruits and cereals. However, in certain sectors east of Mostaganem, the removal of the vineyards led to the formation of small dunes as soil movement resumed.
History
In 1984 Relizane Province was carved out of its territory.
Administrative divisions
The province is divided into 10 districts (daïras), which are further divided into 32 communes or municipalities.
Districts
Achacha
Aïn Nouïssy
Aïn Tédelès
Bouguirat
Hassi Mamèche
Kheïr Eddine
Mesra
Mostaganem
Sidi Ali
Sidi Lakhdar
Communes
Achacha (Achaacha)
Aïn Boudinar
Aïn Nouïssy
Aïn Sidi Chérif
Aïn Tédelès (Ain Tedles)
Benabdelmalek Ramdane (Abdelmalek Ramdane)
Bouguirat
El Hassaine
Fornaka
Hadjadj
Hassi Mamèche (Hasi Mameche)
Khadra
Kheïr Eddine (Kheiredine)
Mansourah
Mazagran (Mazagrain, Mezghrane)
Mesra
Mostaganem
Nékmaria
Oued El Kheïr
Ouled Boughalem
Ouled Malah (Ouled Maalef)
Safsaf (Saf Saf)
Sayada
Sidi Ali
Sidi Bellater (Sidi Belatar)
Sidi Lakhdar (Sidi Lakhdaara)
Sirat
Souaflia
Sour
Stidia
Tazgait
Touahria
References
Provinces of Algeria
States and territories established in 1974 |
19945 | https://en.wikipedia.org/wiki/Motherboard | Motherboard | A motherboard (also called mainboard, main circuit board, or mobo) is the main printed circuit board (PCB) in general-purpose computers and other expandable systems. It holds and allows communication between many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard usually contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general use.
Motherboard means specifically a PCB with expansion capabilities. As the name suggests, this board is often referred to as the "mother" of all components attached to it, which often include peripherals, interface cards, and daughterboards: sound cards, video cards, network cards, host bus adapters, TV tuner cards, IEEE 1394 cards; and a variety of other custom components.
By contrast, the term mainboard describes a device with a single board and no additional expansions or capability, such as the controller boards in laser printers, television sets, washing machines, mobile phones, and other embedded systems with limited expansion abilities.
History
Prior to the invention of the microprocessor, the digital computer consisted of multiple printed circuit boards in a card-cage case with components connected by a backplane, a set of interconnected sockets. In very old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit (CPU), memory, and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system.
The most popular computers of the 1980s such as the Apple II and IBM PC had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.
During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. In the late 1980s, personal computer motherboards began to include single ICs (also called Super I/O chips) capable of supporting a set of low-speed peripherals: PS/2 keyboard and mouse, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions, or for higher speeds; those systems often had fewer embedded components.
Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century (like the tablet computer and the netbook). Memory, processors, network controllers, power source, and storage would be integrated into some systems.
Design
A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it also contains the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables; in modern microcomputers, it is increasingly common to integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include:
CPU sockets (or CPU slots) in which one or more microprocessors may be installed. In the case of CPUs in ball grid array packages, such as the VIA Nano and the Goldmont Plus, the CPU is directly soldered to the motherboard.
Memory slots into which the system's main memory is installed, typically in the form of DIMM modules containing DRAM chips, which may be DDR3, DDR4, or DDR5
The chipset which forms an interface between the CPU, main memory, and peripheral buses
Non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
The clock generator which produces the system clock signal to synchronize the various components
Slots for expansion cards (the interface to the system via the buses supported by the chipset)
Power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards. Some graphics cards (e.g. GeForce 8 and Radeon R600) require more power than the motherboard can provide, so dedicated connectors have been introduced to attach them directly to the power supply
Connectors for hard disk drives, optical disc drives, or solid-state drives, typically SATA and NVMe now.
Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mouse devices and keyboards. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.
Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.
Form factor
Motherboards are produced in a variety of sizes and shapes called form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. Most desktop computer motherboards use the ATX standard form factor, even those found in Macintosh and Sun computers, which have not been built from commodity components. A case's motherboard and power supply unit (PSU) form factors must all match, though some smaller form factor motherboards of the same family will fit larger cases. For example, an ATX case will usually accommodate a microATX motherboard. Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard.
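The family-compatibility rule above (a case sized for a larger board of a family usually accepts its smaller boards, thanks to a shared mounting-hole pattern) can be pictured as a simple ordered check. This is only an illustrative sketch; the family ordering below is an assumption for demonstration, not an exhaustive standard:

```python
# Illustrative assumption: one common board family, smallest to largest.
ATX_FAMILY = ["Mini-ITX", "microATX", "ATX", "EATX"]

def case_accepts(case_size: str, board_size: str) -> bool:
    """True if the board is the case's size or smaller within the family."""
    return ATX_FAMILY.index(board_size) <= ATX_FAMILY.index(case_size)

print(case_accepts("ATX", "microATX"))  # an ATX case fits a microATX board
print(case_accepts("microATX", "ATX"))  # but not the reverse
```

In practice compatibility also depends on standoff positions, rear I/O shield cutouts, and clearance, so a real check would need more than board size.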
CPU sockets
A CPU (central processing unit) socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found on the motherboards of most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed.
Integrated peripherals
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers.
Disk controllers for SATA drives, and historical PATA drives.
Historical floppy-disk controller
Integrated graphics controller supporting 2D and 3D graphics, with VGA, DVI, HDMI, DisplayPort and TV output
Integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
Ethernet network controller for connection to a LAN and for Internet access
USB controller
Wireless network interface controller
Bluetooth controller
Temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components.
Peripheral card slots
A typical motherboard will have a different number of connections depending on its standard and form factor.
A standard, modern ATX motherboard will typically have two or three PCI-Express x16 connections for graphics cards, one or two legacy PCI slots for various expansion cards, and one or two PCI-E x1 slots (which have superseded PCI). A standard EATX motherboard will have two to four PCI-E x16 connections for graphics cards, and a varying number of PCI and PCI-E x1 slots. It can sometimes also have a PCI-E x4 slot (this varies between brands and models).
Some motherboards have two or more PCI-E x16 slots, to allow more than two monitors without special hardware, or to use special multi-GPU technologies: SLI (for Nvidia) and CrossFire (for AMD). These allow two to four graphics cards to be linked together for better performance in intensive graphical computing tasks, such as gaming and video editing.
In newer motherboards, M.2 slots are provided for SSDs and/or wireless network interface controllers.
Temperature and reliability
Motherboards are generally air cooled with heat sinks often mounted on larger chips in modern motherboards. Insufficient or improper cooling can cause damage to the internal components of the computer, or cause it to crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional computer fans and integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Alternatively, computers can use a water cooling system instead of many fans.
Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as a careful layout of the motherboard and other components to allow for heat sink placement.
A 2003 study found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation, an issue termed capacitor plague.
Modern motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at their maximum rated temperature, their expected design life roughly doubles for every 10 °C below this, and at typical operating temperatures a lifetime of 3 to 4 years can be expected. However, many manufacturers deliver substandard capacitors, which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures around the CPU socket exacerbate this problem. With top blowers, the motherboard components can be kept cooler, effectively doubling the motherboard lifetime.
Mid-range and high-end motherboards, on the other hand, use solid capacitors exclusively. For every 10 °C less, their average lifespan is multiplied approximately by three, resulting in a roughly six-times higher life expectancy at typical operating temperatures. These capacitors may be rated for 5000, 10000 or 12000 hours of operation at their rated temperature, extending the projected lifetime in comparison with standard solid capacitors.
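The two temperature rules of thumb above (life roughly doubling per 10 °C below the rated temperature for electrolytics, and roughly tripling for solid capacitors) amount to simple exponential estimates. A minimal sketch, in which the 105 °C rating and the rated-hours figures are illustrative assumptions rather than values from any particular datasheet:

```python
def estimated_life_hours(rated_hours: float, rated_temp_c: float,
                         operating_temp_c: float, factor_per_10c: float) -> float:
    """Rule-of-thumb estimate: life multiplies by `factor_per_10c`
    for every 10 degrees C the part runs below its rated temperature."""
    return rated_hours * factor_per_10c ** ((rated_temp_c - operating_temp_c) / 10.0)

# Electrolytic: assume 2000 h rated at 105 C, life doubles per 10 C cooler.
print(estimated_life_hours(2000, 105, 65, 2))  # 32000.0 hours
# Solid: assume 5000 h rated at 105 C, life roughly triples per 10 C cooler.
print(estimated_life_hours(5000, 105, 65, 3))  # 405000.0 hours
```

The same 40 °C margin thus stretches a solid capacitor's projected life far more than an electrolytic's, which is why board temperature matters so much for longevity.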
In desktop PCs and notebook computers, the motherboard cooling and monitoring solutions are usually based on a Super I/O chip or an embedded controller.
Bootstrapping using the Basic Input/Output System
Motherboards contain a ROM (in later systems an EPROM, EEPROM, or NOR flash) to initialize hardware devices and load an operating system from a peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips mounted in sockets on the motherboard. At power-up, the central processing unit would load its program counter with the address of the boot ROM and start executing instructions from it. These instructions initialized and tested the system hardware, displayed system information on the screen, performed RAM checks, and then loaded an operating system from a peripheral device. If none was available, the computer would perform tasks from other ROM stores or display an error message, depending on the model and design of the computer. For example, both the Apple II and the original IBM PC had Cassette BASIC (ROM BASIC) and would start it if no operating system could be loaded from the floppy disk or hard disk.
Most modern motherboard designs use a BIOS, stored in an EEPROM or NOR flash chip soldered to or socketed on the motherboard, to boot an operating system. When the computer is powered on, the BIOS firmware tests and configures memory, circuitry, and peripherals. This Power-On Self Test (POST) may include testing some of the following things:
Video card
Expansion cards inserted into slots, such as conventional PCI and PCI Express
Historical floppy drive
Temperatures, voltages, and fan speeds for hardware monitoring
CMOS memory used to store BIOS configuration
Keyboard and mouse
Sound card
Network adapter
Optical drives: CD-ROM or DVD-ROM
Hard disk drive and solid state drive
Security devices, such as a fingerprint reader
USB devices, such as a USB mass storage device
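The POST checklist above can be pictured as an ordered sequence of self-tests whose failures are collected and reported before boot continues. This is a hypothetical Python sketch of the control flow only, not real firmware code; the check names and pass/fail results are illustrative:

```python
# Hypothetical sketch of POST-style control flow (not real firmware):
# run each self-test in order and collect the names of any that fail.
def power_on_self_test(checks):
    """checks: list of (name, test_fn) pairs; returns names of failed tests."""
    failures = []
    for name, test_fn in checks:
        if not test_fn():
            failures.append(name)
    return failures

checks = [
    ("memory", lambda: True),     # e.g. a RAM pattern test that passed
    ("video card", lambda: True),
    ("keyboard", lambda: False),  # simulate an unresponsive keyboard
]
print(power_on_self_test(checks))  # ['keyboard']
```

Real firmware would additionally report failures through beep codes or on-screen messages and may halt on critical errors rather than continuing through the list.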
Many motherboards now use a successor to BIOS called UEFI. It became popular after Microsoft began requiring it for a system to be certified to run Windows 8.
See also
Peripheral Component Interconnect (PCI)
PCI-X
PCI Express (PCIe)
Accelerated Graphics Port (AGP)
M.2
U.2
Computer case screws
CMOS battery
Expansion card
List of computer hardware manufacturers
Basic Input/Output System (BIOS)
Unified Extensible Firmware Interface (UEFI)
Overclocking
Single-board computer
Switched-mode power supply applications
Symmetric multiprocessing
Chip creep
References
External links
The Making of a Motherboard: ECS Factory Tour
The Making of a Motherboard: Gigabyte Factory Tour
Front Panel I/O Connectivity Design Guide - v1.3 (pdf file)
Motherboard
IBM PC compatibles |
19947 | https://en.wikipedia.org/wiki/Mannerism | Mannerism | Mannerism, also known as Late Renaissance, is a style in European art that emerged in the later years of the Italian High Renaissance around 1520, spreading by about 1530 and lasting until about the end of the 16th century in Italy, when the Baroque style largely replaced it. Northern Mannerism continued into the early 17th century.
Stylistically, Mannerism encompasses a variety of approaches influenced by, and reacting to, the harmonious ideals associated with artists such as Leonardo da Vinci, Raphael, and early Michelangelo. Where High Renaissance art emphasizes proportion, balance, and ideal beauty, Mannerism exaggerates such qualities, often resulting in compositions that are asymmetrical or unnaturally elegant. Notable for its artificial (as opposed to naturalistic) qualities, this artistic style privileges compositional tension and instability rather than the balance and clarity of earlier Renaissance painting. Mannerism in literature and music is notable for its highly florid style and intellectual sophistication.
The definition of Mannerism and the phases within it continues to be a subject of debate among art historians. For example, some scholars have applied the label to certain early modern forms of literature (especially poetry) and music of the 16th and 17th centuries. The term is also used to refer to some late Gothic painters working in northern Europe from about 1500 to 1530, especially the Antwerp Mannerists—a group unrelated to the Italian movement. Mannerism has also been applied by analogy to the Silver Age of Latin literature.
Nomenclature
The word "Mannerism" derives from the Italian maniera, meaning "style" or "manner". Like the English word "style", maniera can either indicate a specific type of style (a beautiful style, an abrasive style) or indicate an absolute that needs no qualification (someone "has style"). In the second edition of his Lives of the Most Excellent Painters, Sculptors, and Architects (1568), Giorgio Vasari used maniera in three different contexts: to discuss an artist's manner or method of working; to describe a personal or group style, such as the term maniera greca to refer to the medieval Italo-Byzantine style or simply to the maniera of Michelangelo; and to affirm a positive judgment of artistic quality. Vasari was also a Mannerist artist, and he described the period in which he worked as "la maniera moderna", or the "modern style". James V. Mirollo describes how "bella maniera" poets attempted to surpass in virtuosity the sonnets of Petrarch. This notion of "bella maniera" suggests that artists who were thus inspired looked to copying and bettering their predecessors, rather than confronting nature directly. In essence, "bella maniera" utilized the best from a number of source materials, synthesizing it into something new.
As a stylistic label, "Mannerism" is not easily defined. It was used by Swiss historian Jacob Burckhardt and popularized by German art historians in the early 20th century to categorize the seemingly uncategorizable art of the Italian 16th century – art that was no longer found to exhibit the harmonious and rational approaches associated with the High Renaissance. "High Renaissance" connoted a period distinguished by harmony, grandeur and the revival of classical antiquity. The term "Mannerist" was redefined in 1967 by John Shearman following the exhibition of Mannerist paintings organised by Fritz Grossmann at Manchester City Art Gallery in 1965. The label "Mannerism" was used during the 16th century to comment on social behaviour and to convey a refined virtuoso quality or to signify a certain technique. However, for later writers, such as the 17th-century Gian Pietro Bellori, la maniera was a derogatory term for the perceived decline of art after Raphael, especially in the 1530s and 1540s. From the late 19th century on, art historians have commonly used the term to describe art that follows Renaissance classicism and precedes the Baroque.
Yet historians differ as to whether Mannerism is a style, a movement, or a period; and while the term remains controversial it is still commonly used to identify European art and culture of the 16th century.
Origin and development
By the end of the High Renaissance, young artists experienced a crisis: it seemed that everything that could be achieved had already been achieved. No more difficulties, technical or otherwise, remained to be solved. The detailed knowledge of anatomy, light, physiognomy and the way in which humans register emotion in expression and gesture, the innovative use of the human form in figurative composition, the use of the subtle gradation of tone, all had reached near perfection. The young artists needed to find a new goal, and they sought new approaches. At this point Mannerism started to emerge. The new style developed between 1510 and 1520 either in Florence, or in Rome, or in both cities simultaneously.
Origins and role models
This period has been described as a "natural extension" of the art of Andrea del Sarto, Michelangelo, and Raphael. Michelangelo developed his own style at an early age, a deeply original one which was greatly admired at first, then often copied and imitated by other artists of the era. One of the qualities most admired by his contemporaries was his terribilità, a sense of awe-inspiring grandeur, and subsequent artists attempted to imitate it. Other artists learned Michelangelo's impassioned and highly personal style by copying the works of the master, a standard way that students learned to paint and sculpt. His Sistine Chapel ceiling provided examples for them to follow, in particular his representation of collected figures often called ignudi and of the Libyan Sibyl, his vestibule to the Laurentian Library, the figures on his Medici tombs, and above all his Last Judgment. The later Michelangelo was one of the great role models of Mannerism. Young artists broke into his house and stole drawings from him. In his book Lives of the Most Eminent Painters, Sculptors, and Architects, Giorgio Vasari noted that Michelangelo stated once: "Those who are followers can never pass by whom they follow".
The competitive spirit
The competitive spirit was cultivated by patrons who encouraged sponsored artists to emphasize virtuosic technique and to compete with one another for commissions. It drove artists to look for new approaches and dramatically illuminated scenes, elaborate clothes and compositions, elongated proportions, highly stylized poses, and a lack of clear perspective. Leonardo da Vinci and Michelangelo were each given a commission by Gonfaloniere Piero Soderini to decorate a wall in the Hall of Five Hundred in Florence. These two artists were set to paint side by side and compete against each other, fueling the incentive to be as innovative as possible.
Early mannerism
The early Mannerists in Florence—especially the students of Andrea del Sarto such as Jacopo da Pontormo and Rosso Fiorentino—are notable for elongated forms, precariously balanced poses, a collapsed perspective, irrational settings, and theatrical lighting. Parmigianino (a student of Correggio) and Giulio Romano (Raphael's head assistant) were moving in similarly stylized aesthetic directions in Rome. These artists had matured under the influence of the High Renaissance, and their style has been characterized as a reaction to or exaggerated extension of it. Instead of studying nature directly, younger artists began studying Hellenistic sculpture and paintings of masters past. Therefore, this style is often identified as "anti-classical", yet at the time it was considered a natural progression from the High Renaissance. The earliest experimental phase of Mannerism, known for its "anti-classical" forms, lasted until about 1540 or 1550. Marcia B. Hall, professor of art history at Temple University, notes in her book After Raphael that Raphael's premature death marked the beginning of Mannerism in Rome.
In past analyses, it has been noted that mannerism arose in the early 16th century contemporaneously with a number of other social, scientific, religious and political movements such as the Copernican heliocentrism, the Sack of Rome in 1527, and the Protestant Reformation's increasing challenge to the power of the Catholic Church. Because of this, the style's elongated forms and distorted forms were once interpreted as a reaction to the idealized compositions prevalent in High Renaissance art. This explanation for the radical stylistic shift c. 1520 has fallen out of scholarly favor, though early Mannerist art is still sharply contrasted with High Renaissance conventions; the accessibility and balance achieved by Raphael's School of Athens no longer seemed to interest young artists.
High maniera
The second period of Mannerism is commonly differentiated from the earlier, so-called "anti-classical" phase.
Subsequent mannerists stressed intellectual conceits and artistic virtuosity, features that have led later critics to accuse them of working in an unnatural and affected "manner" (maniera). Maniera artists looked to their older contemporary Michelangelo as their principal model; theirs was an art imitating art, rather than an art imitating nature. Art historian Sydney Joseph Freedberg argues that the intellectualizing aspect of maniera art involves expecting its audience to notice and appreciate this visual reference—a familiar figure in an unfamiliar setting enclosed between "unseen, but felt, quotation marks". The height of artifice is the Maniera painter's penchant for deliberately misappropriating a quotation. Agnolo Bronzino and Giorgio Vasari exemplify this strain of Maniera that lasted from about 1530 to 1580. Based largely at courts and in intellectual circles around Europe, Maniera art couples exaggerated elegance with exquisite attention to surface and detail: porcelain-skinned figures recline in an even, tempered light, acknowledging the viewer with a cool glance, if they make eye contact at all. The Maniera subject rarely displays much emotion, and for this reason works exemplifying this trend are often called 'cold' or 'aloof.' This is typical of the so-called "stylish style" or Maniera in its maturity.
Spread of Mannerism
The cities Rome, Florence, and Mantua were Mannerist centers in Italy. Venetian painting pursued a different course, represented by Titian in his long career. A number of the earliest Mannerist artists who had been working in Rome during the 1520s fled the city after the Sack of Rome in 1527. As they spread out across the continent in search of employment, their style was disseminated throughout Italy and Northern Europe. The result was the first international artistic style since the Gothic. Other parts of Northern Europe did not have the advantage of such direct contact with Italian artists, but the Mannerist style made its presence felt through prints and illustrated books. European rulers, among others, purchased Italian works, while northern European artists continued to travel to Italy, helping to spread the Mannerist style. Italian artists working in the North gave birth to a movement known as Northern Mannerism. Francis I of France, for example, was presented with Bronzino's Venus, Cupid, Folly and Time. The style waned in Italy after 1580, as a new generation of artists, including the Carracci brothers, Caravaggio and Cigoli, revived naturalism. Walter Friedlaender identified this period as "anti-mannerism", just as the early Mannerists were "anti-classical" in their reaction away from the aesthetic values of the High Renaissance; today the Carracci brothers and Caravaggio are agreed to have begun the transition to Baroque-style painting, which was dominant by 1600.
Outside of Italy, however, Mannerism continued into the 17th century. In France, where Rosso traveled to work for the court at Fontainebleau, it is known as the "Henry II style" and had a particular impact on architecture. Other important continental centers of Northern Mannerism include the court of Rudolf II in Prague, as well as Haarlem and Antwerp. Mannerism as a stylistic category is less frequently applied to English visual and decorative arts, where native labels such as "Elizabethan" and "Jacobean" are more commonly applied. Seventeenth-century Artisan Mannerism is one exception, applied to architecture that relies on pattern books rather than on existing precedents in Continental Europe.
Of particular note is the Flemish influence at Fontainebleau that combined the eroticism of the French style with an early version of the vanitas tradition that would dominate seventeenth-century Dutch and Flemish painting. Prevalent at this time was the pittore vago, a description of painters from the north who entered the workshops in France and Italy to create a truly international style.
Sculpture
As in painting, early Italian Mannerist sculpture was very largely an attempt to find an original style that would top the achievement of the High Renaissance, which in sculpture essentially meant Michelangelo, and much of the struggle to achieve this was played out in commissions to fill other places in the Piazza della Signoria in Florence, next to Michelangelo's David. Baccio Bandinelli took over the project of Hercules and Cacus from the master himself, but it was little more popular then than it is now, and maliciously compared by Benvenuto Cellini to "a sack of melons", though it had a long-lasting effect in apparently introducing relief panels on the pedestal of statues. Like other works of his and other Mannerists, it removes far more of the original block than Michelangelo would have done. Cellini's bronze Perseus with the head of Medusa is certainly a masterpiece, designed with eight angles of view, another Mannerist characteristic, and artificially stylized in comparison with the Davids of Michelangelo and Donatello. Originally a goldsmith, his famous gold and enamel Salt Cellar (1543) was his first sculpture, and shows his talent at its best.
Small bronze figures for collector's cabinets, often mythological subjects with nudes, were a popular Renaissance form at which Giambologna, originally Flemish but based in Florence, excelled in the later part of the century. He also created life-size sculptures, of which two entered the collection in the Piazza della Signoria. He and his followers devised elegant elongated examples of the figura serpentinata, often of two intertwined figures, that were interesting from all angles.
Early theorists
Giorgio Vasari
Giorgio Vasari's opinions about the art of painting emerge in the praise he bestows on fellow artists in his multi-volume Lives of the Artists: he believed that excellence in painting demanded refinement, richness of invention (invenzione), expressed through virtuoso technique (maniera), and wit and study that appeared in the finished work, all criteria that emphasized the artist's intellect and the patron's sensibility. The artist was now no longer just a trained member of a local Guild of St Luke. Now he took his place at court alongside scholars, poets, and humanists, in a climate that fostered an appreciation for elegance and complexity. The coat-of-arms of Vasari's Medici patrons appears at the top of his portrait, quite as if it were the artist's own. The framing of the woodcut image of Vasari's Lives would be called "Jacobean" in an English-speaking milieu. In it, Michelangelo's Medici tombs inspire the anti-architectural "architectural" features at the top, the papery pierced frame, the satyr nudes at the base. As a mere frame it is extravagant: Mannerist, in short.
Gian Paolo Lomazzo
Another literary figure from the period is Gian Paolo Lomazzo, who produced two works—one practical and one metaphysical—that helped define the Mannerist artist's self-conscious relation to his art. His Trattato dell'arte della pittura, scoltura et architettura (Milan, 1584) is in part a guide to contemporary concepts of decorum, which the Renaissance inherited in part from Antiquity but Mannerism elaborated upon. Lomazzo's systematic codification of aesthetics, which exemplifies the more formalized and academic approaches of the later 16th century, emphasized a consonance between the functions of interiors and the kinds of painted and sculpted decors that would be suitable. Iconography, often convoluted and abstruse, is a more prominent element in the Mannerist styles. His less practical and more metaphysical Idea del tempio della pittura (The ideal temple of painting, Milan, 1590) offers a description along the lines of the "four temperaments" theory of human nature and personality, defining the role of individuality in judgment and artistic invention.
Characteristics of artworks created during the Mannerist period
Mannerism was an anti-classical movement which differed greatly from the aesthetic ideologies of the Renaissance. Though Mannerism was initially received positively on the basis of Vasari's writings, it was later regarded in a negative light because it was viewed solely as "an alteration of natural truth and a trite repetition of natural formulas." As an artistic movement, Mannerism involves many characteristics that are unique and specific to its experimentation with how art is perceived. Below is a list of specific characteristics that Mannerist artists would employ in their artworks.
Elongation of figures: often Mannerist work featured the elongation of the human figure – occasionally this contributed to the bizarre imagery of some Mannerist art.
Distortion of perspective: in paintings, the distortion of perspective explored the ideals for creating a perfect space. However, the pursuit of perfection sometimes led to the creation of unusual imagery. One way in which distortion was explored was through the technique of foreshortening. At times, when extreme distortion was utilized, it would render the image nearly impossible to decipher.
Black backgrounds: Mannerist artists often utilized flat black backgrounds to present a full contrast of contours in order to create dramatic scenes. Black backgrounds also contributed to creating a sense of fantasy within the subject matter.
Use of darkness and light: many Mannerists were interested in capturing the essence of the night sky through intentional illumination, often creating a sense of fantasy in their scenes. Special attention was paid to torchlight and moonlight to create dramatic effects.
Sculptural forms: Mannerism was greatly influenced by sculpture, which gained popularity in the sixteenth century. As a result, Mannerist artists often based their depictions of the human body on sculptures and prints. This allowed Mannerist artists to focus on creating dimension.
Clarity of line: the attention paid to clean outlines of figures was prominent within Mannerism and differed largely from the Baroque and High Renaissance. The outlines of figures often allowed for more attention to detail.
Composition and space: Mannerist artists rejected the ideals of the Renaissance, notably the technique of one-point perspective. Instead, there was an emphasis on atmospheric effects and distortion of perspective. The use of space in Mannerist works instead privileged crowded compositions with various forms and figures or scant compositions with emphasis on black backgrounds.
Mannerist movement: the interest in the study of human movement often led Mannerist artists to render a unique type of movement linked to serpentine positions. These poses often seem to anticipate future movement because of the figures' unstable poise. In addition, this technique reflects the artists' experimentation with form.
Painted frames: in some Mannerist works, painted frames were utilized to blend in with the background of paintings and at times, contribute to the overall composition of the artwork. This is at times prevalent when there is special attention paid to ornate detailing.
Atmospheric effects: many Mannerists utilized the technique of sfumato, known as "the rendering of soft and hazy contours or surfaces", in their paintings to render streaming light.
Mannerist colour: a unique aspect of Mannerism was that, in addition to the experimentation with form, composition, and light, much of the same curiosity was applied to color. Many artworks toyed with pure and intense hues of blue, green, pink, and yellow, which at times detract from the overall design of artworks and at other times complement it. Additionally, when rendering skin tones, artists often concentrated on creating overly creamy, light complexions, frequently with blue undertones.
Mannerist artists and examples of their works
Jacopo da Pontormo
Jacopo da Pontormo's work is one of the most important contributions to Mannerism. He often drew his subject matter from religious narratives; heavily influenced by the works of Michelangelo, he frequently alludes to or uses sculptural forms as models for his compositions. A well-known element of his work is the rendering of gazes by various figures which often pierce out at the viewer in various directions. Dedicated to his work, Pontormo often expressed anxiety about its quality and was known to work slowly and methodically. His legacy is highly regarded, as he influenced artists such as Agnolo Bronzino and the aesthetic ideals of late Mannerism.
Pontormo's Joseph in Egypt, painted in 1517, portrays a running narrative of four Biblical scenes in which Joseph reconnects with his family. On the left side of the composition, Pontormo depicts Joseph introducing his family to the Pharaoh of Egypt. On the right, Joseph rides on a rolling bench, as cherubs fill the composition around him, along with other figures and large rocks on a path in the distance. Above these scenes is a spiral staircase, up which Joseph guides one of his sons to their mother at the top. The final scene, on the right, depicts the dying Jacob as his sons watch nearby.
Pontormo's Joseph in Egypt features many Mannerist elements. One element is the utilization of incongruous colors, such as the various shades of pink and blue that make up the majority of the canvas. An additional element of Mannerism is the incoherent handling of time in the story of Joseph through its various scenes and use of space. Through the inclusion of the four different narratives, Pontormo creates a cluttered composition and an overall sense of busyness.
Rosso Fiorentino and the School of Fontainebleau
Rosso Fiorentino, who had been a fellow pupil of Pontormo in the studio of Andrea del Sarto, in 1530 brought Florentine Mannerism to Fontainebleau, where he became one of the founders of French 16th-century Mannerism, popularly known as the School of Fontainebleau.
The examples of a rich and hectic decorative style at Fontainebleau further disseminated the Italian style through the medium of engravings to Antwerp, and from there throughout Northern Europe, from London to Poland. Mannerist design was extended to luxury goods like silver and carved furniture. A sense of tense, controlled emotion expressed in elaborate symbolism and allegory, and an ideal of female beauty characterized by elongated proportions are features of this style.
Agnolo Bronzino
Agnolo Bronzino was a pupil of Pontormo, whose style was so influential that the attribution of many artworks between the two remains uncertain. During his career, Bronzino also collaborated with Vasari as a set designer for the production "Comedy of Magicians", and he painted many portraits. Bronzino's work was sought after, and he enjoyed great success when he became a court painter for the Medici family in 1539. A unique Mannerist characteristic of Bronzino's work was the rendering of milky complexions.
In the painting Venus, Cupid, Folly and Time, Bronzino portrays an erotic scene that leaves the viewer with more questions than answers. In the foreground, Cupid and Venus are nearly engaged in a kiss, but pause as if caught in the act. Above the pair are mythological figures: Father Time on the right, who pulls a curtain to reveal the pair, and the representation of the goddess of the night on the left. The composition also involves a grouping of masks, a hybrid creature composed of the features of a girl and a serpent, and a man depicted in agonizing pain. Many interpretations have been proposed for the painting, such as that it conveys the dangers of syphilis, or that it functioned as a court game.
Mannerist portraits by Bronzino are distinguished by a serene elegance and meticulous attention to detail. As a result, Bronzino's sitters have been said to project an aloofness and marked emotional distance from the viewer. There is also a virtuosic concentration on capturing the precise pattern and sheen of rich textiles. Specifically, within Venus, Cupid, Folly and Time, Bronzino utilizes the tactics of Mannerist movement, attention to detail, color, and sculptural forms. Evidence of Mannerist movement is apparent in the awkward movements of Cupid and Venus, as they contort their bodies to partly embrace. In particular, Bronzino paints the complexions of the many figures a perfect porcelain white, with a smooth effacement of musculature that references the smoothness of sculpture.
Alessandro Allori
Alessandro Allori's (1535–1607) Susanna and the Elders (below) is distinguished by latent eroticism and consciously brilliant still life detail, in a crowded, contorted composition.
Jacopo Tintoretto
Jacopo Tintoretto is known for his vastly different contributions to Venetian painting after the legacy of Titian. His work, which differed greatly from that of his predecessors, was criticized by Vasari for its "fantastical, extravagant, bizarre style." Tintoretto adopted Mannerist elements that distanced him from the classical notion of Venetian painting, often creating artworks that contained elements of fantasy while retaining naturalism. Other unique elements of Tintoretto's work include his attention to color through the regular use of rough brushstrokes and his experimentation with pigment to create illusion.
An artwork associated with Mannerist characteristics is the Last Supper, commissioned by Michele Alabardi for San Giorgio Maggiore in 1591. In Tintoretto's Last Supper, the scene is portrayed from the angle of a group of people along the right side of the composition. On the left side of the painting, Christ and the Apostles occupy one side of the table and single out Judas. Within the dark space there are few sources of light; among them are Christ's halo and the torch hanging above the table.
In its distinct composition, the Last Supper portrays Mannerist characteristics. One characteristic that Tintoretto utilizes is a black background. Though the painting gives some indication of an interior space through the use of perspective, the edges of the composition are mostly shrouded in shadow, which provides drama for the central scene of the Last Supper. Additionally, Tintoretto utilizes spotlight effects, especially with the halo of Christ and the hanging torch above the table. A third Mannerist characteristic that Tintoretto employs is the atmospheric effect of figures shaped from smoke, which float about the composition.
El Greco
El Greco attempted to express religious emotion with exaggerated traits. After the realistic depiction of the human form and the mastery of perspective achieved in the High Renaissance, some artists started to deliberately distort proportions in disjointed, irrational space for emotional and artistic effect. El Greco nonetheless remains a deeply original artist, characterized by modern scholars as so individual that he belongs to no conventional school. Key aspects of Mannerism in El Greco include the jarring "acid" palette, elongated and tortured anatomy, irrational perspective and light, and obscure and troubling iconography. El Greco's style was a culmination of unique developments based on his Greek heritage and travels to Spain and Italy.
El Greco's work reflects a multitude of styles including Byzantine elements as well as the influence of Caravaggio and Parmigianino in addition to Venetian coloring. An important element is his attention to color as he regarded it to be one of the most important aspects of his painting. Over the course of his career, El Greco's work remained in high demand as he completed important commissions in locations such as the Colegio de la Encarnación de Madrid.
El Greco's unique painting style and connection to Mannerist characteristics is especially prevalent in the work Laocoön. Painted in 1610, it depicts the mythological tale of Laocoön, who warned the Trojans about the danger of the wooden horse which was presented by the Greeks as a peace offering to the goddess Minerva. As a result, Minerva retaliated by summoning serpents to kill Laocoön and his two sons. Instead of being set against the backdrop of Troy, El Greco situated the scene near Toledo, Spain, in order to "universalize the story by drawing out its relevance for the contemporary world."
El Greco's unique style in Laocoön exemplifies many Mannerist characteristics. Prevalent is the elongation of many of the human forms throughout the composition in conjunction with their serpentine movement, which provides a sense of elegance. An additional element of Mannerist style is the atmospheric effects in which El Greco creates a hazy sky and blurring of landscape in the background.
Benvenuto Cellini
Benvenuto Cellini created the Cellini Salt Cellar of gold and enamel in 1540 featuring Poseidon and Amphitrite (water and earth) placed in uncomfortable positions and with elongated proportions. It is considered a masterpiece of Mannerist sculpture.
Lavinia Fontana
Lavinia Fontana (1552–1614) was a Mannerist portraitist often acknowledged to be the first female career artist in Western Europe. She was appointed Portraitist in Ordinary at the Vatican. Her style is characterized as being influenced by the Carracci family of painters and by the colors of the Venetian School. She is known for her portraits of noblewomen, and for her depiction of nude figures, which was unusual for a woman of her time.
Taddeo Zuccaro (or Zuccari)
Taddeo Zuccaro was born in Sant'Angelo in Vado, near Urbino, the son of Ottaviano Zuccari, an almost unknown painter. His brother Federico, born around 1540, was also a painter and architect.
Federico Zuccaro (or Zuccari)
Federico Zuccaro’s documented career as a painter began in 1550, when he moved to Rome to work under Taddeo, his elder brother. He went on to complete decorations for Pius IV and helped complete the fresco decorations at the Villa Farnese at Caprarola. Between 1563 and 1565, he was active in Venice with the Grimani family of Santa Maria Formosa. During his Venetian period, he traveled alongside Palladio in Friuli.
Joachim Wtewael
Joachim Wtewael (1566–1638) continued to paint in a Northern Mannerist style until the end of his life, ignoring the arrival of Baroque art, which makes him perhaps the last significant Mannerist artist still to be working. His subjects included large scenes with still life in the manner of Pieter Aertsen and mythological scenes; many were small cabinet paintings beautifully executed on copper, most featuring nudity.
Giuseppe Arcimboldo
Giuseppe Arcimboldo is best known for his artworks that incorporate still life and portraiture. His style is viewed as Mannerist for its assemblages of fruits and vegetables, compositions that can be viewed in various ways, both right side up and upside down. Arcimboldo's artworks also relate to Mannerism in the humor they convey, as they do not hold the same degree of seriousness as Renaissance works. Stylistically, Arcimboldo's paintings are known for their attention to nature and their concept of a "monstrous appearance."
One of Arcimboldo's paintings which contains various Mannerist characteristics is Vertumnus. Painted against a black background is a portrait of Rudolf II, whose body is composed of various vegetables, flowers, and fruits. The joke of the painting, a commentary on power, is that Emperor Rudolf II hides a dark inner self behind his public image. On the other hand, the serious tone of the painting foreshadows the good fortune that would prevail during his reign.
Vertumnus contains various Mannerist elements in terms of its composition and message. One element is the flat, black background, which Arcimboldo utilizes to emphasize the status and identity of the Emperor, as well as to highlight the fantasy of his reign. In the portrait of Rudolf II, Arcimboldo also strays from the naturalistic representation of the Renaissance and explores the construction of composition by rendering him from a jumble of fruits, vegetables, plants, and flowers. Another element of Mannerism the painting portrays is the dual narrative of a joke and a serious message; humor was not normally utilized in Renaissance artworks.
Mannerist architecture
Mannerist architecture was characterized by visual trickery and unexpected elements that challenged Renaissance norms. Flemish artists, many of whom had traveled to Italy and were influenced by Mannerist developments there, were responsible for the spread of Mannerist trends into Europe north of the Alps, including into the realm of architecture. During the period, architects experimented with using architectural forms to emphasize solid and spatial relationships. The Renaissance ideal of harmony gave way to freer and more imaginative rhythms. The best known architect associated with the Mannerist style, and a pioneer at the Laurentian Library, was Michelangelo (1475–1564). He is credited with inventing the giant order, a large pilaster that stretches from the bottom to the top of a façade. He used this in his design for the Piazza del Campidoglio in Rome. The Herrerian style (or arquitectura herreriana) of architecture was developed in Spain during the last third of the 16th century under the reign of Philip II (1556–1598), and continued in force in the 17th century, though transformed by the Baroque style of the time. It corresponds to the third and final stage of Spanish Renaissance architecture, which evolved toward a progressive ornamental purification, from the initial Plateresque, through the classical Purism of the second third of the 16th century, to the total decorative nudity introduced by the Herrerian style.
Prior to the 20th century, the term Mannerism had negative connotations, but it is now used to describe the historical period in more general, non-judgmental terms. Mannerist architecture has also been used to describe a trend in the 1960s and 1970s that involved breaking the norms of modernist architecture while at the same time recognizing their existence. Defining Mannerism in this context, architect and author Robert Venturi wrote "Mannerism for architecture of our time that acknowledges conventional order rather than original expression but breaks the conventional order to accommodate complexity and contradiction and thereby engages ambiguity unambiguously."
Renaissance examples
An example of Mannerist architecture is the Villa Farnese at Caprarola, in the rugged countryside outside of Rome. The proliferation of engravers during the 16th century spread Mannerist styles more quickly than any previous styles.
Dense with ornament of "Roman" detailing, the display doorway at Colditz Castle exemplifies the northern style, characteristically applied as an isolated "set piece" against unpretentious vernacular walling.
From the late 1560s onwards, many buildings in Valletta, the new capital city of Malta, were designed by the architect Girolamo Cassar in the Mannerist style. Such buildings include St. John's Co-Cathedral, the Grandmaster's Palace and the seven original auberges. Many of Cassar's buildings were modified over the years, especially in the Baroque period. However, a few buildings, such as Auberge d'Aragon and the exterior of St. John's Co-Cathedral, still retain most of Cassar's original Mannerist design.
Mannerism in literature and music
In English literature, Mannerism is commonly identified with the qualities of the "Metaphysical" poets of whom the most famous is John Donne. The witty sally of a Baroque writer, John Dryden, against the verse of Donne in the previous generation, affords a concise contrast between Baroque and Mannerist aims in the arts:
The rich musical possibilities in the poetry of the late 16th and early 17th centuries provided an attractive basis for the madrigal, which quickly rose to prominence as the pre-eminent musical form in Italian musical culture, as discussed by Tim Carter:
The word Mannerism has also been used to describe the style of highly florid and contrapuntally complex polyphonic music made in France in the late 14th century. This period is now usually referred to as the ars subtilior.
Mannerism and theatre
The Early Commedia dell'Arte (1550–1621): The Mannerist Context by Paul Castagno discusses Mannerism's effect on the contemporary professional theatre. Castagno's was the first study to define a theatrical form as Mannerist, employing the vocabulary of Mannerism and maniera to discuss the typification, exaggeration, and effetto meraviglioso of the comici dell'arte. See Part II of the above book for a full discussion of Mannerist characteristics in the commedia dell'arte. The study is largely iconographic, presenting pictorial evidence that many of the artists who painted or printed commedia images were in fact coming from the workshops of the day, heavily ensconced in the maniera tradition.
The preciosity in Jacques Callot's minute engravings seems to belie a much larger scale of action. Callot's Balli di Sfessania (literally, dance of the buttocks) celebrates the commedia's blatant eroticism, with protruding phalli, spears poised with the anticipation of a comic ream, and grossly exaggerated masks that mix the bestial with the human. The eroticism of the innamorate (lovers), including the baring of breasts or excessive veiling, was quite in vogue in the paintings and engravings from the second School of Fontainebleau, particularly those that reflect a Franco-Flemish influence. Castagno demonstrates iconographic linkages between genre painting and the figures of the commedia dell'arte that show how this theatrical form was embedded within the cultural traditions of the late cinquecento.
Commedia dell'arte, disegno interno, and the discordia concors
Important corollaries exist between the commedia dell'arte and the disegno interno, which substituted for the disegno esterno (external design) in Mannerist painting. This notion of projecting a deeply subjective view as superseding nature or established principles (perspective, for example) shifted, in essence, the emphasis away from the object to its subject, now stressing execution, displays of virtuosity, or unique techniques. This inner vision is at the heart of commedia performance. For example, in the moment of improvisation the actor expresses his virtuosity without heed to formal boundaries, decorum, unity, or text. Arlecchino became emblematic of the Mannerist discordia concors (the union of opposites): at one moment he would be gentle and kind, then, on a dime, become a thief violently acting out with his batte (slapstick). Arlecchino could be graceful in movement, only, in the next beat, to clumsily trip over his feet. Freed from external rules, the actor celebrated the evanescence of the moment, much the way Benvenuto Cellini would dazzle his patrons by draping his sculptures, unveiling them with lighting effects and a sense of the marvelous. The presentation of the object became as important as the object itself.
Neo-Mannerism
In the 20th century, the rise of Neo-Mannerism stemmed from artist Ernie Barnes. The style was heavily influenced by both the Jewish community and the African-American community, leading to "The Beauty of the Ghetto" exhibition between 1972 and 1979. The exhibition toured major American cities and was hosted by dignitaries, professional athletes, and celebrities. When the exhibition was on view in 1974 at the Museum of African Art in Washington, DC, Rep. John Conyers stressed the important positive message of the exhibit in the Congressional Record.
The style of Neo-Mannerism, as developed by Barnes, includes subjects with elongated limbs and bodies, as well as exaggerated movement. Another common theme was closed eyes of the subjects, as a visual representation of "how blind we are to one another's humanity". "We look upon each other and decide immediately: This person is black, so he must be ... This person lives in poverty, so he must be ...".
Neo-Mannerism and Theater & Cinema
In an interview, film director Peter Greenaway mentions Federico Fellini and Bill Viola as two major inspirations for his exhaustive and self-referential play with the insoluble tension between the database form of images and the various analog and digital interfaces that structure them cinematically. This play can be called neo-mannerist precisely insofar as it is distinguished from the (neo-)baroque: "Just as Roman Catholicism would offer you paradise and heaven, there is an equivalent commercial paradise being offered very largely by the whole capitalistic effect, which is associated with Western cinema. This is my political analogy in terms of the use of multimedia as a political weapon. I would equate, in a sense, the great baroque Counter-Reformation, its cultural activity, with what cinema, American cinema predominantly, has been doing in the last seventy years."
Criticism
According to art critic Jerry Saltz, "Neo-Mannerism" (new Mannerism) is among several clichés that are "squeezing the life out of the art world". Neo-Mannerism describes art of the 21st century that is turned out by students whose academic teachers "have scared [them] into being pleasingly meek, imitative, and ordinary".
See also
Counter-Maniera
Mannerist architecture and sculpture in Poland
Timeline of Italian artists to 1800
Notes
References
Apel, Willi. 1946–47. "The French Secular Music of the Late Fourteenth Century". Acta Musicologica 18: 17–29.
Briganti, Giuliano. 1962. Italian Mannerism, translated from the Italian by Margaret Kunzle. London: Thames and Hudson; Princeton: Van Nostrand; Leipzig: VEB Edition. (Originally published in Italian, as La maniera italiana, La pittura italiana 10. Rome: Editori Riuniti, 1961).
Carter, Tim. 1991. Music in Late Renaissance and Early Baroque Italy. London: Amadeus Press.
Castagno, Paul C. 1994. The Early Commedia Dell'arte (1550–1621): The Mannerist Context. New York: P. Lang. .
Cheney, Liana de Girolami (ed.). 2004. Readings in Italian Mannerism, second printing, with a foreword by Craig Hugh Smyth. New York: Peter Lang. . (Previous edition, without the foreword by Smyth, New York: Peter Lang, 1997. ).
Cox-Rearick, Janet. "Pontormo, Jacopo da." Grove Art Online.11 Apr 2019. http://www.oxfordartonline.com/groveart/view/10.1093/gao/9781884446054.001.0001/oao-9781884446054-e-7000068662.
Davies, David, Greco, J. H Elliott, Metropolitan Museum of Art (New York, N.Y.), and National Gallery (Great Britain). El Greco. London: National Gallery Company, 2003.
Freedberg, Sidney J. 1965. "Observations on the Painting of the Maniera". Reprinted in Cheney 2004, 116–23.
Freedberg, Sidney J. 1971. Painting in Italy, 1500–1600, first edition. The Pelican History of Art. Harmondsworth and Baltimore: Penguin Books.
Freedberg, Sidney J. 1993. Painting in Italy, 1500–1600, 3rd edition, New Haven and London: Yale University Press. (cloth) (pbk)
Friedländer, Walter. 1965. Mannerism and Anti-Mannerism in Italian Painting. New York: Schocken. LOC 578295 (First edition, New York: Columbia University Press, 1958.)
Monica Lewinsky

Monica Samille Lewinsky (born July 23, 1973) is an American activist, television personality, fashion designer, and former White House intern. President Bill Clinton admitted to having an affair with Lewinsky while she worked at the White House in 1995 and 1996. The affair and its repercussions (which included Clinton's impeachment) later became known as the Clinton–Lewinsky scandal.
As a result of the public coverage of the political scandal, Lewinsky gained international celebrity status. She subsequently engaged in a variety of ventures that included designing a line of handbags under her name, serving as an advertising spokesperson for a diet plan, and working as a television personality. Lewinsky later left the public spotlight to pursue a master's degree in psychology in London. In 2014, she returned to public view as a social activist speaking out against cyberbullying.
Early life
Lewinsky was born in San Francisco, California, and grew up in an affluent family in Southern California in the Westside Brentwood area of Los Angeles and later in Beverly Hills. Her father is Bernard Lewinsky, an oncologist, who is the son of German Jews who escaped from Nazi Germany and moved to El Salvador and then to the United States when he was 14. Her mother, born Marcia Kay Vilensky, is an author who uses the name Marcia Lewis. In 1996, she wrote her first and only book, the gossip biography The Private Lives of the Three Tenors. During the Lewinsky scandal, the press compared Lewis's unproven "hints" that she had an affair with opera star Plácido Domingo to her daughter's sexual relationship with Clinton. Monica's maternal grandfather, Samuel M. Vilensky, was a Lithuanian Jew, and Monica's maternal grandmother, Bronia Poleshuk, was born in the British Concession of Tianjin, China, to a Russian Jewish family. Monica's parents' acrimonious separation and divorce during 1987 and 1988 had a significant impact on her. Her father later married his current wife, Barbara; her mother later married R. Peter Straus, a media executive and former director of the Voice of America under President Jimmy Carter.
The family attended Sinai Temple in Los Angeles, and Monica attended Sinai Akiba Academy, its religious school. For her primary education, she attended the John Thomas Dye School in Bel-Air. She attended Beverly Hills High School for her first three years of high school, before transferring to Bel Air Prep (later known as Pacific Hills School) for her senior year, graduating in 1991.
Following her high school graduation, Lewinsky attended Santa Monica College, a two-year community college, and worked for the drama department at Beverly Hills High School and at a tie shop. In 1992, she allegedly began a five-year affair with Andy Bleiler, her married former high school drama instructor. In 1993, she enrolled at Lewis & Clark College in Portland, Oregon, graduating with a bachelor's degree in psychology in 1995. In an appearance on Larry King Live in 2000, she revealed that she started an affair with a 40-year-old married man in Los Angeles when she was 18 years old, and that the affair continued while she was attending Lewis & Clark College in the early 90s. She did not reveal the man's identity.
With the assistance of a family connection, Lewinsky secured an unpaid summer White House internship in the office of White House Chief of Staff Leon Panetta. Lewinsky moved to Washington, D.C. and took up the position in July 1995. She moved to a paid position in the White House Office of Legislative Affairs in December 1995.
Scandal
Lewinsky stated that she had nine sexual encounters in the Oval Office with President Bill Clinton between November 1995 and March 1997. According to her testimony, these involved fellatio and other sexual acts, but not sexual intercourse.
Clinton had previously been confronted with allegations of sexual misconduct during his time as Governor of Arkansas. Former Arkansas state employee Paula Jones filed a civil lawsuit against him alleging that he had sexually harassed her. Lewinsky's name surfaced during the discovery phase of Jones' case, when Jones' lawyers sought to show a pattern of behavior by Clinton which involved inappropriate sexual relationships with other government employees.
In April 1996, Lewinsky's superiors transferred her from the White House to the Pentagon because they felt that she was spending too much time around Clinton. At the Pentagon, she worked as an assistant to chief Pentagon spokesman Kenneth Bacon. Lewinsky told co-worker Linda Tripp about her relationship with Clinton, and Tripp began secretly recording their telephone conversations in September 1997. Lewinsky left the Pentagon position in December 1997. In January 1998, Lewinsky submitted an affidavit in the Paula Jones case denying any physical relationship with Clinton, and she attempted to persuade Tripp to lie under oath in that case. Tripp, who had reported the taped conversations to literary agent Lucianne Goldberg, gave the tapes to Independent Counsel Kenneth Starr, adding to his ongoing investigation into the Whitewater controversy. Tripp also convinced Lewinsky to save the gifts that Clinton had given her during their relationship and not to dry-clean a blue dress that was stained with Clinton's semen. Starr then broadened his investigation beyond the Arkansas land use deal to include Lewinsky, Clinton, and others for possible perjury and subornation of perjury in the Jones case. Under oath, Clinton denied having had "a sexual affair", "sexual relations", or "a sexual relationship" with Lewinsky.
News of the Clinton–Lewinsky relationship broke in January 1998. On January 26, 1998, Clinton stated, "I did not have sexual relations with that woman, Miss Lewinsky" in a nationally televised White House news conference. The matter instantly occupied the news media, and Lewinsky spent the next weeks hiding from public attention in her mother's residence at the Watergate complex. News of Lewinsky's affair with Andy Bleiler, her former high school drama instructor, also came to light, and he turned over to Starr various souvenirs, photographs, and documents that Lewinsky had sent him and his wife during the time that she was in the White House.
Clinton had also said, "There is not a sexual relationship, an improper sexual relationship or any other kind of improper relationship", which he defended as truthful on August 17, 1998, because of his use of the present tense, arguing "it depends on what the meaning of the word 'is' is". Starr obtained from Lewinsky a blue dress stained with Clinton's semen, as well as testimony from her that the President had inserted a cigar into her vagina. Clinton stated, "I did have a relationship with Miss Lewinsky that was not appropriate", but he denied committing perjury because, according to Clinton, the legal definition of oral sex was not encompassed by "sex" per se. In addition, he relied on the definition of "sexual relations" as proposed by the prosecution and agreed to by the defense and by Judge Susan Webber Wright, who was hearing the Paula Jones case. Clinton claimed that certain acts were performed on him, not by him, and that he therefore did not engage in sexual relations. Lewinsky's testimony to Starr's grand jury, however, contradicted Clinton's claim of having been totally passive in their encounters.
Clinton and Lewinsky were both called before a grand jury; he testified via closed-circuit television, she in person. She was granted transactional immunity by the Office of the Independent Counsel in exchange for her testimony.
Life after the scandal
The affair led to pop culture celebrity for Lewinsky, as she had become the focus of a political storm. Her immunity agreement restricted what she could talk about publicly, but she was able to cooperate with Andrew Morton in his writing of Monica's Story, her biography which included her side of the Clinton affair. The book was published in March 1999; it was also excerpted as a cover story in Time magazine. On March 3, 1999, Barbara Walters interviewed Lewinsky on ABC's 20/20. The program was watched by 70 million Americans, which ABC said was a record for a news show. Lewinsky made about $500,000 from her participation in the book and another $1 million from international rights to the Walters interview, but was still beset by high legal bills and living costs.
In June 1999, Ms. magazine published a series of articles by writer Susan Jane Gilman, sexologist Susie Bright, and author-host Abiola Abrams arguing from three generations of women whether Lewinsky's behavior had any meaning for feminism. Also in 1999, Lewinsky declined to sign an autograph in an airport, saying, "I'm kind of known for something that's not so great to be known for." She made a cameo appearance as herself in two sketches during the May 8, 1999, episode of NBC's Saturday Night Live, a program that had lampooned her relationship with Clinton over the prior 16 months.
By her own account, Lewinsky had survived the intense media attention during the scandal period by knitting. In September 1999, she took this interest further by beginning to sell a line of handbags bearing her name, under the company name The Real Monica, Inc. They were sold online as well as at Henri Bendel in New York, Fred Segal in California, and The Cross in London. Lewinsky designed the bags—described by New York magazine as "hippie-ish, reversible totes"—and traveled frequently to supervise their manufacture in Louisiana.
At the start of 2000, Lewinsky began appearing in television commercials for the diet company Jenny Craig, Inc. The $1 million endorsement deal, which required Lewinsky to lose 40 or more pounds in six months, gained considerable publicity at the time. Lewinsky said that despite her desire to return to a more private life, she needed the money to pay off legal fees, and she believed in the product. A Jenny Craig spokesperson said of Lewinsky, "She represents a busy active woman of today with a hectic lifestyle. And she has had weight issues and weight struggles for a long time. That represents a lot of women in America." The choice of Lewinsky as a role model proved controversial for Jenny Craig, and some of its private franchises switched to an older advertising campaign. The company stopped running the Lewinsky ads in February 2000, concluded her campaign entirely in April 2000, and paid her only $300,000 of the $1 million contracted for her involvement.
Also at the start of 2000, Lewinsky moved to New York City, lived in the West Village, and became an A-list guest in the Manhattan social scene. In February 2000, she appeared on MTV's The Tom Green Show, in an episode in which the host took her to his parents' home in Ottawa in search of fabric for her new handbag business. Later in 2000, Lewinsky worked as a correspondent for Channel 5 in the UK, on the show Monica's Postcards, reporting on U.S. culture and trends from a variety of locations.
In March 2002, Lewinsky, no longer bound by the terms of her immunity agreement, appeared in the HBO special, "Monica in Black and White", part of the America Undercover series. In it she answered a studio audience's questions about her life and the Clinton affair.
Lewinsky hosted a reality television dating program, Mr. Personality, on Fox Television Network in 2003, where she advised young women contestants who were picking men hidden by masks. Some Americans tried to organize a boycott of advertisers on the show, to protest Lewinsky's capitalizing on her notoriety. Nevertheless, the show debuted to very high ratings, and Alessandra Stanley wrote in The New York Times: "after years of trying to cash in on her fame by designing handbags and other self-marketing schemes, Ms. Lewinsky has finally found a fitting niche on television." The ratings, however, slid downward each successive week, and after the show completed its initial limited run, it did not reappear. The same year she appeared as a guest on the programs V Graham Norton in the UK, High Chaparall in Sweden, and The View and Jimmy Kimmel Live! in the U.S.
After Clinton's autobiography, My Life, appeared in 2004, Lewinsky responded to his account of their relationship in an interview with the British tabloid the Daily Mail.
By 2005, Lewinsky found that she could not escape the spotlight in the U.S., which made both her professional and personal life difficult. She stopped selling her handbag line and moved to London to study social psychology at the London School of Economics. In December 2006, Lewinsky graduated with a Master of Science degree. Her thesis was titled, "In Search of the Impartial Juror: An Exploration of the Third-Person Effect and Pre-Trial Publicity." For the next decade, she tried to avoid publicity.
Lewinsky did correspond in 2009 with scholar Ken Gormley, who was writing an in-depth study of the Clinton scandals, maintaining that Clinton had lied under oath when asked detailed and specific questions about his relationship with her. In 2013, the items associated with Lewinsky that Bleiler had turned over to Starr were put up for auction by Bleiler's ex-wife, who had come into possession of them.
During her decade out of the public eye, Lewinsky lived in London, Los Angeles, New York, and Portland but, due to her notoriety, had trouble finding employment in the communications and marketing jobs for nonprofit organizations where she had been interviewed.
Public re-emergence
In May 2014, Lewinsky wrote an essay for Vanity Fair magazine titled "Shame and Survival", wherein she discussed her life and the scandal. She continued to maintain that the relationship was mutual and wrote that while Clinton took advantage of her, it was a consensual relationship. She added: "I, myself, deeply regret what happened between me and President Clinton. Let me say it again: I. Myself. Deeply. Regret. What. Happened." However, she said it was now time to "stick my head above the parapet so that I can take back my narrative and give a purpose to my past." The magazine later announced her as a Vanity Fair contributor, stating she would "contribute to their website on an ongoing basis, on the lookout for relevant topics of interest".
In July 2014, Lewinsky was interviewed in a three-part television special for the National Geographic Channel, titled The 90s: The Last Great Decade. The series looked at various events of the 1990s, including the scandal that brought Lewinsky into the national spotlight. This was Lewinsky's first such interview in more than ten years.
In October 2014, she took a public stand against cyberbullying, calling herself "patient zero" of online harassment. Speaking at a Forbes magazine "30 Under 30" summit about her experiences in the aftermath of the scandal, she said, "Having survived myself, what I want to do now is help other victims of the shame game survive, too." She said she was influenced by reading about the suicide of Tyler Clementi, a Rutgers University freshman, involving cyberbullying and joined Twitter to facilitate her efforts. In March 2015, Lewinsky continued to speak out publicly against cyberbullying, delivering a TED talk calling for a more compassionate Internet. In June 2015, she became an ambassador and strategic advisor for anti-bullying organization Bystander Revolution. The same month, she gave an anti-cyberbullying speech at the Cannes Lions International Festival of Creativity. In September 2015, Lewinsky was interviewed by Amy Robach on Good Morning America, about Bystander Revolution's Month of Action campaign for National Bullying Prevention Month. Lewinsky wrote the foreword to an October 2017 book by Sue Scheff and Melissa Schorr, Shame Nation: The Global Epidemic of Online Hate.
In October 2017, Lewinsky tweeted the #MeToo hashtag to indicate that she was a victim of sexual harassment and/or sexual assault, but did not provide details. She wrote an essay in the March 2018 issue of Vanity Fair in which she did not directly explain why she used the #MeToo hashtag in October. She did write that, looking back at her relationship with Bill Clinton, although it was consensual, because he was 27 years her senior and in a position of far greater power, the relationship in her opinion constituted an "abuse of power" on Clinton's part. She added that she had been diagnosed with post-traumatic stress disorder as a result of her experiences after the relationship was disclosed. In May 2018, Lewinsky was disinvited from an event hosted by Town & Country when Bill Clinton accepted an invitation to the event.
In September 2018, Lewinsky spoke at a conference in Jerusalem. Following her speech, she sat for a Q&A session with the host, journalist Yonit Levi. The first question Levi asked was whether Lewinsky thinks that Clinton owes her a private apology. Lewinsky refused to answer the question, and walked off the stage. She later tweeted that the question was posed in a pre-event meeting with Levi, and Lewinsky told her that such a question was off limits. A spokesman for the Israel Television News Company, which hosted the conference and is Levi's employer, responded that Levi had kept all the agreements she made with Lewinsky and honored her requests.
In 2019, she was interviewed by John Oliver on his HBO show Last Week Tonight with John Oliver, where they discussed the importance of solving the problem of public shaming and how her situation may have been different if social media had existed at the time that the scandal broke in the late 1990s. More recently, she started Alt Ending Productions with a first look deal at 20th Television.
On August 6, 2019, it was announced that the Clinton–Lewinsky scandal would be the focus of the third season of the television series American Crime Story, titled Impeachment. The season began production in October 2020, with Lewinsky as a co-producer. It consists of 10 episodes and premiered on September 7, 2021. The season portrays the Clinton–Lewinsky scandal and is based on the book A Vast Conspiracy: The Real Story of the Sex Scandal That Nearly Brought Down a President by Jeffrey Toobin. Beanie Feldstein plays Lewinsky. In an interview with Kara Swisher for the New York Times Opinion podcast Sway, Lewinsky discussed the series and her observations on social media and cancel culture today.
Further reading
Berlant, Lauren, and Lisa Duggan. Our Monica, Ourselves: The Clinton Affair and the Public Interest. Sexual Cultures. New York: New York University Press, 2001.
Kalb, Marvin. One Scandalous Story: Clinton, Lewinsky, and Thirteen Days That Tarnished American Journalism. New York: Free Press, 2001.
Pressure measurement

Pressure measurement is the analysis of an applied force by a fluid (liquid or gas) on a surface. Pressure is typically measured in units of force per unit of surface area. Many techniques have been developed for the measurement of pressure and vacuum. Instruments used to measure and display pressure in an integral unit are called pressure meters or pressure gauges or vacuum gauges. A manometer is a good example, as it uses the surface area and weight of a column of liquid to both measure and indicate pressure. Likewise, the widely used Bourdon gauge is a mechanical device, which both measures and indicates and is probably the best known type of gauge.
A vacuum gauge is a pressure gauge used to measure pressures lower than the ambient atmospheric pressure, which is set as the zero point, in negative values (for instance, −15 psig or −760 mmHg equals total vacuum). Most gauges measure pressure relative to atmospheric pressure as the zero point, so this form of reading is simply referred to as "gauge pressure". However, anything greater than total vacuum is technically a form of pressure. For very accurate readings, especially at very low pressures, a gauge that uses total vacuum as the zero point may be used, giving pressure readings in an absolute scale.
Other methods of pressure measurement involve sensors that can transmit the pressure reading to a remote indicator or control system (telemetry).
Absolute, gauge and differential pressures — zero reference
Everyday pressure measurements, such as for vehicle tire pressure, are usually made relative to ambient air pressure. In other cases measurements are made relative to a vacuum or to some other specific reference. When distinguishing between these zero references, the following terms are used:
Absolute pressure is zero-referenced against a perfect vacuum, using an absolute scale, so it is equal to gauge pressure plus atmospheric pressure.
Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. Negative signs are usually omitted. To distinguish a negative pressure, the value may be appended with the word "vacuum" or the gauge may be labeled a "vacuum gauge". These are further divided into two subcategories: high and low vacuum (and sometimes ultra-high vacuum). The applicable pressure ranges of many of the techniques used to measure vacuums overlap. Hence, by combining several different types of gauge, it is possible to measure system pressure continuously from 10 mbar down to 10−11 mbar.
Differential pressure is the difference in pressure between two points.
The zero reference in use is usually implied by context, and these words are added only when clarification is needed. Tire pressure and blood pressure are gauge pressures by convention, while atmospheric pressures, deep vacuum pressures, and altimeter pressures must be absolute.
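The arithmetic relating these zero references is a pair of additive offsets. The following is an illustrative Python sketch, not from the source; it assumes a fixed standard atmosphere, whereas a real gauge's reference tracks local barometric pressure:

```python
# Zero-reference relationships: absolute, gauge, and differential pressure.
# Assumes a fixed local atmospheric pressure; real barometric pressure
# varies with altitude and weather.

ATMOSPHERIC_PA = 101_325  # standard atmosphere, in pascals

def absolute_from_gauge(gauge_pa: float, atmospheric_pa: float = ATMOSPHERIC_PA) -> float:
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return gauge_pa + atmospheric_pa

def gauge_from_absolute(absolute_pa: float, atmospheric_pa: float = ATMOSPHERIC_PA) -> float:
    """Gauge pressure = absolute pressure - atmospheric pressure."""
    return absolute_pa - atmospheric_pa

def differential(p1_pa: float, p2_pa: float) -> float:
    """Differential pressure is simply the difference between two points."""
    return p1_pa - p2_pa

# A tire inflated to 220 kPa gauge:
print(absolute_from_gauge(220_000))  # 321325 Pa absolute
# A perfect vacuum (0 Pa absolute) reads as a negative gauge pressure:
print(gauge_from_absolute(0.0))      # -101325.0 Pa
```

Note how a perfect vacuum comes out as −101,325 Pa gauge, matching the convention described above of quoting vacuum readings as negative gauge values.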
For most working fluids where a fluid exists in a closed system, gauge pressure measurement prevails. Pressure instruments connected to the system will indicate pressures relative to the current atmospheric pressure. The situation changes when extreme vacuum pressures are measured, then absolute pressures are typically used instead.
Differential pressures are commonly used in industrial process systems. Differential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through mechanical means, obviating the need for an operator or control system to watch two separate gauges and determine the difference in readings.
Moderate vacuum pressure readings can be ambiguous without the proper context, as they may represent absolute pressure or gauge pressure without a negative sign. Thus a vacuum of 26 inHg gauge is equivalent to an absolute pressure of 4 inHg, calculated as 30 inHg (typical atmospheric pressure) − 26 inHg (gauge pressure).
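The 26 inHg example works out as a one-line conversion; this sketch assumes the 30 inHg atmospheric figure used above:

```python
# A "vacuum" reading is a gauge pressure below atmospheric, so the
# absolute pressure is the atmospheric figure minus the vacuum reading.

ATM_INHG = 30.0  # assumed local barometric pressure, inches of mercury

def vacuum_to_absolute_inhg(vacuum_inhg: float, atm_inhg: float = ATM_INHG) -> float:
    """Convert a vacuum gauge reading (inHg) to absolute pressure (inHg)."""
    return atm_inhg - vacuum_inhg

print(vacuum_to_absolute_inhg(26.0))  # 4.0 inHg absolute, as in the text
```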
Atmospheric pressure is typically about 100 kPa at sea level, but is variable with altitude and weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the (gauge) tire pressure goes up because atmospheric pressure goes down. The absolute pressure in the tire is essentially unchanged.
Using atmospheric pressure as reference is usually signified by a "g" for gauge after the pressure unit, e.g. 70 psig, which means that the pressure measured is the total pressure minus atmospheric pressure. There are two types of gauge reference pressure: vented gauge (vg) and sealed gauge (sg).
A vented-gauge pressure transmitter, for example, allows the outside air pressure to be exposed to the negative side of the pressure-sensing diaphragm, through a vented cable or a hole on the side of the device, so that it always measures the pressure referred to ambient barometric pressure. Thus a vented-gauge reference pressure sensor should always read zero pressure when the process pressure connection is held open to the air.
A sealed gauge reference is very similar, except that atmospheric pressure is sealed on the negative side of the diaphragm. This is usually adopted on high pressure ranges, such as hydraulics, where atmospheric pressure changes will have a negligible effect on the accuracy of the reading, so venting is not necessary. This also allows some manufacturers to provide secondary pressure containment as an extra precaution for pressure equipment safety if the burst pressure of the primary pressure sensing diaphragm is exceeded.
There is another way of creating a sealed gauge reference, and this is to seal a high vacuum on the reverse side of the sensing diaphragm. Then the output signal is offset, so the pressure sensor reads close to zero when measuring atmospheric pressure.
A sealed gauge reference pressure transducer will never read exactly zero because atmospheric pressure is always changing and the reference in this case is fixed at 1 bar.
To produce an absolute pressure sensor, the manufacturer seals a high vacuum behind the sensing diaphragm. If the process-pressure connection of an absolute-pressure transmitter is open to the air, it will read the actual barometric pressure.
History
For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer, more dense when cooler.
In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air has weight and exerts pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and that it is vacuum that provides force, as in a siphon. The discovery led Torricelli to conclude that people live submerged at the bottom of an ocean of air.
This test, known as Torricelli's experiment, was essentially the first documented pressure gauge.
Blaise Pascal went farther, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure.
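Torricelli's and Pascal's observations follow from hydrostatics: the column settles at the height where its weight per unit area balances atmospheric pressure, i.e. h = P/(ρg). A quick check with standard reference values (the densities and gravity figure are assumptions, not taken from the text):

```python
# Why Torricelli's mercury column stood about 760 mm high:
# the column's weight balances atmospheric pressure, so h = P / (rho * g).

P_ATM = 101_325         # Pa, standard atmosphere
RHO_MERCURY = 13_595.1  # kg/m^3, density of mercury at 0 degC
G = 9.80665             # m/s^2, standard gravity

def column_height(pressure_pa: float, density: float) -> float:
    """Height of a liquid column that balances the given pressure."""
    return pressure_pa / (density * G)

h_mercury = column_height(P_ATM, RHO_MERCURY)
print(round(h_mercury * 1000))  # ~760 mm

# Water is ~13.6x less dense, so a water barometer would need ~10.3 m:
h_water = column_height(P_ATM, 1000.0)
print(round(h_water, 1))  # ~10.3 m
```

The same relation explains Pascal's mountain experiment: at altitude P is lower, so the balancing column is shorter.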
Units
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m−2 or kg·m−1·s−2). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m−2. When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada for measuring, for instance, tire pressure. A letter is often appended to the psi unit to indicate the measurement's zero reference (psia for absolute, psig for gauge, psid for differential), although this practice is discouraged by NIST.
Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., inches of water). Manometric measurement is the subject of pressure head calculations. The most common choices for a manometer's fluid are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer.
Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. So measurements in "millimetres of mercury" or "inches of mercury" can be converted to SI units as long as attention is paid to the local factors of fluid density and gravity. Temperature fluctuations change the value of fluid density, while location can affect gravity.
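The conversion this paragraph describes is the hydrostatic relation p = ρgh, which makes explicit why the fluid's density and the local gravity must both be specified. An illustrative sketch (densities are standard reference values, not from the text):

```python
# Converting a manometer column height to pascals via p = rho * g * h.
# The same column height implies very different pressures for different
# fluids, and the result also scales with local gravity.

G_STD = 9.80665  # m/s^2, standard gravity

def column_to_pascal(height_m: float, density_kg_m3: float, gravity: float = G_STD) -> float:
    """Pressure exerted by a fluid column of the given height."""
    return density_kg_m3 * gravity * height_m

# 1 mmHg with the conventional mercury density:
print(column_to_pascal(0.001, 13_595.1))  # ~133.32 Pa
# 1 inch of water (25.4 mm) at ~4 degC:
print(column_to_pascal(0.0254, 999.97))   # ~249.1 Pa
```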
Although no longer preferred, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury (see torr) in most of the world, central venous pressure and lung pressures in centimeters of water are still common, as in settings for CPAP machines. Natural gas pipeline pressures are measured in inches of water, expressed as "inches W.C."
Underwater divers use manometric units: the ambient pressure is measured in units of metres sea water (msw), which is defined as equal to one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, 0.030643 bar, or about 0.444 psi, though elsewhere it states that 33 fsw is 14.7 psi (one atmosphere), which gives one fsw equal to about 0.445 psi. The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Both msw and fsw are measured relative to normal atmospheric pressure.
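The small discrepancy between the two Navy figures can be seen by carrying the definitions through; a sketch (the pascal conversion constants are standard values, not from the text):

```python
# Diver pressure units: 1 msw is defined as 0.1 bar, and the US Navy
# Diving Manual figure quoted above puts 1 fsw at 0.30643 msw.

MSW_PA = 10_000.0          # 0.1 bar in pascals
FSW_PA = 0.30643 * MSW_PA  # one foot of sea water, ~3064.3 Pa
PSI_PA = 6_894.757         # pascals per psi (standard value)

print(round(FSW_PA / PSI_PA, 4))  # ~0.4444 psi per fsw

# The alternative figure of 33 fsw = 14.7 psi implies a slightly
# different value, hence the "about 0.445 psi" in the text:
print(round(14.7 / 33, 4))        # ~0.4455 psi per fsw
```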
In vacuum systems, the units torr (millimetre of mercury), micron (micrometre of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicate an absolute pressure, while inHg usually indicates a gauge pressure.
Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kips per square inch (ksi). Note that stress is not a true pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm−2. In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre.
Many other hybrid units are used, such as mmHg/cm2 or grams-force/cm2 (sometimes as kg/cm2 without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N).
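Moving between these units is a matter of fixed conversion factors once standard reference conditions are adopted. The sketch below is illustrative only: the factors assume conventional reference densities and standard gravity, and the table and function names are hypothetical, not from any particular library.

```python
# Conventional pascal equivalents of common pressure units. The manometric
# entries (mmHg, inHg, inH2O) assume standard gravity and the conventional
# reference fluid densities; real column readings vary with local
# temperature and gravity, as discussed above.
PA_PER_UNIT = {
    "Pa": 1.0,
    "kPa": 1_000.0,
    "bar": 100_000.0,
    "atm": 101_325.0,       # exact by definition
    "mmHg": 133.322,        # conventional millimetre of mercury
    "inHg": 3_386.39,       # conventional inch of mercury
    "inH2O": 249.089,       # conventional inch of water
    "psi": 6_894.76,
    "kgf/cm2": 98_066.5,    # non-SI; listed only because it appears in practice
}

def convert(value, from_unit, to_unit):
    """Convert a pressure reading between the units tabulated above."""
    return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

print(round(convert(760, "mmHg", "atm"), 4))   # 1.0
print(round(convert(1, "atm", "psi"), 2))      # 14.7
```

Note that 760 mmHg coming out as one atmosphere only holds because both factors embed the same conventional mercury density and gravity.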
Static and dynamic pressure
Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure.
While static gauge pressure is of primary importance in determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear.
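The airspeed computation implied here can be sketched under the incompressible-flow assumption: the pitot-static differential is the dynamic pressure q = ½·ρ·v², so v = √(2q/ρ). The sea-level air density used below is an assumption, and the function name is illustrative.

```python
import math

def airspeed_m_s(dynamic_pressure_pa, air_density=1.225):
    """Airspeed from measured dynamic pressure (incompressible approximation).

    air_density defaults to a nominal sea-level value in kg/m^3; at altitude
    or in non-standard conditions the true density must be substituted.
    """
    return math.sqrt(2.0 * dynamic_pressure_pa / air_density)

# A pitot-static differential of 612.5 Pa:
print(round(airspeed_m_s(612.5), 1))  # 31.6 m/s
```

This simple relation breaks down at high speeds, where compressibility corrections are needed, which is one reason real calibration curves are non-linear.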
Applications
Altimeter
Barometer
Depth gauge
MAP sensor
Pitot tube
Sphygmomanometer
Instruments
Many instruments have been invented to measure pressure, with different advantages and disadvantages. Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid column (a vertical tube filled with mercury) manometer invented by Evangelista Torricelli in 1643. The U-Tube was invented by Christiaan Huygens in 1661.
Hydrostatic
Hydrostatic gauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response.
Piston
Piston-type gauges counterbalance the pressure of a fluid with a spring (for example tire-pressure gauges of comparatively low accuracy) or a solid weight, in which case it is known as a deadweight tester and may be used for calibration of other gauges.
Liquid column (manometer)
Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube manometer can be found by solving Pa − P0 = hgρ. In other words, the pressure on either end of the liquid column must be balanced (since the liquid is static), and so Pa = P0 + hgρ.
In most liquid-column measurements, the result of the measurement is the height h, expressed typically in mm, cm, or inches. The h is also known as the pressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function of temperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2O at 59 °F" for measurements taken with mercury or water as the manometric fluid respectively. The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to S.I. units of pressure using unit conversion and the above formulas.
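A minimal sketch of turning a pressure head into SI units with P = hgρ. The densities used are nominal assumptions (mercury near room temperature); as the text notes, accurate work requires the density at the measured temperature, which is why the result below differs slightly from the conventional mmHg factor.

```python
G = 9.80665  # standard gravity, m/s^2

def head_to_pressure(height_m, density_kg_m3, g=G):
    """Pressure difference (Pa) supported by a fluid column of given height."""
    return height_m * g * density_kg_m3

# The "742.2 mmHg" example above, with mercury at ~20 C (13534 kg/m^3):
p_hg = head_to_pressure(0.7422, 13534)
print(round(p_hg))  # 98507 Pa; the conventional mmHg assumes the 0 C density
```

Swapping in the 0 °C mercury density (about 13595 kg/m³) would reproduce the conventional mmHg conversion, illustrating why the measurement fluid and its temperature must be specified.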
If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across an orifice plate or venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured.
Although any fluid can be used, mercury is preferred for its high density (13.534 g/cm3) and low vapour pressure. Its convex meniscus is advantageous since this means there will be no pressure errors from wetting the glass, though under exceptionally clean circumstances, the mercury will stick to glass and the barometer may become stuck (the mercury can sustain a negative absolute pressure) even under a strong vacuum. For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such as inches water gauge and millimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change.
When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapour pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water. Simple hydrostatic gauges can measure pressures ranging from a few torr (a few hundred pascals) to a few atmospheres.
A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Based on use and structure, the following types of manometers are used:
Simple manometer
Micromanometer
Differential manometer
Inverted differential manometer
McLeod gauge
A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few millimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as an ideal gas. Due to the compression process, the McLeod gauge completely ignores partial pressures from non-ideal vapors that condense, such as pump oils, mercury, and even water if compressed enough.
Useful range: from around 10−4 Torr (roughly 10−2 Pa) to vacuums as high as 10−6 Torr (0.1 mPa).
0.1 mPa is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-dependent properties. These indirect measurements must be calibrated to SI units by a direct measurement, most commonly a McLeod gauge.
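The McLeod principle rests on Boyle's law for an ideal gas at constant temperature (P1·V1 = P2·V2): compressing a known sample volume raises the pressure into an easily read range, from which the original pressure follows. The volumes and reading below are made-up illustrative numbers.

```python
def mcleod_pressure(v_initial_cm3, v_final_cm3, p_final_torr):
    """Original pressure inferred from the compression ratio (ideal gas)."""
    return p_final_torr * v_final_cm3 / v_initial_cm3

# A 100 cm^3 trapped sample compressed into 0.01 cm^3, reading 2 Torr:
print(mcleod_pressure(100.0, 0.01, 2.0))  # 2e-4 Torr
```

The large compression ratio is what gives the gauge its reach into low pressures, and it is also why condensable vapours, which do not obey Boyle's law when compressed, are ignored by the reading.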
Aneroid
Aneroid gauges are based on a metallic pressure-sensing element that flexes elastically under the effect of a pressure difference across the element. "Aneroid" means "without fluid", and the term originally distinguished these gauges from the hydrostatic gauges described above. However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and they are not the only type of gauge that can operate without fluid. For this reason, they are often called mechanical gauges in modern language. Aneroid gauges are not dependent on the type of gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the system than hydrostatic gauges. The pressure sensing element may be a Bourdon tube, a diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of the region in question. The deflection of the pressure sensing element may be read by a linkage connected to a needle, or it may be read by a secondary transducer. The most common secondary transducers in modern vacuum gauges measure a change in capacitance due to the mechanical deflection. Gauges that rely on a change in capacitance are often referred to as capacitance manometers.
Bourdon gauge
The Bourdon pressure gauge uses the principle that a flattened tube tends to straighten or regain its circular form in cross-section when pressurized. (A party horn illustrates this principle.) This change in cross-section may be hardly noticeable, involving moderate stresses within the elastic range of easily workable materials. The strain of the material of the tube is magnified by forming the tube into a C shape or even a helix, such that the entire tube tends to straighten out or uncoil elastically as it is pressurized. Eugène Bourdon patented his gauge in France in 1849, and it was widely adopted because of its superior sensitivity, linearity, and accuracy; Edward Ashcroft purchased Bourdon's American patent rights in 1852 and became a major manufacturer of gauges. Also in 1849, Bernard Schaeffer in Magdeburg, Germany patented a successful diaphragm (see below) pressure gauge, which, together with the Bourdon gauge, revolutionized pressure measurement in industry. After Bourdon's patents expired in 1875, Schaeffer's company, Schaeffer and Budenberg, also manufactured Bourdon tube gauges.
In practice, a flattened thin-wall, closed-end tube is connected at the hollow end to a fixed pipe containing the fluid pressure to be measured. As the pressure increases, the closed end moves in an arc, and this motion is converted into the rotation of a (segment of a) gear by a connecting link that is usually adjustable. A small-diameter pinion gear is on the pointer shaft, so the motion is magnified further by the gear ratio. The positioning of the indicator card behind the pointer, the initial pointer shaft position, the linkage length and initial position, all provide means to calibrate the pointer to indicate the desired range of pressure for variations in the behavior of the Bourdon tube itself. Differential pressure can be measured by gauges containing two different Bourdon tubes, with connecting linkages.
Bourdon tubes measure gauge pressure, relative to ambient atmospheric pressure, as opposed to absolute pressure; vacuum is sensed as a reverse motion. Some aneroid barometers use Bourdon tubes closed at both ends (but most use diaphragms or capsules, see below). When the measured pressure is rapidly pulsing, such as when the gauge is near a reciprocating pump, an orifice restriction in the connecting pipe is frequently used to avoid unnecessary wear on the gears and provide an average reading; when the whole gauge is subject to mechanical vibration, the entire case including the pointer and indicator card can be filled with an oil or glycerin. Tapping on the face of the gauge is not recommended as it will tend to falsify actual readings initially presented by the gauge. The Bourdon tube is separate from the face of the gauge and thus has no effect on the actual reading of pressure. Typical high-quality modern gauges provide an accuracy of ±2% of span, and a special high-precision gauge can be as accurate as 0.1% of full scale.
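A brief numeric illustration of what an accuracy quoted as a percentage of span means in practice: the error band is fixed in pressure terms, so its relative effect grows toward the bottom of the scale. The gauge span and figures below are hypothetical.

```python
def span_error(span, percent_of_span):
    """Fixed pressure uncertainty implied by an accuracy quoted as % of span."""
    return span * percent_of_span / 100.0

err = span_error(1000.0, 2.0)   # a hypothetical 0-1000 kPa gauge at +/-2% of span
print(err)                      # 20.0 -> +/-20 kPa anywhere on the scale
print(100.0 * err / 100.0)      # 20.0 -> 20% relative error when reading 100 kPa
```

This is why gauges are normally selected so that the working pressure falls in the middle or upper part of the scale.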
Force-balanced fused quartz Bourdon tube sensors work on the same principle, but use the reflection of a beam of light from a mirror to sense the angular displacement; current is applied to electromagnets to balance the force of the tube and bring the angular displacement back to zero, and the current applied to the coils is used as the measurement. Because of the extremely stable and repeatable mechanical and thermal properties of quartz, and the force balancing that eliminates nearly all physical movement, these sensors can be accurate to around 1 ppm of full scale. Because the extremely fine fused quartz structures must be made by hand, these sensors are generally limited to scientific and calibration purposes.
In the following illustrations the transparent cover face of the pictured combination pressure and vacuum gauge has been removed and the mechanism removed from the case. This particular gauge is a combination vacuum and pressure gauge used for automotive diagnosis:
The left side of the face, used for measuring manifold vacuum, is calibrated in centimetres of mercury on its inner scale and inches of mercury on its outer scale.
The right portion of the face is used to measure fuel pump pressure or turbo boost and is calibrated in fractions of 1 kgf/cm2 on its inner scale and pounds per square inch on its outer scale.
Mechanical details
Stationary parts:
A: Receiver block. This joins the inlet pipe to the fixed end of the Bourdon tube (1) and secures the chassis plate (B). The two holes receive screws that secure the case.
B: Chassis plate. The face card is attached to this. It contains bearing holes for the axles.
C: Secondary chassis plate. It supports the outer ends of the axles.
D: Posts to join and space the two chassis plates.
Moving parts:
Stationary end of Bourdon tube. This communicates with the inlet pipe through the receiver block.
Moving end of Bourdon tube. This end is sealed.
Pivot and pivot pin
Link joining pivot pin to lever (5) with pins to allow joint rotation
Lever, an extension of the sector gear (7)
Sector gear axle pin
Sector gear
Indicator needle axle. This has a spur gear that engages the sector gear (7) and extends through the face to drive the indicator needle. Due to the short distance between the lever arm link boss and the pivot pin and the difference between the effective radius of the sector gear and that of the spur gear, any motion of the Bourdon tube is greatly amplified. A small motion of the tube results in a large motion of the indicator needle.
Hair spring to preload the gear train to eliminate gear lash and hysteresis
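The motion amplification described in the parts list above can be modelled roughly: the tube tip's small travel is magnified first by the lever geometry and then by the sector-to-spur gear ratio. All dimensions below are hypothetical, and the small-angle approximation is assumed.

```python
import math

def needle_rotation_deg(tip_travel_mm, lever_arm_mm,
                        sector_radius_mm, spur_radius_mm):
    """Approximate needle rotation for a small movement of the Bourdon tube tip."""
    lever_angle = tip_travel_mm / lever_arm_mm             # radians, small-angle
    needle_angle = lever_angle * sector_radius_mm / spur_radius_mm
    return math.degrees(needle_angle)

# 2 mm of tip travel, 10 mm lever arm, 25 mm sector radius, 2 mm spur pinion:
print(round(needle_rotation_deg(2.0, 10.0, 25.0, 2.0), 1))  # 143.2 degrees
```

A couple of millimetres of tube movement thus sweeps the needle across most of the dial, which is the point of the short lever boss and the large sector-to-pinion ratio.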
Diaphragm
A second type of aneroid gauge uses deflection of a flexible membrane that separates regions of different pressure. The amount of deflection is repeatable for known pressures so the pressure can be determined by using calibration. The deformation of a thin diaphragm is dependent on the difference in pressure between its two faces. The reference face can be open to atmosphere to measure gauge pressure, open to a second port to measure differential pressure, or can be sealed against a vacuum or other fixed reference pressure to measure absolute pressure. The deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and metallic diaphragms are used.
Useful range: above 10−2 Torr (roughly 1 Pa)
For absolute measurements, welded pressure capsules with diaphragms on either side are often used.
Shape:
Flat
Corrugated
Flattened tube
Capsule
Bellows
In gauges intended to sense small pressures or pressure differences, or to measure absolute pressure, the gear train and needle may be driven by an enclosed and sealed bellows chamber, called an aneroid, which means "without liquid". (Early barometers used a column of liquid such as water or the liquid metal mercury suspended by a vacuum.) This bellows configuration is used in aneroid barometers (barometers with an indicating needle and dial card), altimeters, altitude recording barographs, and the altitude telemetry instruments used in weather balloon radiosondes. These devices use the sealed chamber as a reference pressure and are driven by the external pressure. Other sensitive aircraft instruments such as air speed indicators and rate of climb indicators (variometers) have connections both to the internal part of the aneroid chamber and to an external enclosing chamber.
Magnetic coupling
These gauges use the attraction of two magnets to translate differential pressure into motion of a dial pointer. As differential pressure increases, a magnet attached to either a piston or rubber diaphragm moves. A rotary magnet that is attached to a pointer then moves in unison. To create different pressure ranges, the spring rate can be increased or decreased.
Spinning-rotor gauge
The spinning-rotor gauge works by measuring how much a rotating ball is slowed by the viscosity of the gas being measured. The ball is made of steel and is magnetically levitated inside a steel tube closed at one end and exposed to the gas to be measured at the other. The ball is brought up to speed (about 2500 or 3800 rad/s), and the deceleration rate is measured by electromagnetic transducers after the drive is switched off. The range of the instrument is 10−5 to 102 Pa (103 Pa with less accuracy). It is accurate and stable enough to be used as a secondary standard. In recent years this type of gauge has become much more user-friendly and easier to operate; in the past the instrument was known to require some skill and knowledge to use correctly. For high-accuracy measurements various corrections must be applied, and the ball must be spun at a pressure well below the intended measurement pressure for five hours before use. It is most useful in calibration and research laboratories where high accuracy is required and qualified technicians are available. Insulation vacuum monitoring of cryogenic liquids is also a well-suited application: with an inexpensive, long-term stable, weldable sensor that can be separated from the more costly electronics/readout, it is a good fit for all static vacuums.
Electronic pressure instruments
Metal strain gauge
The strain gauge is generally glued (foil strain gauge) or deposited (thin-film strain gauge) onto a membrane. Membrane deflection due to pressure causes a resistance change in the strain gauge which can be electronically measured.
Piezoresistive strain gauge
Uses the piezoresistive effect of bonded or formed strain gauges to detect strain due to applied pressure.
Piezoresistive silicon pressure sensor
The sensor is generally a temperature-compensated, piezoresistive silicon pressure sensor chosen for its excellent performance and long-term stability. Integral temperature compensation is provided over a range of 0–50 °C using laser-trimmed resistors. An additional laser-trimmed resistor is included to normalize pressure sensitivity variations by programming the gain of an external differential amplifier. This provides good sensitivity and long-term stability. The two ports of the sensor apply pressure to the same single transducer.
In a simplified view of the internal ports, the key element is the diaphragm, which is the sensing element itself. It is slightly convex in shape, and this affects the accuracy of the sensor in use.
The shape of the diaphragm matters because the sensor is calibrated to work with airflow in one direction. In normal operation this produces a positive reading on the display of the digital pressure meter. Applying pressure in the reverse direction can induce errors in the results, because the air pressure then tries to force the diaphragm to move in the opposite direction. The errors induced by this are small but can be significant, so it is preferable to ensure that the more positive pressure is always applied to the positive (+ve) port and the lower pressure to the negative (−ve) port for a normal gauge-pressure application. The same applies to measuring the difference between two vacuums: the larger vacuum should always be applied to the negative (−ve) port.
The measurement of pressure via the Wheatstone bridge works as follows.
The effective electrical model of the transducer comprises the sensing bridge together with a basic signal conditioning circuit. The pressure sensor is a fully active Wheatstone bridge which has been temperature compensated and offset adjusted by means of thick-film, laser-trimmed resistors. The excitation to the bridge is applied as a constant current. The low-level bridge output is at +O and −O, and the amplified span is set by the gain-programming resistor (r). The electrical design is microprocessor controlled, which allows for calibration and additional user functions such as scale selection, data hold, zero and filter functions, and a record function that stores and displays MAX/MIN.
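For a fully active bridge driven at constant current, with all four arms changing by ±ΔR under strain, the differential output reduces to I·ΔR, which the external amplifier then scales. The component values and function names below are assumptions for illustration, not taken from any particular device.

```python
def bridge_output_v(i_excitation_a, delta_r_ohm):
    """Differential output (V) of a fully active bridge at constant current I.

    With arms R+dR, R-dR arranged in opposition, the node voltages differ
    by exactly I*dR regardless of the nominal arm resistance R.
    """
    return i_excitation_a * delta_r_ohm

def amplified_v(v_bridge, gain):
    """Output after the external differential amplifier."""
    return v_bridge * gain

v = bridge_output_v(1e-3, 5.0)       # 1 mA excitation, 5 ohm strain change
print(round(v, 6))                   # 0.005 V at the bridge terminals
print(round(amplified_v(v, 200), 6))  # 1.0 V after a gain-of-200 amplifier
```

Constant-current drive is attractive here because the I·ΔR relation is independent of the bridge's nominal resistance, which itself drifts with temperature.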
Capacitive
Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure.
Magnetic
Measures the displacement of a diaphragm by means of changes in inductance (reluctance), LVDT, Hall effect, or by eddy current principle.
Piezoelectric
Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing mechanism due to pressure.
Optical
Uses the physical change of an optical fiber to detect strain due to applied pressure.
Potentiometric
Uses the motion of a wiper along a resistive mechanism to detect the strain caused by applied pressure.
Resonant
Uses the changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density, caused by applied pressure.
Thermal conductivity
Generally, as a real gas increases in density (which may indicate an increase in pressure), its ability to conduct heat increases. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or resistance thermometer (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10−3 Torr to 10 Torr, but their calibration is sensitive to the chemical composition of the gases being measured.
Pirani (one wire)
A Pirani gauge consists of a metal wire open to the pressure being measured. The wire is heated by a current flowing through it and cooled by the gas surrounding it. If the gas pressure is reduced, the cooling effect will decrease, hence the equilibrium temperature of the wire will increase. The resistance of the wire is a function of its temperature: by measuring the voltage across the wire and the current flowing through it, the resistance (and so the gas pressure) can be determined. This type of gauge was invented by Marcello Pirani.
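The readout chain described above can be sketched in two steps: Ohm's law gives the wire resistance from the measured voltage and current, and the metal's temperature coefficient of resistance converts that to a filament temperature. The platinum-like coefficient and reference values below are nominal assumptions.

```python
def wire_resistance(voltage_v, current_a):
    """Filament resistance from the measured voltage and current (Ohm's law)."""
    return voltage_v / current_a

def wire_temperature_c(r, r0=100.0, t0=20.0, alpha=0.00385):
    """Invert R = R0*(1 + alpha*(T - T0)) for a platinum-like filament."""
    return t0 + (r / r0 - 1.0) / alpha

r = wire_resistance(2.4, 0.020)      # 2.4 V across the wire at 20 mA
print(r)                             # 120.0 ohms
print(round(wire_temperature_c(r)))  # about 72 C equilibrium temperature
```

A lower gas pressure means less cooling, a hotter wire, and hence a higher resistance, which is the quantity the gauge electronics actually track.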
Two-wire
In two-wire gauges, one wire coil is used as a heater, and the other is used to measure temperature due to convection. Thermocouple gauges and thermistor gauges work in this manner using a thermocouple or thermistor, respectively, to measure the temperature of the heated wire.
Ionization gauge
Ionization gauges are the most sensitive gauges for very low pressures (also referred to as hard or high vacuum). They sense pressure indirectly by measuring the electrical ions produced when the gas is bombarded with electrons. Fewer ions will be produced by lower density gases. The calibration of an ion gauge is unstable and dependent on the nature of the gases being measured, which is not always known. They can be calibrated against a McLeod gauge which is much more stable and independent of gas chemistry.
Thermionic emission generates electrons, which collide with gas atoms and generate positive ions. The ions are attracted to a suitably biased electrode known as the collector. The current in the collector is proportional to the rate of ionization, which is a function of the pressure in the system. Hence, measuring the collector current gives the gas pressure. There are several sub-types of ionization gauge.
Useful range: 10−10 to 10−3 torr (roughly 10−8 to 10−1 Pa)
Most ion gauges come in two types: hot cathode and cold cathode. In the hot cathode version, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 Torr to 10−10 Torr. The principle behind the cold cathode version is the same, except that electrons are produced in the discharge of a high voltage. Cold cathode gauges are accurate from 10−2 Torr to 10−9 Torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement.
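The collector-current proportionality described earlier is commonly written I_collector = S · P · I_emission, where S is a gas-dependent gauge sensitivity; solving for pressure gives the sketch below. The sensitivity value used is a typical nitrogen figure, assumed here for illustration.

```python
def ion_gauge_pressure(i_collector_a, i_emission_a, sensitivity_per_torr=10.0):
    """Pressure (torr) inferred from collector and emission currents.

    sensitivity_per_torr is gas-dependent (a nominal nitrogen value is
    assumed), which is why ion gauge readings must be corrected for the
    gas species actually present.
    """
    return i_collector_a / (sensitivity_per_torr * i_emission_a)

# 4 pA of collector current at 4 mA of emission current:
print(format(ion_gauge_pressure(4e-12, 4e-3), ".1e"))  # 1.0e-10 torr
```

The picoampere collector currents at the bottom of the range are why an electrometer is needed for the readout, and why x-ray photocurrents of comparable size set the low-pressure limit.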
Hot cathode
A hot-cathode ionization gauge is composed mainly of three electrodes acting together as a triode, wherein the cathode is the filament. The three electrodes are a collector or plate, a filament, and a grid. The collector current is measured in picoamperes by an electrometer. The filament-to-ground voltage is usually at a potential of 30 volts, while the grid voltage is 180–210 volts DC, unless there is an optional electron-bombardment feature (heating of the grid), in which case the grid may be at a high potential of approximately 565 volts.
The most common ion gauge is the hot-cathode Bayard–Alpert gauge, with a small ion collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually the nude gauge is inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. Hot-cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. The measurements of a hot-cathode ionization gauge are always logarithmic.
Electrons emitted from the filament move several times in back-and-forth movements around the grid before finally entering the grid. During these movements, some electrons collide with a gaseous molecule to form a pair of an ion and an electron (electron ionization). The number of these ions is proportional to the gaseous molecule density multiplied by the electron current emitted from the filament, and these ions pour into the collector to form an ion current. Since the gaseous molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current.
The low-pressure sensitivity of hot-cathode gauges is limited by the photoelectric effect. Electrons hitting the grid produce x-rays that produce photoelectric noise in the ion collector. This limits the range of older hot-cathode gauges to 10−8 Torr and the Bayard–Alpert to about 10−10 Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type the ions are not attracted by a wire, but by an open cone. As the ions cannot decide which part of the cone to hit, they pass through the hole and form an ion beam. This ion beam can be passed on to a:
Faraday cup
Microchannel plate detector with Faraday cup
Quadrupole mass analyzer with Faraday cup
Quadrupole mass analyzer with microchannel plate detector and Faraday cup
Ion lens and acceleration voltage and directed at a target to form a sputter gun. In this case a valve lets gas into the grid-cage.
Cold cathode
There are two subtypes of cold-cathode ionization gauges: the Penning gauge (invented by Frans Michel Penning), and the inverted magnetron, also called a Redhead gauge. The major difference between the two is the position of the anode with respect to the cathode. Neither has a filament, and each may require a DC potential of about 4 kV for operation. Inverted magnetrons can measure down to 1×10−12 Torr.
Likewise, cold-cathode gauges may be reluctant to start at very low pressures, in that the near-absence of a gas makes it difficult to establish an electrode current - in particular in Penning gauges, which use an axially symmetric magnetic field to create path lengths for electrons that are of the order of metres. In ambient air, suitable ion-pairs are ubiquitously formed by cosmic radiation; in a Penning gauge, design features are used to ease the set-up of a discharge path. For example, the electrode of a Penning gauge is usually finely tapered to facilitate the field emission of electrons.
Maintenance cycles of cold cathode gauges are, in general, measured in years, depending on the gas type and pressure that they are operated in. Using a cold cathode gauge in gases with substantial organic components, such as pump oil fractions, can result in the growth of delicate carbon films and shards within the gauge that eventually either short-circuit the electrodes of the gauge or impede the generation of a discharge path.
Dynamic transients
When fluid flows are not in equilibrium, local pressures may be higher or lower than the average pressure in a medium. These disturbances propagate from their source as longitudinal pressure variations along the path of propagation. This is also called sound. Sound pressure is the instantaneous local pressure deviation from the average pressure caused by a sound wave. Sound pressure can be measured using a microphone in air and a hydrophone in water. The effective sound pressure is the root mean square of the instantaneous sound pressure over a given interval of time. Sound pressures are normally small and are often expressed in units of microbar.
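The "effective sound pressure" defined above is the root mean square of the instantaneous deviation from the average pressure over an interval. A minimal sketch using synthetic samples (the sampled sine wave is an assumed test signal):

```python
import math

def rms(samples):
    """Root mean square of a sequence of instantaneous pressure deviations."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full cycle of a sine wave of amplitude 1 Pa, evenly sampled:
n = 1000
samples = [math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(rms(samples), 3))  # 0.707, i.e. amplitude divided by sqrt(2)
```

For a pure tone the RMS pressure is the amplitude divided by √2, which is the value sound level meters effectively report before converting to decibels.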
Frequency response of pressure sensors
Resonance
Calibration and standards
The American Society of Mechanical Engineers (ASME) has developed two separate and distinct standards on pressure measurement, B40.100 and PTC 19.2. B40.100 provides guidelines on Pressure Indicated Dial Type and Pressure Digital Indicating Gauges, Diaphragm Seals, Snubbers, and Pressure Limiter Valves. PTC 19.2 provides instructions and guidance for the accurate determination of pressure values in support of the ASME Performance Test Codes. The choice of method, instruments, required calculations, and corrections to be applied depends on the purpose of the measurement, the allowable uncertainty, and the characteristics of the equipment being tested.
The methods for pressure measurement and the protocols used for data transmission are also provided. Guidance is given for setting up the instrumentation and determining the uncertainty of the measurement. Information regarding the instrument type, design, applicable pressure range, accuracy, output, and relative cost is provided. Information is also provided on pressure-measuring devices that are used in field environments, i.e., piston gauges, manometers, and low-absolute-pressure (vacuum) instruments.
These methods are designed to assist in the evaluation of measurement uncertainty based on current technology and engineering knowledge, taking into account published instrumentation specifications and measurement and application techniques. This Supplement provides guidance in the use of methods to establish the pressure-measurement uncertainty.
European (CEN) Standard
EN 472: Pressure gauge – Vocabulary.
EN 837-1: Pressure gauges. Bourdon tube pressure gauges. Dimensions, metrology, requirements and testing.
EN 837-2: Pressure gauges. Selection and installation recommendations for pressure gauges.
EN 837-3: Pressure gauges. Diaphragm and capsule pressure gauges. Dimensions, metrology, requirements and testing.
US ASME Standards
B40.100-2013: Pressure Gauges and Gauge Attachments.
PTC 19.2-2010: Performance Test Code for Pressure Measurement.
See also
Air core gauge
Deadweight tester
Force gauge
Gauge
Isoteniscope
Piezometer
Sphygmomanometer
Time pressure gauge
Tire-pressure gauge
Vacuum engineering
References
Sources
External links
Home Made Manometer
Manometer
Underwater diving safety equipment
Vacuum |
19953 | https://en.wikipedia.org/wiki/Medieval%20dance | Medieval dance | Sources for an understanding of dance in Europe in the Middle Ages are limited and fragmentary, being composed of some interesting depictions in paintings and illuminations, a few musical examples of what may be dances, and scattered allusions in literary texts. The first detailed descriptions of dancing only date from 1451 in Italy, which is after the start of the Renaissance in Western Europe.
Carole
The most documented form of secular dance during the Middle Ages is the carol, also called the "carole" or "carola", known from the 12th and 13th centuries in Western Europe in rural and court settings. It consisted of a group of dancers holding hands, usually in a circle, singing in a leader-and-refrain style while dancing. No surviving lyrics or music for the carol have been identified. In northern France, other terms for this type of dance included "ronde" and its diminutives "rondet", "rondel", and "rondelet", from which the more modern music term "rondeau" derives. In the German-speaking areas, this same type of choral dance was known as "reigen".
Mullally, in his book on the carole, makes the case that the dance, at least in France, was done in a closed circle with the dancers, usually men and women interspersed, holding hands. He adduces evidence that the general progression of the dance was to the left (clockwise) and that the steps were probably very simple: a step to the left with the left foot, followed by a step with the right foot closing to the left foot.
France
Chretien de Troyes
Some of the earliest mentions of the carol occur in the works of the French poet Chrétien de Troyes in his series of Arthurian romances. In the wedding scene in Erec and Enide (about 1170)
Puceles carolent et dancent,
Trestuit de joie feire tancent
(lines 2047–2048)
"Maidens performed rounds and other dances, each trying to outdo the other in showing their joy"
In The Knight of the Cart, (probably late 1170s) at a meadow where there are knights and ladies, various games are played while:
(lines 1656–1659)
"[S]ome others were playing at childhood games – rounds, dances and reels, singing, tumbling, and leaping"
In what is probably Chretien's last work, Perceval, the Story of the Grail, probably written 1181–1191, we find:
"Men and women danced rounds through every street and square"
and later at a court setting:
"The queen ... had all her maidens join hands together to dance and begin the merry-making. In his honour they began their singing, dances, and rounds"
Italy
Dante (1265–1321) has a few minor references to dance in his works but a more substantive description of the round dance with song from Bologna comes from Giovanni del Virgilio (floruit 1319–1327).
Later in the 14th century Giovanni Boccaccio (1313–1375) shows us the "carola" in Florence in the Decameron (about 1350–1353) which has several passages describing men and women dancing to their own singing or accompanied by musicians. Boccaccio also uses two other terms for contemporary dances, ridda and ballonchio, both of which refer to round dances with singing.
Approximately contemporary with the Decameron are a series of frescos in Siena by Ambrogio Lorenzetti painted about 1338–40, one of which shows a group of women doing a "bridge" figure while accompanied by another woman playing the tambourine.
England
In a life of Saint Dunstan composed about 1000, the author tells how Dunstan, going into a church, found maidens dancing in a ring and singing a hymn. According to the Oxford English Dictionary (1933) the term "carol" was first used in England for this type of circle dance accompanied by singing in manuscripts dating to as early as 1300. The word was used as both a noun and a verb and the usage of carol for a dance form persisted well into the 16th century. One of the earliest references is in Robert of Brunne's early 14th century Handlyng Synne (Handling Sin) where it occurs as a verb.
Other chain dances
Circle or line dances also existed in other parts of Europe outside England, France and Italy where the term carol was best known. These dances were of the same type with dancers hand-in-hand and a leader who sang the ballad.
Scandinavia
In Denmark, old ballads mention a closed ring dance which can open into a chain dance. A fresco in Ørslev church in Zealand from about 1400 shows nine people, men and women, dancing in a line. The leader and some others in the chain carry bouquets of flowers. Dances could be for men and women, or for men alone, or women alone. In the case of women's dances, however, there may have been a man who acted as the leader. Two dances specifically named in the Danish ballads which appear to be line dances of this type are The Beggar Dance and The Lucky Dance, which may have been a dance for women. A modern version of these medieval chains is seen in the Faroese chain dance, the earliest account of which goes back only to the 17th century.
In Sweden too, medieval songs often mentioned dancing. A long chain was formed, with the leader singing the verses and setting the time while the other dancers joined in the chorus. These "Long Dances" have lasted into modern times in Sweden.
A similar type of song dance may have existed in Norway in the Middle Ages as well, but no historical accounts have been found.
Central Europe
The same dance in Germany was called "Reigen" and may have originated from devotional dances at early Christian festivals. Dancing around the church or a fire was frequently denounced by church authorities, which only underscores how popular it was. There are records of church and civic officials in various German towns forbidding dancing and singing from the 8th to the 10th centuries. Once again, in singing processions, the leader provided the verse and the other dancers supplied the chorus. The minnesinger Neidhart von Reuental, who lived in the first half of the 13th century, wrote several songs for dancing, some of which use the term "reigen".
In southern Tyrol, at Runkelstein Castle, a series of frescos was executed in the last years of the 14th century. One of the frescos depicts Elisabeth of Poland, Queen of Hungary leading a chain dance.
Circle dances were also found in the area that is today the Czech Republic. Descriptions and illustrations of dancing can be found in church registers, chronicles and the 15th century writings of Bohuslav Hasištejnský z Lobkovic. Dancing was primarily done around trees on the village green but special houses for dancing appear from the 14th century. In Poland as well the earliest village dances were in circles or lines accompanied by the singing or clapping of the participants.
The Balkans
The present-day folk dances in the Balkans consist of dancers linked together in a hand or shoulder hold in an open or closed circle or a line. The basic round dance goes by many names in the various countries of the region: choros, kolo, oro, horo or hora. The modern couple dance so common in western and northern Europe has only made a few inroads into the Balkan dance repertory.
Chain dances of a similar type to these modern dance forms have been documented from the medieval Balkans. Tens of thousands of medieval tombstones called "Stećci" are found in Bosnia and Hercegovina and neighboring areas in Montenegro, Serbia and Croatia. They date from the end of the 12th century to the 16th century. Many of the stones bear inscription and figures, several of which have been interpreted as dancers in a ring or line dance. These mostly date to the 14th and 15th centuries. Usually men and women are portrayed dancing together holding hands at shoulder level but occasionally the groups consist of only one sex.
Further south in Macedonia, near the town of Zletovo, Lesnovo monastery, originally built in the 11th century, was renovated in the middle of the 14th century and a series of murals were painted. One of these shows a group of young men linking arms in a round dance. They are accompanied by two musicians, one playing the kanun while the other beats on a long drum.
There is also some documentary evidence from the Dalmatian coast area of what is now Croatia. An anonymous chronicle from 1344 exhorts the people of the city of Zadar to sing and dance circle dances for a festival, while in the 14th and 15th centuries, authorities in Dubrovnik forbade circle dances and secular songs on the cathedral grounds. Another early reference comes from the area of present-day Bulgaria in a manuscript of a 14th-century sermon which calls chain dances "devilish and damned."
At a later period there are the accounts of two western European travelers to Constantinople, the capital of the Ottoman Empire. Salomon Schweigger (1551–1622) was a German preacher who traveled in the entourage of Jochim von Sinzendorf, Ambassador to Constantinople for Rudolf II in 1577. He describes the events at a Greek wedding:
da schrencken sie die Arm uebereinander / machen ein Ring / gehen also im Ring herumb / mit dem Fuessen hart tredent und stampffend / einer singt vor / welchem die andern alle nachfolgen.
"then they joined arms one upon the other, made a circle, went round the circle, with their feet stepping hard and stamping; one sang first, with the others all following after."
Another traveler, the German pharmacist Reinhold Lubenau, was in Constantinople in November 1588 and reports on a Greek wedding in these terms:
eine Companei, oft von zehen oder mehr Perschonen, Grichen herfuhr auf den Platz, fasten einander bei den Henden, machten einen runden Kreis und traten balde hinder sich, balde fur sich, balde gingen sie herumb, sungen grichisch drein, balde trampelden sie starck mit den Fussen auf die Erde.
"a company of Greeks, often of ten or more persons, stepped forth to the open place, took each other by the hand, made a round circle, and now stepped backward, now forward, sometimes went around, singing in Greek the while, sometimes stamped strongly on the ground with their feet."
Estampie
If the story is true that troubadour Raimbaut de Vaqueiras (about 1150–1207) wrote the famous Provençal song Kalenda Maya to fit the tune of an estampie that he heard two jongleurs play, then the history of the estampie extends back to the 12th century. The only musical examples actually identified as "estampie" or "istanpita" occur in two 14th-century manuscripts. The same manuscripts also contain other pieces named "danse real" or other dance names. These are similar in musical structure to the estampies, but opinion is divided as to whether they should be considered the same form.
In addition to these instrumental music compositions, there are also mentions of the estampie in various literary sources from the 13th and 14th centuries. One of these as "stampenie" is found in Gottfried von Strassburg's Tristan from 1210 in a catalog of Tristan's accomplishments:
(lines 2293–2295)
"he also sang most excellently subtle airs, 'chansons', 'refloits', and 'estampies'"
Later, in a description of Isolde:
(lines 8058–8062)
"She fiddled her 'estampie', her lays, and her strange tunes in the French style, about Sanze and St Denis"
A century and a half later in the poem La Prison amoreuse (1372–73) by French chronicler and poet Jean Froissart (c. 1337–1405), we find:
La estoient li menestrel
Qui s'acquittoient bien et bel
A piper et tout de novel
Unes danses teles qu'il sorent,
Et si trestot que cessé orent
Les estampies qu'il batoient,
Cil et celes qui s'esbatoient
Au danser sans gueres atendre
Commencierent leurs mains a tendre
Pour caroler.
"Here are all the minstrels rare Who now acquit themselves so fair In playing on their pipes whate'er The dances be that one may do. So soon as they have glided through The estampies of this sort Youths and maidens who disport Themselves in dancing now begin With scarce a wait to join hands in The choral".
Opinion is divided as to whether the estampie was actually a dance or simply early instrumental music. Sachs believes the strong rhythm of the music, a derivation of the name from a term meaning "to stamp", and the quotation from the Froissart poem above definitely label the estampie as a dance. However, others stress the complex music in some examples as being uncharacteristic of dance melodies and interpret Froissart's poem to mean that the dancing begins with the carol. There is also debate on the derivation of the word "estampie". In any case, no description of dance steps or figures for the estampie is known.
Couple dances
According to German dance historian Aenne Goldschmidt, the oldest notice of a couple dance comes from the southern German Latin romance Ruodlieb probably composed in the early to mid-11th century. The dance is done at a wedding feast and is described in the translation by Edwin Zeydel as follows:
the young man arose and the young lady too.
He turns in the manner of a falcon and she like a swallow.
But when they came together, they passed one another again quickly,
he seemed to move (glide) along, she to float.
Another literary mention comes from a later period in Germany with a description of couple dancing in Wolfram von Eschenbach's epic poem Parzival, usually dated to the beginning of the 13th century. The scene occurs on manuscript page 639, the host is Gawain, the tables from the meal have been removed and musicians have been recruited:
Now give your thanks to the host that he did not restrain them in their joy. Many a fair lady danced there in his presence.
The knights mingled freely with the host of ladies, pairing off now with one, now with another, and the dance was a lovely sight.
Together they advanced to the attack on sorrow. Often a handsome knight was seen dancing with two ladies, one on either hand.
Eschenbach also remarks that while many of the noblemen present were good fiddlers, they knew only the old style dances, not the many new dances from Thuringia.
The early 14th century Codex Manesse from Heidelberg has miniatures of many Minnesang poets of the period. The portrait of Heinrich von Stretelingen shows him engaged in a "courtly pair dance" while the miniature of Hiltbolt von Schwangau depicts him in a trio dance with two ladies, one in each hand, with a fiddler providing the music.
See also
Danse Macabre (Dance of Death)
Notes
Further reading
Mullally, Robert. The Carole: A Study of a Medieval Dance. Farnham, Surrey: Ashgate, 2011.
External links
Public domain music recording
Dance
European court festivities |
19955 | https://en.wikipedia.org/wiki/Megatokyo | Megatokyo | Megatokyo (also stylized as MegaTokyo) is an English-language webcomic created by Fred Gallagher and Rodney Caston. Megatokyo debuted on August 14, 2000, and has been written and illustrated solely by Gallagher since July 17, 2002. Gallagher's style of writing and illustration is heavily influenced by Japanese manga. Megatokyo is freely available on its official website. The stated schedule for updates is Tuesday and Friday, but strips are typically posted just once or twice a month on non-specific days (in the beginning a three-per-week schedule of Monday, Wednesday, and Friday was the goal). Recently, this schedule has slipped further due to the health issues of Sarah Gallagher (Seraphim), Fred's wife. Megatokyo was also published in book format by CMX, although the first three volumes were published by Dark Horse. For February 2005, sales of the comic's third printed volume were ranked third on BookScan's list of graphic novels sold in bookstores, then the best showing for an original English-language manga.
Set in a fictional version of Tokyo, Megatokyo portrays the adventures of Piro, a young fan of anime and manga, and his friend Largo, an American video game enthusiast. The comic often parodies and comments on the archetypes and clichés of anime, manga, dating sims, arcade and video games, occasionally making direct references to real-world works. Megatokyo originally emphasized humor, with continuity of the story a subsidiary concern. Over time, it focused more on developing a complex plot and the personalities of its characters. This transition was due primarily to Gallagher's increasing control over the comic, which led to Caston choosing to leave the project. Megatokyo has received praise from such sources as The New York Times, while Gallagher's changes to the comic have been criticized by sources including Websnark.
History
Megatokyo began publication as a joint project between Fred Gallagher and Rodney Caston, along with a few internet acquaintances. Gallagher and Caston later became business partners, as well. According to Gallagher, the comic's first two strips were drawn in reaction to Caston being "convinced that he and I could do [a webcomic] ... [and] bothering me incessantly about it", without any planning or pre-determined storyline. The comic's title was derived from an Internet domain owned by Caston, which had hosted a short-lived gaming news site maintained by Caston before the comic's creation. With Caston co-writing the comic's scripts and Gallagher supplying its artwork, the comic's popularity quickly increased, eventually reaching levels comparable to those of such popular webcomics as Penny Arcade and PvP. According to Gallagher, Megatokyo's popularity was not intended, as the project was originally an experiment to help him improve his writing and illustrating skills for his future project, Warmth.
In May 2002, Caston sold his ownership of the title to Gallagher, who has managed the comic on his own since then. In October of the same year, after Gallagher was laid off from his day job as an architect, he took up producing the comic as a full time profession. Caston's departure from Megatokyo was not fully explained at the time. Initially, Gallagher and Caston only briefly mentioned the split, with Gallagher publicly announcing Caston's departure on June 17, 2002. On January 15, 2005, Gallagher explained his view of the reasoning behind the split in response to a comment made by Scott Kurtz of PvP, in which he suggested that Gallagher had stolen ownership of Megatokyo from Caston. Calling Kurtz's claim "mean spirited", Gallagher responded:
"While things were good at first, over time we found that we were not working well together creatively. There is no fault in this, it happens. I've never blamed Rodney for this creative 'falling out' nor do I blame myself. Not all creative relationships click, ours didn't in the long run."
Four days later, Caston posted his view of the development on his website:
"After this he approached me and said either I would sell him my ownership of MegaTokyo or he would simply stop doing it entirely, and we'd divide up the company's assets and end it all.
This was right before the MT was to go into print form, and I really wanted to see it make it into print, rather [than] die on the vine."
In May 2011, it was announced that Endgames (a gameworld existing within Megatokyo) was being revamped in a light novel format, with a story written by webfiction author Thomas Knapp, with four light novels planned. A short story "Behind the Masque" was also announced, and released on Amazon's Kindle Store on June 10, 2011.
Production
Megatokyo is usually hand-drawn in pencil by Fred Gallagher, without any digital or physical "inking". Inking was originally planned, but dropped as Gallagher decided it was unfeasible. Megatokyo's first strips were created by roughly sketching on large sheets of paper, followed by tracing, scanning, digital clean-up of the traced comics with Adobe Photoshop, and final touches in Adobe Illustrator to achieve a finished product. Gallagher has stated that tracing was necessary because his sketches were not neat enough to use before tracing. Because of the tracing necessary, these comics regularly took six to eight hours to complete. As the comic progressed, Gallagher became able to draw "cleaner" comics without rough lines and tracing lines, and was able to abandon the tracing step. Gallagher believes "that this eventually led to better looking and more expressive comics".
Megatokyo's early strips were laid out in four square panels per strip, in a two-by-two square array – a formatting choice made as a compromise between the horizontal layout of American comic strips and the vertical layout of Japanese comic strips. The limitations of this format became apparent during the first year of Megatokyo's publication, and in the spring of 2001, the comic switched to a manga-style, free-form panel layout. This format allowed for both large, detailed drawings and small, abstract progressions, as based on the needs of the script. Gallagher has commented that his drawing speed had increased since the comic's beginning, and with four panel comics taking much less time to produce, it "made sense in some sort of twisted, masochistic way, that [he] could use that extra time to draw more for each comic".
Megatokyo's earliest strips were drawn entirely on single sheets of paper. Following these, Gallagher began drawing the comic's panels separately and assembling them in Adobe Illustrator, allowing him to draw more detailed frames. This changed during Megatokyo's eighth chapter, with Gallagher returning to drawing entire comics on single sheets of paper. Gallagher stated that this change allowed for more differentiated layouts, in addition to allowing him a better sense of momentum during comic creation.
The strip is currently drawn on inkjet paper in pencil, the text and speech being added later with Adobe Photoshop or Illustrator. In March 2009 he began Fredarting, a streaming live video feed of the comic being drawn.
Gallagher occasionally has guest artists participate in the production of the comic, including Mohammad F. Haque of Applegeeks.
Funding
Megatokyo has had several sources of funding during its production. In its early years, it was largely funded by Gallagher and Caston's full time jobs, with the additional support of banner advertisements. A store connected to ThinkGeek was launched during October 2000 in order to sell Megatokyo merchandise, and, in turn, help fund the comic. On August 1, 2004, this store was replaced by "Megagear", an independent online store created by Fred Gallagher and his wife, Sarah, to be used solely by Megatokyo, although it now also offers Applegeeks and Angerdog merchandise.
Gallagher has emphasized that Megatokyo will continue to remain on the Internet free of charge, and that releasing it in book form is simply another way for the comic to reach readers, as opposed to replacing its webcomic counterpart entirely. Additionally, he has stated that he is against micropayments, as he believes that word of mouth and public attention are powerful property builders, and that a "pay-per-click" system would only dampen their effectiveness. He has claimed that free access supported by such attention is a superior option to direct monetary compensation, and that human nature is opposed to micropayments.
Themes and structure
Much of Megatokyo's early humor consists of jokes related to the video game subculture, as well as culture-clash issues. In these early strips, the comic progressed at a pace which Gallagher has called "haphazard", often interrupted by purely punchline-driven installments. As Gallagher gradually gained more control over Megatokyo's production, the comic began to gain more similarities to the Japanese shōjo manga that Gallagher enjoys. Following Gallagher's complete takeover of Megatokyo, the comic's thematic relation to Japanese manga continued to grow.
The comic features characteristics borrowed from anime and manga archetypes, often parodying the medium's clichés. Examples include Junpei, a ninja who becomes Largo's apprentice; Rent-a-zillas, giant monsters based on Godzilla; the Tokyo Police Cataclysm Division, which fights the monsters with giant robots and supervises the systematic destruction and reconstruction of predesignated areas of the city; fan service; a Japanese school girl, Yuki, who has also started being a magical girl in recent comics; and Ping, a robot girl. In addition, Dom and Ed, hitmen employed by Sega and Sony, respectively, are associated with a Japanese stereotype that all Americans are heavily armed.
Characters in Megatokyo usually speak Japanese, although some speak English or English-based l33t. Typically, when a character is speaking Japanese, this is signified by enclosing English text between angle brackets (<>). Not every character speaks every language, so occasionally characters are unable to understand one another. In several scenes, a character's speech is written entirely in rōmaji Japanese to emphasize this.
Megatokyo is divided into chapters. Chapter 0, which contains all of the comic's early phase, covers a time span in the comic of about six weeks. Each of the subsequent chapters chronicles the events of a single day. Chapter 0 was originally not given a title, although the book version retroactively dubbed it "Relax, we understand j00." Between the chapters, and occasionally referenced in the main comic, are a number of omake.
Main characters
The authors of Megatokyo chose to use "Surname–Given Name" order for characters of Japanese origin. The same format has been maintained here so as to avoid confusion.
Piro
Piro, the protagonist, is an author surrogate of Fred Gallagher. Gallagher has stated that Piro is an idealized version of himself when he was in college. As a character, he is socially inept and frequently depressed. His design was originally conceived as a visual parody of the character Ruri Hoshino, from the Martian Successor Nadesico anime series. His name is derived from Gallagher's online nickname, which was in turn taken from Makoto Sawatari's cat in the Japanese visual novel Kanon.
In the story, Piro has extreme difficulty understanding Megatokyo's female characters, making him for the most part ignorant of the feelings that the character Nanasawa Kimiko has for him, though he has become much more aware of her attraction as the series progressed. Gallagher has commented that Piro is the focal point of emotional damage, while his friend, Largo, takes the physical damage in the comic.
Largo
Largo is the comic's secondary protagonist, and the comic version of co-creator Rodney Caston. An impulsive alcoholic whose speech is frequently rendered in l33t, he serves as one of the primary sources of comic relief. A technologically gifted character, he is obsessed with altering devices, often with hazardous results. Gallagher designed Largo to be the major recipient of the comic's physical damage. Largo's name comes from Caston's online nickname, which refers to the villain from Bubblegum Crisis. For various reasons (including fire and battle damage) he often ends up wearing very little clothing. Largo seems to have awkwardly blundered into a relatively successful relationship with Hayasaka Erika at the current time in the comic.
Hayasaka Erika
is a strong-willed, cynical, and sometimes violent character. At the time of the story, she is a former popular Japanese idol (singer) and voice actress who has been out of the spotlight for three years, though she still possesses a considerable fanbase. Erika's past relationship troubles, combined with exposure to swarms of fanboys, have caused her to adopt a negative outlook on life. Gallagher has implied that her personality was loosely based on the tsundere (tough girl) stereotype often seen in anime and manga.
Nanasawa Kimiko
is a Japanese girl who previously worked as a waitress at an Anna Miller's restaurant, and is Piro's romantic interest. At the current point in the story, she is a voice actress for the possibly-failing Lockart game "Sight", playing the main heroine, Kannazuki Kotone. Kimiko is a kind and soft-spoken character, though she is prone to mood-swings, and often causes herself embarrassment by saying things she does not mean. Gallagher has commented that Kimiko was the only female character not based entirely on anime stereotypes.
Tohya Miho
is an enigmatic and manipulative young goth girl. She is drawn to resemble a "Gothic Lolita", and is often described as "darkly cute," with Gallagher occasionally describing her as a "perkigoth." Miho often acts strangely compared to the comic's other characters, and regularly accomplishes abnormal feats, such as leaping inhuman distances or perching herself atop telephone poles. Despite these displays of ability, it is hinted that Miho has problems with her health. Little is revealed in the comic about Miho's past or motivations, although Gallagher states that these will eventually be explained. Largo believes that Miho is the queen of the undead, and is the cause of the zombie invasion of Tokyo. It has been hinted that she is a magical girl who may have some past connection with the zombies. She is apparently killed in a robotic beam attack by Ed, but nine days later is found in the hospital reading and eating with no obvious signs of physical damage. It is also possible that she is some type of game prototype or archetype.
Plot
Megatokyo's story begins when Piro and Largo fly to Tokyo after an incident at the Electronic Entertainment Expo (E3). Piro has the proper paperwork; Largo must beat the ninja Junpei at a video game to enter. After a spending spree, the pair are stranded without enough money to buy plane tickets home, forcing them to live with Tsubasa, a Japanese friend of Piro's. When Tsubasa suddenly departs for America to seek his "first true love", the protagonists are forced out of the apartment. Tsubasa leaves Ping, a robot girl PlayStation 2 accessory, in their care. This leads to old friends of Piro and Largo showing up later. The two are shadow operatives for video game companies, Ed (Sony) and Dom (SEGA).
At one point, Piro, confronted with girl troubles, visits the local bookstore to "research"—look in the vast shelves of shoujo manga for a solution to his problem. A spunky schoolgirl, Sonoda Yuki, and her friends, Asako and Mami, see him sitting amidst piles of read manga, and ask him what he is doing. Piro, flustered, runs away, accidentally leaving behind his bookbag and sketchbook.
After their eviction, Piro begins work at "Megagamers", a store specializing in anime, manga, and video games. His employer allows him and Largo to live in the apartment above the store. Largo is mistaken for the new English teacher at a local school, where he takes on the alias "Great Teacher Largo" and instructs his students in l33t, video games, and computers. Yuki's father, Inspector Sonoda Masamichi of the "Tokyo Police Cataclysm Division" (TPCD), hires Largo after Largo manipulates Ping into stopping a rampaging monster, the drunken turtle Gameru.
As Largo is working at the local high school, Piro encounters Yuki again while working at Megagamers, when she returns his bookbag and sketchbook, scribbled all over with comments about his drawings. She then, to his consternation, asks if he would give her drawing lessons. Piro, flustered, agrees, and promptly forgets about them.
Earlier in the story, Piro had seen Nanasawa Kimiko at an Anna Miller's restaurant, where she is a waitress, after Tsubasa brought him and Largo there. Later on, Piro encounters Kimiko outside a train station, where she is worrying aloud that she will miss an audition because she has forgotten her money and railcard. Piro hands her his own railcard and walks off before she can refuse his offer. This event causes Kimiko to develop an idealized vision of her benefactor, an image which is shattered the next time they meet. Despite this, she gradually develops feelings for Piro, though she is too shy to admit them. Later on in the story, Kimiko's outburst on a radio talk show causes her to suddenly rise to idol status. Angered by the hosts' derisive comments about fanboys, she comes to the defense of her audience, immediately and unintentionally securing their obsessive adoration. Later, her new horde of fanboys find out where she works and flock to the restaurant, obsessively trying to get pictures up her skirt. Piro works undercover as a busboy to get rid of all cameras. The scene eventually builds to a climax, in which Kimiko shouts at the fanboys and lifts her skirt in defiance, and they take photographs. Piro, provoked by her outburst into actively defending her, threatens the fanboy crowd, and collects all of their memory cards with the photos. On the way back from the restaurant, Kimiko is suffering from the aftermath of the scene and lashes out at Piro on the subway, which causes him to walk off.
Meanwhile, Largo develops a relationship with Hayasaka Erika, Piro's coworker at Megagamers, who shares a house with Kimiko. As with Piro and Kimiko, Largo and Erika meet by coincidence early in the story. It is later revealed that Erika is a former pop idol who caused a major scene and then disappeared from the public eye after her fiancé left her. When she is rediscovered by her fans, Largo helps thwart a fanboy horde, but not well enough to escape being dismissed by the TPCD for it. He then offers to help Erika deal with her "vulnerabilities in the digital plane". Erika insists on protecting herself, so Largo instructs her in computer-building. This leads to more of a relationship than Largo can handle, partly because he insists that all computer building be done in the nude, or as close to it as possible, to avoid static discharge ruining components, and partly because his behavior, crude though it may appear, impresses Erika in many ways.
The enigmatic Tohya Miho frequently meddles in the lives of the protagonists. Miho knows Piro and Largo from the Endgames MMORPG, prior to the events of Megatokyo. She abused a hidden statistic in the game to gain control of nearly all of the game's player characters, but was ultimately defeated by Piro and Largo. In the comic, Miho becomes close friends with Ping, influencing Ping's relationship with Piro and pitting Ping against Largo in video game battles. Miho is also involved in Erika's backstory: she manipulated Erika's fans after Erika's disappearance, an effort that ended badly, leaving Miho hospitalized and the TPCD cleaning up the aftermath. Most of the exact details of what happened are left to the readers' imagination, as are her current motivations and ultimate goal. Miho and many of the events surrounding her involve a club in Harajuku, the Cave of Evil (CoE).
After getting yelled at for retaining her waitress job, Kimiko quits her voice acting job and goes home to find Erika assembling a new computer in her undergarments. Not long after Erika tells Kimiko to strip, Piro comes by, and she tells him to get undressed as well. While Erika and Piro talk about her, Kimiko, who hid when Piro showed up, runs out of the apartment. Kimiko runs into Ping, who wanted to ask Piro why, after an explosion at school, she had started to cry uncontrollably. They encounter Largo at the store, who explains what went wrong, although no one understands him until Piro comes in and translates. Ping is relieved to learn that she won't shut down, and Kimiko hugs Piro and apologizes for her actions. Largo leaves for Erika's apartment after she calls looking for help. That night, while Piro and Kimiko fall asleep watching TV, Erika, who has finished the computer with Largo's help, tries to seduce Largo, but this frightens him and he flees home. The next morning, after Kimiko departs, Piro finds out that she quit her voice acting job and tries to find her.
Kimiko and Miho end up in the same diner: Ed has sent an attack robot (a Kill-Bot) after Miho, since she has disrupted his attempts to destroy Ping; Miho is there trying to contact Piro; Kimiko is talking with Erika; and Dom is also there to talk with Kimiko. After rescuing both herself and Kimiko from the Kill-Bot and the chaos at the diner, Miho talks things over with Kimiko. Miho talks to Piro on her phone and argues with him, and Piro and Kimiko then discuss the call as the two women leave the area. Dom follows and tries to coerce Kimiko into joining SEGA for protection from fans, but she refuses. Drained, Kimiko has Miho finish the phone conversation with Piro. Piro then encounters a group who found Kimiko's cell phone and other belongings after she and Miho escaped the diner. The group wants to help Piro get together with Kimiko, partly out of guilt for trying to snap a picture up her skirt. Piro and the group set out for a press conference Kimiko is attending for the voice acting project, Sight. Besides all of the other fans going to the event, a planned zombie outbreak occurs in the area. Miho, who helped Kimiko get ready for the event and accompanied her to it, later calls the zombies off for unexplained reasons through an unexplained mechanism.
Largo and Yuki, who has by now been revealed to be a magical girl like her mother Meimi, steal a Rent-a-Zilla to fight the zombie outbreak. Largo leaves Yuki in order to help Piro get to Kimiko. Unfortunately, the Rent-a-Zilla is bitten by zombies and turns into one itself, and the TPCD captures it. Yuki protects it from the TPCD, teleports it out of the area, and adopts it as a pet in miniaturized form, all much to her father's chagrin.
After the event, Erika, Largo, Kimiko and Piro are reunited, and they talk a bit with Miho, who has shown up again after storming out following an argument with Kenji earlier. Miho declines an offer to eat with the group and wanders off thinking about games and Largo and Piro. She is shown walking amongst the zombies and then in Ed's gun-sights, and in the center of an attack by a number of Ed's Kill-Bots.
Over the next nine days, Piro and Kimiko make up and Kimiko returns to both of her jobs, though the two see little of each other. Largo and Erika likewise continue their relationship, seeing each other more often, including going to dinner with the Sonoda family, as the inspector's brother was Erika's fiancé. Kimiko is attempting to get Piro work as an artist on Sight, which, unbeknownst to them, is now being funded by Dom. Ping is concerned about the whereabouts of Miho, who has not been seen during this time, but Piro is still upset about all that has happened and somewhat evasively refuses to help directly. Ping and Junko, another of Largo's students and a former friend of Miho, work towards finding her, and Yuki and Kobayashi Yutaka also become involved in the attempt. That night, Piro and Kimiko discuss Miho and Endgames, unaware that Yuki is listening. This leads Yuki to take Piro's powerless laptop and leave, believing him to still be in love with Miho and that the device might hold clues to finding her. Kimiko and Piro work on his portfolio for Sight, say goodnight, and part ways. He returns to his apartment, but Kimiko goes to the CoE club using a pass Miho gave her early in the story. At the club, Dom mockingly advises her, Yuki slips past her unnoticed, and she unexpectedly meets an old friend, Komugiko. During all this, Piro has left his apartment after looking at his sketchbook and a drawing of Miho; his current location is unknown.
Aside from Kimiko, concurrent events have led almost every main character to converge on the club for various reasons involving Miho, or in support of others involved. Ed, attempting to destroy Ping, fights with Largo after the club's staff maneuver Ed and Ping into the protective radius of ex-idol Erika. Yuki and Yutaka power on Piro's laptop; Yuki reads the old chat logs between Piro and Miho and follows instructions Miho had once given him. Going to a "hidden-in-plain-sight" hospital room, she finds Miho alive and well, although seemingly in a weakened state. After a heated argument in which Miho goads her, Yuki forcibly moves Miho to the club. Shortly after the two arrive in the middle of everyone, most of the denizens fall into trance-like states while others fight or stand confused about what to do next, and Miho appears to be collapsing. On instructions from Erika, Largo finds his Largo-Phone and uses it with the club's sound system to knock out power in the area immediately around the club. During these events, Piro has gone to visit Miho at the "hospital" room, where he discovers that she is missing. Following the blackout, Largo, Erika, and Miho board a train, and Miho decides to return home. However, a large crowd has blocked her path home, apparently waiting for someone's return.
The next morning, Piro finds himself in jail, where he is interrogated by police about Miho's disappearance. He is able to leave by paying a suspiciously low bail of about US$100, using a 10,000-yen bill that has been folded into an origami 'zilla and left in the cell. Piro walks back home, where he finds Miho sleeping on a beanbag in the apartment. The two then work out some of the confusion between them, which reveals several background events. She explains the Analogue Support Facility as a sort of safehouse, where she was able to come and go as she wanted. Since Ping, in her extreme attempt to find Miho, had posted masses of pictures, videos, and information on the internet, people are now using that material to "build a 'real' me", as Miho puts it. At one point, Kimiko calls from the studio, updating Piro on his artwork and recounting how she and the others found Miho the previous night and how chaotic it was. Largo and Erika, who are riding on the roof of a train in Miyagi Prefecture, also call during the conversation. After a short conversation with Largo and Erika on the phone, and a bit more with Miho, Piro instructs her to stay in the apartment until they can figure out what to do. Junko and Ping are shown leaving for school, with Junko seemingly taking Ed's shotguns from the previous night with her.
After receiving a phone call from Yutaka, whom Masamichi initially disapproves of, Yuki, who has not changed clothing since the events of the previous chapter, leaves her house, grabs him, and takes him to a rooftop to explain things; Yutaka had been questioned by Asako and Mami. She goes over everything, even why she referred to herself as a "monster", which her friends had previously overheard and misunderstood. Realizing that Miho is the cause of this mess, Yutaka indirectly vows revenge, but Yuki stops him. Yutaka goes anyway and, in front of Megagamers, meets his brother, who has been tracking Miho to the store since the previous night. Yutaka's brother belongs to a group of Nanasawa fans who plan to intervene, remind Piro who his true love is, and get rid of Miho. However, Dom's van is blocking the store's entrance. Though Yuki protests to the group against intervening, Dom, who is unknown to them, carries out his own method of intervention anyway and forces Piro to choose between Nanasawa and Miho. It is unknown whether Dom knows who Miho is, but Miho, in disguise, overhears the conversation and forces Piro to briefly wear a hat. At the same time, Yuki, deciding that she can wait no longer, steals Dom's van and guns and rushes into the store with Yutaka in tow. Seeing this, Miho grabs Piro and rushes upstairs, discarding the hat in the process. Yuki collides with the hat and a presumed explosion occurs, stalling Yuki and Yutaka. Miho and Piro don cosplay outfits as disguises, escape, and make their way to the local bath house. Just before Yuki grabs Yutaka again, Dom, now trapped under a pile of rubble, expresses his condolences to Yutaka, which he does not understand. The pair quickly follow Miho and Piro and wait for them to leave the bath house.
Books
Megatokyo was first published in print by Studio Ironcat, a partnership announced in September 2002. Following this, the first book, a compilation of Megatokyo strips under the title Megatokyo Volume One: Chapter Zero, was released by Studio Ironcat in January 2003. According to Gallagher, Studio Ironcat was unable to meet demand for the book, due to problems the company was facing at the time. On July 7, 2003, Gallagher announced that Ironcat would not continue to publish Megatokyo in book form. This was followed by an announcement on August 27, 2003 that Dark Horse Comics would publish Megatokyo Volume 2 and future collected volumes, including a revised edition of Megatokyo Volume 1. The comic once more changed publishers in February 2006, moving from Dark Horse Comics to the CMX Manga imprint of DC Comics. The comic then transferred to CMX's parent Wildstorm, with its last volume published in July 2010.
CMX, along with Wildstorm, closed down in 2010. Former publisher Dark Horse regained the rights to the series and planned to release it in omnibus format in January 2013, but the release never materialized.
Six volumes are available for purchase: volumes 1 through 3 from Dark Horse, volumes 4 and 5 from CMX/DC, and volume 6 from Wildstorm. The books have also been translated into German, Italian, French, and Polish. In July 2004, Megatokyo was the tenth best-selling manga property in the United States, and during the week ending February 20, 2005, volume 3 ranked third in the Nielsen BookScan figures, which was not only its highest ranking to date but also the highest monthly rank for an original English-language manga title.
In July 2007, Kodansha announced that in 2008 it intends to publish Megatokyo in a Japanese-language edition, (in a silver slipcased box as part of Kodansha Box editions, a new manga line started in November 2006). Depending on reader response, Kodansha hoped to subsequently publish the entire Megatokyo book series. The first volume was released in Japan on May 7, 2009.
Reception
The artwork and characterizations of Megatokyo have received praise from such publications as The New York Times and Comics Bulletin. Many critics praise Megatokyo's character designs and pencil work, rendered entirely in grayscale; conversely, the comic has been criticized for perceived uniformity and simplicity in the designs of its peripheral characters, which some have found confusing and difficult to tell apart.
Eric Burns of Websnark found the comic to suffer from "incredibly slow pacing" (only about two months of in-universe time have elapsed), unclear direction or resolution for plot threads, a lack of official character profiles and plot summaries for the uninitiated, and an erratic update schedule. Burns also harshly criticized the often non-canonical filler material Gallagher employs to prevent the comic's front page from becoming stagnant, such as Shirt Guy Dom, a punchline-driven stick figure comic strip written and illustrated by Megatokyo editor Dominic Nguyen. After Gallagher took on Megatokyo as a full-time occupation, some critics complained that updates should be more frequent than when he worked on the comic part time. Update schedule issues prompted Gallagher to install an update progress bar for readers awaiting the next installment of the comic; however, it has since been removed, as it was itself often not updated.
IGN called Megatokyo's fans "some of the most patient and forgiving in the webcomic world." During an interview, Gallagher stated that Megatokyo fans "always [tell him] they are patient and find that the final comics are always worth the wait," but he feels as though he "[has] a commitment to [his] readers and to [himself] to deliver the best comics [he] can, and to do it on schedule," finally saying that nothing would make him happier than "[getting] a better handle on the time it takes to create each page." Upon missing deadlines, Gallagher often makes self-disparaging comments. Poking fun at this, Jerry "Tycho" Holkins of Penny Arcade has claimed to have "gotten on famously" with Gallagher, ever since he "figured out that [Gallagher] legitimately detests himself and is not hoisting some kind of glamour."
While Megatokyo was originally presented as a slapstick comedy, it began focusing more on the romantic relationships between its characters after Caston's departure from the project. As a result, some fans, preferring the comic's gag-a-day format, have claimed its quality was superior when Caston was writing it. Additionally, it has been said that, without Caston's input, Largo's antics appear contrived. Comics Bulletin regards Megatokyo's characters as convincingly portrayed, commenting that "the reader truly feels connected to the characters, their romantic hijinks, and their wacky misadventures with the personal touches supplied by the author." Likewise, Anime News Network has praised the personal tone in which the comic is written, stating that much of its appeal is a result of the "friendly and casual feeling of a fan-made production."
Gallagher states early in Megatokyo Volume 1 that he and Caston "didn't want the humor ... to rely too heavily on what might be considered 'obscure knowledge.'" An article in The New York Times insists that such scenarios were unavoidable, commenting that the comic "sits at the intersection of several streams of obscure knowledge," including "gaming and hacking; manga ... the boom in Web comics over the past few years; and comics themselves." The article also held that "Gallagher doesn't mean to be exclusive ... he graciously offers translation of the strip's later occasional lapses into l33t ... [and] explains why the characters are occasionally dressed in knickers or as rabbits." The newspaper went on to argue that "The pleasure of a story like Megatokyo comes not in its novelistic coherence, but in its loose ranginess."

Megatokyo was nominated in at least one category of the Web Cartoonist's Choice Awards every year from 2001 through 2007. It won Best Comic in 2002, as well as Best Writing, Best Serial Comic, and Best Dramatic Comic. The largest number of nominations it has received in one year is 14 in 2003, when it won Outstanding Environment Design. The series tied with Svetlana Chmakova's Dramacon for the 2007 Best Continuing OEL Manga award.
References
External links
Fredart, other art by Fred Gallagher.
Rcaston.com, blog of Rodney Caston.
Megatokyo article at Comixpedia, a webcomic wiki, via the Wayback Machine
Megatokyo discussion on Webcomicsreview.com via the Wayback Machine
Fredarting, live drawing of the comic and spinoffs
Medieval music

Medieval music encompasses the sacred and secular music of Western Europe during the Middle Ages, from approximately the 6th to the 15th century. It is the first and longest major era of Western classical music and is followed by Renaissance music; the two eras comprise what musicologists generally term early music, preceding the common practice period. Following the traditional division of the Middle Ages, medieval music can be divided into Early (500–1150), High (1000–1300), and Late (1300–1400) medieval music.
Medieval music includes liturgical music used for the church as well as secular, non-religious music. It comprises solely vocal music, such as Gregorian chant and choral music (music for a group of singers); solely instrumental music; and music that uses both voices and instruments (typically with the instruments accompanying the voices). Gregorian chant was sung by monks during the Catholic Mass. The Mass is a reenactment of Christ's Last Supper, intended to provide a spiritual connection between man and God, and part of this connection was established through music.
During the medieval period the foundation was laid for the music notation and music theory practices that would shape Western music into the norms of the common practice period, the era of shared music-writing practices encompassing the Baroque era (1600–1750), Classical era (1750–1820) and Romantic era (1800–1910). The most significant of these is the development of a comprehensive music notational system which enabled composers to write out their song melodies and instrumental pieces on parchment or paper. Prior to the development of musical notation, songs and pieces had to be learned "by ear", from one person who knew a song to another person. This greatly limited how many people could be taught new music and how widely music could spread to other regions or countries. The development of music notation made it easier to disseminate (spread) songs and musical pieces to a larger number of people and to a wider geographic area. However, the theoretical advances, particularly in regard to rhythm—the timing of notes—and polyphony—using multiple, interweaving melodies at the same time—are equally important to the development of Western music.
Overview
Instruments
Many instruments used to perform medieval music still exist in the 21st century, but in different and typically more technologically developed forms. The flute was made of wood in the medieval era rather than silver or other metal, and could be made as a side-blown or end-blown instrument. While modern orchestral flutes are usually made of metal and have complex key mechanisms and airtight pads, medieval flutes had holes that the performer had to cover with the fingers (as with the recorder). The recorder was made of wood during the medieval era, and despite the fact that in the 21st century it may be made of synthetic materials such as plastic, it has more or less retained its past form. The gemshorn is similar to the recorder as it has finger holes on its front, though it is actually a member of the ocarina family. One of the flute's predecessors, the pan flute, was popular in medieval times, and is possibly of Hellenic origin. This instrument's pipes were made of wood, and were graduated in length to produce different pitches.
Medieval music used many plucked string instruments like the lute, a fretted instrument with a pear-shaped hollow body which is the predecessor to the modern guitar. Other plucked stringed instruments included the mandore, gittern, citole and psaltery. The dulcimer, similar in structure to the psaltery and zither, was originally plucked, but musicians began to strike it with hammers in the 14th century after the arrival of new metal technology that made metal strings possible.
The bowed lyra of the Byzantine Empire was the first recorded European bowed string instrument. Like the modern violin, a performer produced sound by moving a bow with tensioned hair over tensioned strings. The Persian geographer Ibn Khurradadhbih of the 9th century (d. 911) cited the Byzantine lyra, in his lexicographical discussion of instruments as a bowed instrument equivalent to the Arab rabāb and typical instrument of the Byzantines along with the urghun (organ), shilyani (probably a type of harp or lyre) and the salandj (probably a bagpipe). The hurdy-gurdy was (and still is) a mechanical violin using a rosined wooden wheel attached to a crank to "bow" its strings. Instruments without sound boxes like the jew's harp were also popular. Early versions of the pipe organ, fiddle (or vielle), and a precursor to the modern trombone (called the sackbut) were used.
Genres
Medieval music was composed and, for some vocal and instrumental music, improvised for many different music genres (styles of music). Medieval music created for sacred (church) and secular (non-religious) use was typically written by composers, except for some sacred vocal and secular instrumental music which was improvised (made up on the spot). During the earlier medieval period, the liturgical genre, predominantly Gregorian chant performed by monks, was monophonic ("monophonic" means a single melodic line, without a harmony part or instrumental accompaniment). Polyphonic genres, in which multiple independent melodic lines are performed simultaneously, began to develop during the high medieval era, becoming prevalent by the later 13th and early 14th century. The development of polyphonic forms, with different voices interweaving, is often associated with the late medieval Ars nova style which flourished in the 1300s. The Ars nova, which means "new art", was an innovative style of writing music that served as a key transition from the medieval music style to the more expressive styles of the post-1400 Renaissance music era.
The earliest innovations upon monophonic plainchant were heterophonic. "Heterophony" is the performance of the same melody by two different performers at the same time, in which each performer slightly alters the ornaments she or he is using. Another simple form of heterophony is for singers to sing the same shape of melody, but with one person singing the melody and a second person singing the melody at a higher or lower pitch. Organum, for example, expanded upon plainchant melody using an accompanying line, sung at a fixed interval (often a perfect fifth or perfect fourth away from the main melody), with a resulting alternation between a simple form of polyphony and monophony. The principles of organum date back to an anonymous 9th century tract, the Musica enchiriadis, which established the tradition of duplicating a preexisting plainchant in parallel motion at the interval of an octave, a fifth or a fourth.
Of greater sophistication was the motet, which developed from the clausula genre of medieval plainchant. The motet would become the most popular form of medieval polyphony. While early motets were liturgical or sacred (designed for use in a church service), by the end of the thirteenth century the genre had expanded to include secular topics, such as courtly love. Courtly love was the respectful veneration of a lady from afar by an amorous, noble man. Many popular motets had lyrics about a man's love and adoration of beautiful, noble and much-admired woman.
The motet continued to develop during the Renaissance music era (after 1400). During the Renaissance, the Italian secular genre of the Madrigal became popular. Similar to the polyphonic character of the motet, madrigals featured greater fluidity and motion in the leading melody line. The madrigal form also gave rise to polyphonic canons (songs in which multiple singers sing the same melody, but starting at different times), especially in Italy, where they were called caccie. These were three-part secular pieces, which featured the two higher voices in canon, with an underlying instrumental long-note accompaniment.
Finally, purely instrumental music also developed during this period, both in the context of a growing theatrical tradition and for court performances for the aristocracy. Dance music, often improvised around familiar tropes, was the largest purely instrumental genre. The secular Ballata, which became very popular in Trecento Italy, had its origins, for instance, in medieval instrumental dance music.
Notation
During the medieval period the foundation was laid for the notational and theoretical practices that would shape Western music into the norms that developed during the common practice era. The most obvious of these is the development of a comprehensive music notational system; however the theoretical advances, particularly in regard to rhythm and polyphony, are equally important to the development of Western music.
The earliest medieval music did not have any kind of notational system. The tunes were primarily monophonic (a single melody without accompaniment) and transmitted by oral tradition. As Rome tried to centralize the various liturgies and establish the Roman rite as the primary church tradition, the need to transmit these chant melodies effectively across vast distances became pressing. So long as music could only be taught to people "by ear", the church's ability to get different regions to sing the same melodies was limited, since each new person would have to spend time with someone who already knew a song and learn it "by ear". The first step toward fixing this problem came with the introduction of various signs written above the chant texts to indicate the direction of pitch movement, called neumes.
The origin of neumes is unclear and subject to some debate; however, most scholars agree that their closest ancestors are the classic Greek and Roman grammatical signs that indicated important points of declamation by recording the rise and fall of the voice. The two basic signs of the classical grammarians were the acutus, /, indicating a raising of the voice, and the gravis, \, indicating a lowering of the voice. A singer reading a chant text with neume markings would be able to get a general sense of whether the melody line went up in pitch, stayed the same, or went down in pitch. For a singer who already knew a song, seeing the written neume markings above the text could help to jog his or her memory about how the melody went. However, a singer reading a chant text with neume markings would not be able to sight read a song which he or she had never heard sung before.
These neumes eventually evolved into the basic symbols for neumatic notation, the virga (or "rod") which indicates a higher note and still looked like the acutus from which it came; and the punctum (or "dot") which indicates a lower note and, as the name suggests, reduced the gravis symbol to a point. Thus the acutus and the gravis could be combined to represent graphical vocal inflections on the syllable. This kind of notation seems to have developed no earlier than the eighth century, but by the ninth it was firmly established as the primary method of musical notation. The basic notation of the virga and the punctum remained the symbols for individual notes, but other neumes soon developed which showed several notes joined together. These new neumes—called ligatures—are essentially combinations of the two original signs.
The first music notation was the use of dots over the lyrics to a chant, with some dots being higher or lower, giving the reader a general sense of the direction of the melody. However, this form of notation only served as a memory aid for a singer who already knew the melody. This basic neumatic notation could only specify the number of notes and whether they moved up or down. There was no way to indicate exact pitch, any rhythm, or even the starting note. These limitations are further indication that the neumes were developed as tools to support the practice of oral tradition, rather than to supplant it. However, even though it started as a mere memory aid, the worth of having more specific notation soon became evident.
The next development in musical notation was "heighted neumes", in which neumes were carefully placed at different heights in relation to each other. This allowed the neumes to give a rough indication of the size of a given interval as well as the direction. This quickly led to one or two lines, each representing a particular note, being placed on the music with all of the neumes relating to the earlier ones. At first, these lines had no particular meaning and instead had a letter placed at the beginning indicating which note was represented. However, the lines indicating middle C and the F a fifth below slowly became most common. Having been at first merely scratched on the parchment, the lines now were drawn in two different colored inks: usually red for F, and yellow or green for C. This was the beginning of the musical staff. The completion of the four-line staff is usually credited to Guido d'Arezzo (c. 1000–1050), one of the most important musical theorists of the Middle Ages. While older sources attribute the development of the staff to Guido, some modern scholars suggest that he acted more as a codifier of a system that was already being developed. Either way, this new notation allowed a singer to learn pieces completely unknown to him in a much shorter amount of time. However, even though chant notation had progressed in many ways, one fundamental problem remained: rhythm. The neumatic notational system, even in its fully developed state, did not clearly define any kind of rhythm for the singing of notes.
Music theory
The music theory of the medieval period saw several advances over previous practice both in regard to tonal material, texture, and rhythm.
Rhythm
Concerning rhythm, this period had several dramatic changes in both its conception and notation. During the early medieval period there was no method to notate rhythm, and thus the rhythmical practice of this early music is subject to debate among scholars. The first kind of written rhythmic system developed during the 13th century and was based on a series of modes. This rhythmic plan was codified by the music theorist Johannes de Garlandia, author of the De Mensurabili Musica (c.1250), the treatise which defined and most completely elucidated these rhythmic modes. In his treatise Johannes de Garlandia describes six species of mode, or six different ways in which longs and breves can be arranged. Each mode establishes a rhythmic pattern in beats (or tempora) within a common unit of three tempora (a perfectio) that is repeated again and again. Furthermore, notation without text is based on chains of ligatures (the characteristic notations by which groups of notes are bound to one another).
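The six modes can be sketched in modern terms as patterns of durations measured in tempora. This is an illustrative model only, using the standard modern interpretation of the modal durations (breve = 1 tempus; long = 2 or 3 tempora, with an "altered" breve of 2 tempora in modes 3 and 4), not Garlandia's own notation:

```python
# The six rhythmic modes as patterns of longs (L) and breves (B),
# with durations given in tempora. Every pattern fills a whole
# number of three-tempora perfections (perfectiones).
MODES = {
    1: [2, 1],      # L B   (trochaic)
    2: [1, 2],      # B L   (iambic)
    3: [3, 1, 2],   # L B B (dactylic; second breve altered to 2)
    4: [1, 2, 3],   # B B L (anapestic; second breve altered to 2)
    5: [3, 3],      # L L   (spondaic; each long fills a perfection)
    6: [1, 1, 1],   # B B B (tribrachic)
}

for mode, durations in MODES.items():
    total = sum(durations)
    # Each repeating pattern must occupy whole perfections of 3 tempora.
    assert total % 3 == 0
    print(f"mode {mode}: {durations} -> {total // 3} perfection(s)")
```

Running the sketch confirms that each modal pattern, whatever its internal shape, repeats within the common ternary unit described above.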
The rhythmic mode can generally be determined by the patterns of ligatures used. Once a rhythmic mode had been assigned to a melodic line, there was generally little deviation from that mode, although rhythmic adjustments could be indicated by changes in the expected pattern of ligatures, even to the extent of changing to another rhythmic mode. The next step forward concerning rhythm came from the German theorist Franco of Cologne. In his treatise Ars cantus mensurabilis ("The Art of Mensurable Music"), written around 1280, he describes a system of notation in which differently shaped notes have entirely different rhythmic values. This is a striking change from the earlier system of de Garlandia. Whereas before the length of the individual note could only be gathered from the mode itself, this new inverted relationship made the mode dependent upon—and determined by—the individual notes or figurae that have incontrovertible durational values, an innovation which had a massive impact on the subsequent history of European music. Most of the surviving notated music of the 13th century uses the rhythmic modes as defined by Garlandia. The next step in the evolution of rhythm came after the turn of the 13th century with the development of the Ars Nova style.
The theorist who is most well recognized in regard to this new style is Philippe de Vitry, famous for writing the Ars Nova ("New Art") treatise around 1320. This treatise on music gave its name to the style of this entire era. In some ways the modern system of rhythmic notation began with Vitry, who completely broke free from the older idea of the rhythmic modes. The notational predecessors of modern time meters also originate in the Ars Nova. This new style was clearly built upon the work of Franco of Cologne. In Franco's system, the relationship between a breve and a semibreve (that is, a half breve) was equivalent to that between a breve and a long: and, since for him modus was always perfect (grouped in threes), the tempus or beat was also inherently perfect and therefore contained three semibreves. Sometimes the context of the mode would require a group of only two semibreves; however, these two semibreves would always be one of normal length and one of double length, thereby taking the same space of time, and thus preserving the perfect subdivision of the tempus. This ternary division held for all note values. In contrast, the Ars Nova period introduced two important changes: the first was an even smaller subdivision of notes (semibreves could now be divided into minims), and the second was the development of "mensuration."
Mensurations could be combined in various manners to produce metrical groupings. These groupings of mensurations are the precursors of simple and compound meter. By the time of Ars Nova, the perfect division of the tempus was not the only option, as duple divisions became more accepted. For Vitry the breve could be divided, for an entire composition, or a section of one, into groups of two or three smaller semibreves. This way, the tempus (the term that came to denote the division of the breve) could be either "perfect" (tempus perfectum), with ternary subdivision, or "imperfect" (tempus imperfectum), with binary subdivision. In a similar fashion, the semibreve's division (termed prolation) could consist of three minima (prolatio perfecta, or major prolation) or two minima (prolatio imperfecta, or minor prolation) and, at the higher level, the long's division (called modus) could consist of three or two breves (modus perfectus, or perfect mode, and modus imperfectus, or imperfect mode, respectively). Vitry took this a step further by indicating the proper division of a given piece at the beginning through the use of a "mensuration sign", equivalent to our modern "time signature".
Tempus perfectum was indicated by a circle, while tempus imperfectum was denoted by a half-circle (the current common-time symbol, used as an alternative for the 4/4 time signature, is actually a holdover of this half-circle, not a letter C as an abbreviation for "common time", as popularly believed). While many of these innovations are ascribed to Vitry, and somewhat present in the Ars Nova treatise, it was a contemporary—and personal acquaintance—of de Vitry, named Johannes de Muris (or Jehan des Murs) who offered the most comprehensive and systematic treatment of the new mensural innovations of the Ars Nova (for a brief explanation of the mensural notation in general, see the article Renaissance music). Many scholars, citing a lack of positive attributory evidence, now consider "Vitry's" treatise to be anonymous, but this does not diminish its importance for the history of rhythmic notation. However, this makes de Muris the first definitely identifiable scholar to accept and explain the mensural system; he can be said to have done for it what Garlandia did for the rhythmic modes.
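The arithmetic of the mensural hierarchy described above can be illustrated with a short sketch. The function name is invented for illustration; the terminology follows the treatises: each level divides its note value into two ("imperfect") or three ("perfect") parts, so the four combinations of tempus and prolation yield 4, 6, 6, or 9 minims per breve:

```python
from itertools import product

def minims_per_breve(tempus: int, prolatio: int) -> int:
    """tempus = semibreves per breve (2 or 3);
    prolatio = minims per semibreve (2 or 3)."""
    return tempus * prolatio

# Enumerate the four tempus/prolation combinations.
for tempus, prolatio in product((3, 2), repeat=2):
    t = "perfectum" if tempus == 3 else "imperfectum"
    p = "major" if prolatio == 3 else "minor"
    n = minims_per_breve(tempus, prolatio)
    print(f"tempus {t}, {p} prolation: {n} minims per breve")
```

The same multiplication extends upward through modus (breves per long), which is why these combinations are the ancestors of modern simple and compound meters: tempus perfectum with major prolation, for instance, corresponds to nine minims per breve, the forerunner of 9/8.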
For the duration of the medieval period, most music would be composed primarily in perfect tempus, with special effects created by sections of imperfect tempus; there is considerable controversy among musicologists as to whether such sections were performed with a breve of equal length or whether it changed, and if so, at what proportion. This Ars Nova style remained the primary rhythmical system until the highly syncopated works of the Ars subtilior at the end of the 14th century, characterized by extremes of notational and rhythmic complexity. This sub-genre pushed the rhythmic freedom provided by Ars Nova to its limits, with some compositions having different voices written in different mensurations simultaneously. The rhythmic complexity that was realized in this music is comparable to that in the 20th century.
Polyphony
Of equal importance to the overall history of western music theory were the textural changes that came with the advent of polyphony. This practice shaped western music into the harmonically dominated music that we know today. The first accounts of this textural development were found in two anonymous yet widely circulated treatises on music, the Musica enchiriadis and the Scolica enchiriadis. These texts are dated to sometime within the last half of the ninth century. The treatises describe a technique that seemed already to be well established in practice. This early polyphony is based on three simple and three compound intervals. The first group comprises fourths, fifths, and octaves; while the second group has octave-plus-fourths, octave-plus-fifths, and double octaves. This new practice is given the name organum by the author of the treatises. Organum can further be classified depending on the time period in which it was written. The early organum as described in the enchiriadis can be termed "strict organum". Strict organum can, in turn, be subdivided into two types: diapente (organum at the interval of a fifth) and diatesseron (organum at the interval of a fourth). However, both of these kinds of strict organum had problems with the musical rules of the time. If either of them paralleled an original chant for too long (depending on the mode) a tritone would result.
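The tritone problem can be demonstrated schematically: transposing a chant down a diatonic fourth pairs B with F, an augmented-fourth clash. This is a modern illustration using semitone arithmetic, which medieval theorists of course did not use:

```python
# White-note (diatonic) pitches with their semitone offsets from C.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def organum_at_fourth(chant):
    """Pair each chant note with the diatonic note three scale
    steps below it, and measure the resulting interval in semitones."""
    result = []
    for note in chant:
        below = SCALE[(SCALE.index(note) - 3) % 7]
        interval = (SEMITONES[note] - SEMITONES[below]) % 12
        result.append((note, below, interval))
    return result

for note, below, interval in organum_at_fourth(["C", "D", "F", "B", "A"]):
    warning = "  <- tritone!" if interval == 6 else ""
    print(f"{below} under {note}: {interval} semitones{warning}")
```

Every pairing yields a perfect fourth of five semitones except F under B, which spans six semitones: the tritone that strict organum could not avoid when it paralleled the chant for too long.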
This problem was somewhat overcome with the use of a second type of organum. This second style of organum was called "free organum". Its distinguishing factor is that the parts did not have to move only in parallel motion, but could also move in oblique, or contrary motion. This made it much easier to avoid the dreaded tritone. The final style of organum that developed was known as "melismatic organum", which was a rather dramatic departure from the rest of the polyphonic music up to this point. This new style was not note against note, but was rather one sustained line accompanied by a florid melismatic line. This final kind of organum was also incorporated by the most famous polyphonic composer of this time—Léonin. He united this style with measured discant passages, which used the rhythmic modes to create the pinnacle of organum composition. This final stage of organum is sometimes referred to as Notre Dame school of polyphony, since that was where Léonin (and his student Pérotin) were stationed. Furthermore, this kind of polyphony influenced all subsequent styles, with the later polyphonic genre of the motet starting as a trope of existing Notre Dame organa.
Another important element of medieval music theory was the system by which pitches were arranged and understood. During the Middle Ages, this systematic arrangement of a series of whole steps and half steps, what we now call a scale, was known as a mode. The modal system worked like the scales of today, in that it provided the rules and material for melodic writing. The eight church modes are: Dorian, Hypodorian, Phrygian, Hypophrygian, Lydian, Hypolydian, Mixolydian, and Hypomixolydian. Much of the information concerning these modes, as well as the practical application of them, was codified in the 11th century by the theorist Johannes Afflighemensis. In his work he describes three defining elements to each mode: the final (or finalis), the reciting tone (tenor or confinalis), and the range (or ambitus). The finalis is the tone that serves as the focal point for the mode and, as the name suggests, is almost always used as the final tone. The reciting tone is the tone that serves as the primary focal point in the melody (particularly internally). It is generally also the tone most often repeated in the piece, and finally the range delimits the upper and lower tones for a given mode. The eight modes can be further divided into four categories based on their final (finalis).
Medieval theorists called these pairs maneriae and labeled them according to the Greek ordinal numbers. Those modes that have d, e, f, and g as their final are put into the groups protus, deuterus, tritus, and tetrardus respectively. These can then be divided further based on whether the mode is "authentic" or "plagal." These distinctions deal with the range of the mode in relation to the final. The authentic modes have a range that is about an octave (one tone above or below is allowed) and start on the final, whereas the plagal modes, while still covering about an octave, start a perfect fourth below the authentic. Another interesting aspect of the modal system is the universal allowance for altering B to B♭ no matter what the mode. The inclusion of this tone has several uses, but one that seems particularly common is in order to avoid melodic difficulties caused, once again, by the tritone.
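The grouping of the eight modes into the four maneriae, with authentic and plagal ranges, can be modeled as follows. This is a simplified sketch that ignores the permitted extra tone above or below the octave:

```python
# The four maneriae, their finals, and their authentic/plagal mode names.
FINALS = {"protus": "D", "deuterus": "E", "tritus": "F", "tetrardus": "G"}
NAMES = {
    "protus":    ("Dorian", "Hypodorian"),
    "deuterus":  ("Phrygian", "Hypophrygian"),
    "tritus":    ("Lydian", "Hypolydian"),
    "tetrardus": ("Mixolydian", "Hypomixolydian"),
}
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def ambitus(final, plagal=False):
    """Octave span starting from the final (authentic) or from a
    perfect fourth below it (plagal)."""
    start = SCALE.index(final) - (3 if plagal else 0)
    return [SCALE[(start + i) % 7] for i in range(8)]

for maneria, final in FINALS.items():
    authentic, plagal = NAMES[maneria]
    print(f"{maneria} (final {final}):")
    print(f"  {authentic:15s} {'-'.join(ambitus(final))}")
    print(f"  {plagal:15s} {'-'.join(ambitus(final, plagal=True))}")
```

For example, Dorian spans D up to D, while Hypodorian spans A up to A around the same final D, matching the authentic/plagal relationship described above.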
These ecclesiastical modes, although they have Greek names, have little relationship to the modes as set out by Greek theorists. Rather, most of the terminology seems to be a misappropriation on the part of the medieval theorists. Although the church modes have no relation to the ancient Greek modes, the overabundance of Greek terminology does point to an interesting possible origin in the liturgical melodies of the Byzantine tradition. This system is called octoechos and is also divided into eight categories, called echoi.
For specific medieval music theorists, see also: Isidore of Seville, Aurelian of Réôme, Odo of Cluny, Guido of Arezzo, Hermannus Contractus, Johannes Cotto (Johannes Afflighemensis), Johannes de Muris, Franco of Cologne, Johannes de Garlandia (Johannes Gallicus), Anonymous IV, Marchetto da Padova (Marchettus of Padua), Jacques of Liège, Johannes de Grocheo, Petrus de Cruce (Pierre de la Croix), and Philippe de Vitry.
Early medieval music (500–1000)
Early chant traditions
Chant (or plainsong) is a monophonic sacred form (a single, unaccompanied melodic line) which represents the earliest known music of the Christian church. Chant developed separately in several European centres. Although the most important were Rome, Hispania, Gaul, Milan, and Ireland, there were others as well. These styles were all developed to support the regional liturgies used when celebrating the Mass. Each area developed its own chant and rules for celebration. In Spain and Portugal, Mozarabic chant was used and shows the influence of North African music. The Mozarabic liturgy even survived through Muslim rule, though this was an isolated strand and this music was later suppressed in an attempt to enforce conformity on the entire liturgy. In Milan, Ambrosian chant, named after St. Ambrose, was the standard, while Beneventan chant developed around Benevento, another Italian liturgical center. Gallican chant was used in Gaul, and Celtic chant in Ireland and Great Britain.
Around AD 1011, the Roman Catholic Church wanted to standardize the Mass and chant across its empire. At this time, Rome was the religious centre of western Europe, and Paris was the political centre. The standardization effort consisted mainly of combining these two (Roman and Gallican) regional liturgies. Pope Gregory I (540–604) and Charlemagne (742–814) sent trained singers throughout the Holy Roman Empire (800/962–1806) to teach this new form of chant. This body of chant became known as Gregorian Chant, named after Pope Gregory I. By the 12th and 13th centuries, Gregorian chant had superseded all the other Western chant traditions, with the exception of the Ambrosian chant in Milan and the Mozarabic chant in a few specially designated Spanish chapels. Hildegard von Bingen (1098–1179) was one of the earliest known female composers. She wrote many monophonic works for the Catholic Church, almost all of them for female voices.
Early polyphony: organum
Around the end of the 9th century, singers in monasteries such as St. Gall in Switzerland began experimenting with adding another part to the chant, generally a voice in parallel motion, singing mostly in perfect fourths or fifths above the original tune (see interval). This development is called organum and represents the beginnings of counterpoint and, ultimately, harmony. Over the next several centuries, organum developed in several ways.
The most significant of these developments was the creation of "florid organum" around 1100, sometimes known as the school of St. Martial (named after a monastery in south-central France, which contains the best-preserved manuscript of this repertory). In "florid organum" the original tune would be sung in long notes while an accompanying voice would sing many notes to each one of the original, often in a highly elaborate fashion, all the while emphasizing the perfect consonances (fourths, fifths and octaves), as in the earlier organa. Later developments of organum occurred in England, where the interval of the third was particularly favoured, and where organa were likely improvised against an existing chant melody, and at Notre Dame in Paris, which was to be the centre of musical creative activity throughout the thirteenth century.
Much of the music from the early medieval period is anonymous. Some of the names may have been poets and lyric writers, and the tunes for which they wrote words may have been composed by others. Attribution of monophonic music of the medieval period is not always reliable. Surviving manuscripts from this period include the Musica Enchiriadis, Codex Calixtinus of Santiago de Compostela, the Magnus Liber, and the Winchester Troper. For information about specific composers or poets writing during the early medieval period, see Pope Gregory I, St. Godric, Hildegard of Bingen, Hucbald, Notker Balbulus, Odo of Arezzo, Odo of Cluny, and Tutilo.
Liturgical drama
Another musical tradition of Europe originating during the early Middle Ages was the liturgical drama.
Liturgical drama developed possibly in the 10th century from the tropes—poetic embellishments of the liturgical texts. One of the tropes, the so-called Quem Quaeritis, belonging to the liturgy of Easter morning, developed into a short play around the year 950. The oldest surviving written source is the Winchester Troper. Around the year 1000 it was sung widely in Northern Europe.
Soon afterwards, a similar Christmas play was developed, musically and textually following the Easter one, and other plays followed.
There is a controversy among musicologists as to the instrumental accompaniment of such plays, given that the stage directions, very elaborate and precise in other respects, do not request any participation of instruments. These dramas were performed by monks, nuns and priests. In contrast to secular plays, which were spoken, the liturgical drama was always sung. Many have been preserved sufficiently to allow modern reconstruction and performance (for example the Play of Daniel, which has been recently recorded at least ten times).
High medieval music (1000–1300)
Goliards
The Goliards were itinerant poet-musicians of Europe from the tenth to the middle of the thirteenth century. Most were scholars or ecclesiastics, and they wrote and sang in Latin. Although many of the poems have survived, very little of the music has. They were possibly influential—even decisively so—on the troubadour-trouvère tradition which was to follow. Most of their poetry is secular and, while some of the songs celebrate religious ideals, others are frankly profane, dealing with drunkenness, debauchery and lechery. One of the most important extant sources of Goliard chansons is the Carmina Burana.
Ars antiqua
The flowering of the Notre Dame school of polyphony from around 1150 to 1250 corresponded to the equally impressive achievements in Gothic architecture: indeed the centre of activity was at the cathedral of Notre Dame itself. Sometimes the music of this period is called the Parisian school, or Parisian organum, and represents the beginning of what is conventionally known as Ars antiqua. This was the period in which rhythmic notation first appeared in western music, mainly a context-based method of rhythmic notation known as the rhythmic modes.
This was also the period in which concepts of formal structure developed which were attentive to proportion, texture, and architectural effect. Composers of the period alternated florid and discant organum (more note-against-note, as opposed to the succession of many-note melismas against long-held notes found in the florid type), and created several new musical forms: clausulae, which were melismatic sections of organa extracted and fitted with new words and further musical elaboration; conductus, which were songs for one or more voices to be sung rhythmically, most likely in a procession of some sort; and tropes, which were additions of new words and sometimes new music to sections of older chant. All of these genres save one were based upon chant; that is, one of the voices (there were usually three, though sometimes four), nearly always the lowest (the tenor at this point), sang a chant melody, though with freely composed note-lengths, over which the other voices sang organum. The exception to this method was the conductus, a two-voice composition that was freely composed in its entirety.
The motet, one of the most important musical forms of the high Middle Ages and Renaissance, developed initially during the Notre Dame period out of the clausula, especially the form using multiple voices as elaborated by Pérotin, who paved the way for this particularly by replacing many of his predecessor (as canon of the cathedral) Léonin's lengthy florid clausulae with substitutes in a discant style. Gradually, there came to be entire books of these substitutes, available to be fitted in and out of the various chants. Since, in fact, there were more than can possibly have been used in context, it is probable that the clausulae came to be performed independently, either in other parts of the mass, or in private devotions. The clausula, thus practised, became the motet when troped with non-liturgical words, and this further developed into a form of great elaboration, sophistication and subtlety in the fourteenth century, the period of Ars nova. Surviving manuscripts from this era include the Montpellier Codex, Bamberg Codex, and Las Huelgas Codex.
Composers of this time include Léonin, Pérotin, W. de Wycombe, Adam de St. Victor, and Petrus de Cruce (Pierre de la Croix). Petrus is credited with the innovation of writing more than three semibreves to fit the length of a breve. Coming before the innovation of imperfect tempus, this practice inaugurated the era of what are now called "Petronian" motets. These late 13th-century works are in three to four parts and have multiple texts sung simultaneously. Originally, the tenor line (from the Latin tenere, "to hold") held a preexisting liturgical chant line in the original Latin, while the text of the one, two, or even three voices above, called the voces organales, provided commentary on the liturgical subject either in Latin or in the vernacular French. The rhythmic values of the voces organales decreased as the parts multiplied, with the duplum (the part above the tenor) having smaller rhythmic values than the tenor, the triplum (the line above the duplum) having smaller rhythmic values than the duplum, and so on. As time went by, the texts of the voces organales became increasingly secular in nature and had less and less overt connection to the liturgical text in the tenor line.
The Petronian motet is a highly complex genre, given its mixture of several semibreves per breve with rhythmic modes and sometimes (with increasing frequency) substitution of secular songs for chant in the tenor. Indeed, ever-increasing rhythmic complexity would be a fundamental characteristic of the 14th century, though music in France, Italy, and England would take quite different paths during that time.
Cantigas de Santa Maria
The Cantigas de Santa Maria ("Canticles of St. Mary") are 420 poems with musical notation, written in Galician-Portuguese during the reign of Alfonso X The Wise (1221–1284). The manuscript was probably compiled from 1270–1280, and is highly decorated, with an illumination every 10 poems. The illuminations often depict musicians, making the manuscript a particularly important source of medieval music iconography. Though the Cantigas are often attributed to Alfonso, it remains unclear whether he was a composer himself or perhaps a compiler; Alfonso is known to have regularly invited musicians and poets to court, who were undoubtedly involved in the production of the Cantigas.
It is one of the largest collections of monophonic (solo) songs from the Middle Ages and is characterized by the mention of the Virgin Mary in every song, while every tenth song is a hymn. The manuscripts have survived in four codices: two at El Escorial, one at Madrid's National Library, and one in Florence, Italy. Some have colored miniatures showing pairs of musicians playing a wide variety of instruments.
Troubadours, trouvères and Minnesänger
The music of the troubadours and trouvères was a vernacular tradition of monophonic secular song, probably accompanied by instruments, sung by professional, occasionally itinerant, musicians who were as skilled as poets as they were singers and instrumentalists. The language of the troubadours was Occitan (also known as the langue d'oc, or Provençal); the language of the trouvères was Old French (also known as langue d'oil). The period of the troubadours corresponded to the flowering of cultural life in Provence which lasted through the twelfth century and into the first decade of the thirteenth. Typical subjects of troubadour song were war, chivalry and courtly love—the love of an idealized woman from afar. The period of the troubadours wound down after the Albigensian Crusade, the fierce campaign by Pope Innocent III to eliminate the Cathar heresy (and northern barons' desire to appropriate the wealth of the south). Surviving troubadours went either to Portugal, Spain, northern Italy or northern France (where the trouvère tradition lived on), where their skills and techniques contributed to the later developments of secular musical culture in those places.
The trouvères and troubadours shared similar musical styles, but the trouvères were generally noblemen. The music of the trouvères was similar to that of the troubadours, but was able to survive into the thirteenth century unaffected by the Albigensian Crusade. Most of the more than two thousand surviving trouvère songs include music, and show a sophistication as great as that of the poetry it accompanies.
The Minnesänger tradition was the Germanic counterpart to the activity of the troubadours and trouvères to the west. Unfortunately, few sources survive from the time; the sources of Minnesang are mostly from two or three centuries after the peak of the movement, leading to some controversy over the accuracy of these sources. Among the Minnesängers with surviving music are Wolfram von Eschenbach, Walther von der Vogelweide, and Neidhart von Reuental.
Trovadorismo
In the Middle Ages, Galician-Portuguese was the language used in nearly all of Iberia for lyric poetry. From this language derive both modern Galician and Portuguese. The Galician-Portuguese school, which was influenced to some extent (mainly in certain formal aspects) by the Occitan troubadours, is first documented at the end of the twelfth century and lasted until the middle of the fourteenth.
The earliest extant composition in this school is usually agreed to be Ora faz ost' o senhor de Navarra by the Portuguese João Soares de Paiva, usually dated just before or after 1200. The troubadours of the movement, not to be confused with the Occitan troubadours (who frequented courts in nearby León and Castile), wrote almost entirely cantigas. Beginning probably around the middle of the thirteenth century, these songs, known also as cantares or trovas, began to be compiled in collections known as cancioneiros (songbooks). Three such anthologies are known: the Cancioneiro da Ajuda, the Cancioneiro Colocci-Brancuti (or Cancioneiro da Biblioteca Nacional de Lisboa), and the Cancioneiro da Vaticana. In addition to these there is the priceless collection of over 400 Galician-Portuguese cantigas in the Cantigas de Santa Maria, which tradition attributes to Alfonso X.
The Galician-Portuguese cantigas can be divided into three basic genres: male-voiced love poetry, called cantigas de amor (or cantigas d'amor); female-voiced love poetry, called cantigas de amigo (cantigas d'amigo); and poetry of insult and mockery called cantigas d'escarnho e de mal dizer. All three are lyric genres in the technical sense that they were strophic songs with either musical accompaniment or introduction on a stringed instrument. But all three genres also have dramatic elements, leading early scholars to characterize them as lyric-dramatic.
The origins of the cantigas d'amor are usually traced to Provençal and Old French lyric poetry, but formally and rhetorically they are quite different. The cantigas d'amigo are probably rooted in a native song tradition, though this view has been contested. The cantigas d'escarnho e maldizer may also (according to Lang) have deep local roots. The latter two genres (totalling around 900 texts) make the Galician-Portuguese lyric unique in the entire panorama of medieval Romance poetry.
Troubadours with surviving melodies
Aimeric de Belenoi
Aimeric de Peguilhan
Airas Nunes
Albertet de Sestaro
Arnaut Daniel
Arnaut de Maruoill
Beatritz de Dia
Berenguier de Palazol
Bernart de Ventadorn
Bertran de Born
Blacasset
Cadenet
Daude de Pradas
Denis of Portugal
Folquet de Marselha
Gaucelm Faidit
Gui d'Ussel
Guilhem Ademar
Guilhem Augier Novella
Guilhem Magret
Guilhem de Saint Leidier
Guiraut de Bornelh
Guiraut d'Espanha
Guiraut Riquier
Jaufre Rudel
João Soares de Paiva
João Zorro
Jordan Bonel
Marcabru
Martín Codax
Monge de Montaudon
Peire d'Alvernhe
Peire Cardenal
Peire Raimon de Tolosa
Peire Vidal
Peirol
Perdigon
Pistoleta
Pons d'Ortaffa
Pons de Capduoill
Raimbaut d'Aurenga
Raimbaut de Vaqueiras
Raimon Jordan
Raimon de Miraval
Rigaut de Berbezilh
Uc Brunet
Uc de Saint Circ
William IX of Aquitaine
Composers of the high and late medieval era
Late medieval music (1300–1400)
France: Ars nova
The beginning of the Ars nova is one of the few clear chronological divisions in medieval music, since it corresponds to the publication of the Roman de Fauvel, a huge compilation of poetry and music, in 1310 and 1314. The Roman de Fauvel is a satire on abuses in the medieval church, and is filled with medieval motets, lais, rondeaux and other new secular forms. While most of the music is anonymous, it contains several pieces by Philippe de Vitry, one of the first composers of the isorhythmic motet, a development which distinguishes the fourteenth century. The isorhythmic motet was perfected by Guillaume de Machaut, the finest composer of the time.
During the Ars nova era, secular music acquired a polyphonic sophistication formerly found only in sacred music, a development not surprising considering the secular character of the early Renaissance (while this music is typically considered "medieval", the social forces that produced it were responsible for the beginning of the literary and artistic Renaissance in Italy—the distinction between Middle Ages and Renaissance is a blurry one, especially considering arts as different as music and painting). The term "Ars nova" (new art, or new technique) was coined by Philippe de Vitry in his treatise of that name (probably written in 1322), in order to distinguish the practice from the music of the immediately preceding age.
The dominant secular genre of the Ars Nova was the chanson, as it would continue to be in France for another two centuries. These chansons were composed in musical forms corresponding to the poetry they set, which were in the so-called formes fixes of rondeau, ballade, and virelai. These forms significantly affected the development of musical structure in ways that are felt even today; for example, the ouvert-clos rhyme-scheme shared by all three demanded a musical realization which contributed directly to the modern notion of antecedent and consequent phrases. It was in this period, too, that the long tradition of setting the mass ordinary began. This tradition started around mid-century with isolated or paired settings of Kyries, Glorias, etc., but Machaut composed what is thought to be the first complete mass conceived as one composition. The sound world of Ars Nova music is very much one of linear primacy and rhythmic complexity. "Resting" intervals are the fifth and octave, with thirds and sixths considered dissonances. Leaps of more than a sixth in individual voices are not uncommon, leading to speculation of instrumental participation at least in secular performance. Surviving French manuscripts include the Ivrea Codex and the Apt Codex.
For information about specific French composers writing in the late medieval era, see Jehan de Lescurel, Philippe de Vitry, Guillaume de Machaut, Borlet, Solage, and François Andrieu.
Italy: Trecento
Most of the music of Ars nova was French in origin; however, the term is often loosely applied to all of the music of the fourteenth century, especially to include the secular music in Italy. There this period was often referred to as the Trecento. Italian music has always been known for its lyrical or melodic character, and this goes back to the 14th century in many respects. Italian secular music of this time (what little surviving liturgical music there is, is similar to the French except for somewhat different notation) featured what has been called the cantilena style, with a florid top voice supported by one or two lower voices that are more regular and slower moving (a fair amount of Italian Trecento music is for only two voices). This type of texture remained a feature of Italian music in the popular 15th and 16th century secular genres as well, and was an important influence on the eventual development of the trio texture that revolutionized music in the 17th century.
There were three main forms for secular works in the Trecento. One was the madrigal, not the same as that of 150–250 years later, but with a verse/refrain-like form. Three-line stanzas, each with different words, alternated with a two-line ritornello, with the same text at each appearance. Perhaps we can see the seeds of the subsequent late-Renaissance and Baroque ritornello in this device; it too returns again and again, recognizable each time, in contrast with its surrounding disparate sections. Another form, the caccia ("chase"), was written for two voices in a canon at the unison. Sometimes, this form also featured a ritornello, which was occasionally also in a canonic style. Usually, the name of this genre provided a double meaning, since the texts of caccia were primarily about hunts and related outdoor activities, or at least action-filled scenes. The third main form was the ballata, which was roughly equivalent to the French virelai.
Surviving Italian manuscripts include the Squarcialupi Codex and the Rossi Codex. For information about specific Italian composers writing in the late medieval era, see Francesco Landini, Gherardello da Firenze, Andrea da Firenze, Lorenzo da Firenze, Giovanni da Firenze (aka Giovanni da Cascia), Bartolino da Padova, Jacopo da Bologna, Donato da Cascia, Lorenzo Masini, Niccolò da Perugia, and Maestro Piero.
Germany: Geisslerlieder
The Geisslerlieder were the songs of wandering bands of flagellants, who sought to appease the wrath of an angry God by penitential music accompanied by mortification of their bodies. There were two separate periods of activity of Geisslerlied: one around the middle of the thirteenth century, from which, unfortunately, no music survives (although numerous lyrics do); and another from 1349, for which both words and music survive intact due to the attention of a single priest who wrote about the movement and recorded its music. This second period corresponds to the spread of the Black Death in Europe, and documents one of the most terrible events in European history. Both periods of Geisslerlied activity were mainly in Germany.
Ars subtilior
As often seen at the end of any musical era, the end of the medieval era is marked by a highly manneristic style known as Ars subtilior. In some ways, this was an attempt to meld the French and Italian styles. This music was highly stylized, with a rhythmic complexity that was not matched until the 20th century. In fact, not only was the rhythmic complexity of this repertoire largely unmatched for five and a half centuries, with extreme syncopations, mensural trickery, and even examples of augenmusik (such as a chanson by Baude Cordier written out in manuscript in the shape of a heart), but also its melodic material was quite complex as well, particularly in its interaction with the rhythmic structures. The practice of isorhythm, already discussed under Ars Nova, continued to develop through the late century and in fact did not achieve its highest degree of sophistication until early in the 15th century. Instead of using isorhythmic techniques in one or two voices, or trading them among voices, some works came to feature a pervading isorhythmic texture which rivals the integral serialism of the 20th century in its systematic ordering of rhythmic and tonal elements. The term "mannerism" was applied by later scholars, as it often is, in response to an impression of sophistication being practised for its own sake, a malady which some authors have felt infected the Ars subtilior.
One of the most important extant sources of Ars Subtilior chansons is the Chantilly Codex. For information about specific composers writing music in Ars subtilior style, see Anthonello de Caserta, Philippus de Caserta (aka Philipoctus de Caserta), Johannes Ciconia, Matteo da Perugia, Lorenzo da Firenze, Grimace, Jacob Senleches, and Baude Cordier.
Transitioning to the Renaissance
Demarcating the end of the medieval era and the beginning of the Renaissance era, with regard to the composition of music, is difficult. While the music of the fourteenth century is fairly obviously medieval in conception, the music of the early fifteenth century is often conceived as belonging to a transitional period, not only retaining some of the ideals of the end of the Middle Ages (such as a type of polyphonic writing in which the parts differ widely from each other in character, as each has its specific textural function), but also showing some of the characteristic traits of the Renaissance (such as the increasingly international style developing through the diffusion of Franco-Flemish musicians throughout Europe, and in terms of texture an increasing equality of parts). Music historians do not agree on when the Renaissance era began, but most historians agree that England was still a medieval society in the early fifteenth century (see periodization issues of the Middle Ages). While there is no consensus, 1400 is a useful marker, because it was around that time that the Renaissance came into full swing in Italy.
The increasing reliance on the interval of the third as a consonance is one of the most pronounced features of the transition into the Renaissance. Polyphony, in use since the 12th century, became increasingly elaborate with highly independent voices throughout the 14th century. With John Dunstaple and other English composers, partly through the local technique of faburden (an improvisatory process in which a chant melody and a written part predominantly in parallel sixths above it are ornamented by one sung in perfect fourths below the latter, and which later took hold on the continent as "fauxbourdon"), the interval of the third emerges as an important musical development; because of this Contenance Angloise ("English countenance"), English composers' music is often regarded as the first to sound less bizarre to modern audiences who are not trained in music history.
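The interval arithmetic behind faburden's chains of thirds can be sketched in a few lines (an illustration only; the pitch numbers and voice names are hypothetical, not from any source):

```python
# Faburden sketch: a written part runs in parallel sixths above the chant,
# and a third voice is sung a perfect fourth below that written part.
# Counting in semitones shows why thirds result.
MAJOR_SIXTH = 9     # semitones
PERFECT_FOURTH = 5  # semitones

chant = 60                       # arbitrary MIDI-style pitch for a chant note
upper = chant + MAJOR_SIXTH      # written part, a sixth above the chant
middle = upper - PERFECT_FOURTH  # sung voice, a fourth below the upper part

print(middle - chant)  # 4 semitones: a major third above the chant
```

The resulting sonority (chant, third, sixth) is a first-inversion triad, which is why the technique foregrounds thirds and sixths.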
English stylistic tendencies in this regard had come to fruition and began to influence continental composers as early as the 1420s, as can be seen in works of the young Dufay, among others. While the Hundred Years' War continued, English nobles, armies, their chapels and retinues, and therefore some of their composers, travelled in France and performed their music there; it must also of course be remembered that the English controlled portions of northern France at this time. English manuscripts include the Worcester Fragments, the Old St. Andrews Music Book, the Old Hall Manuscript, and Egerton Manuscript. For information about specific composers who are considered transitional between the medieval and the Renaissance, see Zacara da Teramo, Paolo da Firenze, Giovanni Mazzuoli, Antonio da Cividale, Antonius Romanus, Bartolomeo da Bologna, Roy Henry, Arnold de Lantins, Leonel Power, and John Dunstaple.
An early composer from the Franco-Flemish School of the Renaissance was Johannes Ockeghem (1410/1425–1497). He was the most famous member of the Franco-Flemish School in the last half of the 15th century, and is often considered the most influential composer between Dufay and Josquin des Prez. Ockeghem probably studied with Gilles Binchois, and at least was closely associated with him at the Burgundian court. Antoine Busnois wrote a motet in honor of Ockeghem. Ockeghem is a direct link from the Burgundian style to the next generation of Netherlanders, such as Obrecht and Josquin. A strong influence on Josquin des Prez and the subsequent generation of Netherlanders, Ockeghem was famous throughout Europe for his expressive music, although he was equally renowned for his technical prowess.
Influence
The musical styles of Pérotin influenced 20th-century composers such as John Luther Adams and minimalist composer Steve Reich.
Bardcore, which involves remixing famous pop songs to have a medieval instrumentation, became a popular meme in 2020.
References
Sources
Further reading
Butterfield, Ardis (2002), Poetry and Music in Medieval France, Cambridge: Cambridge University Press.
Cyrus, Cynthia J. (1999), "Music", Medieval Glossary, ORB Online Encyclopedia (15 October) (archived 9 August 2011; accessed 4 May 2017).
Derrick, Henry (1983), The Listener's Guide to Medieval & Renaissance Music, New York, NY: Facts on File.
Haines, John (2004), "Erasures in Thirteenth-Century Music", in Music and Medieval Manuscripts: Paleography and Performance, Aldershot: Ashgate, pp. 60–88.
Haines, John (2011), The Calligraphy of Medieval Music, Brepols Publishers.
Hartt, Jared C., ed. (2018), A Critical Companion to Medieval Motets, Woodbridge: Boydell.
Pirrotta, Nino (1980), "Medieval" in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie, vol. 20, London: Macmillan.
Remnant, M. (1965), "The gittern in English medieval art", Galpin Society Journal, vol. 18, pp. 104–109.
Remnant, M. (1968), "The Use of Frets on Rebecs and Medieval Fiddles", Galpin Society Journal, vol. 21, p. 146.
Remnant, M. and Marks, R. (1980), "A medieval 'gittern'", British Museum Yearbook 4, Music and Civilisation, pp. 83–134.
Remnant, M. (1978), Musical Instruments of the West, 240 pp., London: Batsford. Reprinted by Batsford in 1989. Digitized by the University of Michigan, 17 May 2010.
External links
Medieval Music & Arts Foundation
DIAMM, the Digital Image Archive of Medieval Music
The Schøyen Collection: Music (scans of medieval musical notation)
Répertoire International des Sources Musicales (RISM), a free, searchable database of worldwide locations for music manuscripts up to ca. 1800
Maser

A maser (an acronym for microwave amplification by stimulated emission of radiation) is a device that produces coherent electromagnetic waves through amplification by stimulated emission. The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger at Columbia University in 1953. Townes, Nikolay Basov and Alexander Prokhorov were awarded the 1964 Nobel Prize in Physics for theoretical work leading to the maser. Masers are used as the timekeeping device in atomic clocks, and as extremely low-noise microwave amplifiers in radio telescopes and deep space spacecraft communication ground stations.
Modern masers can be designed to generate electromagnetic waves at not only microwave frequencies but also radio and infrared frequencies. For this reason Charles Townes suggested replacing "microwave" with the word "molecular" as the first word in the acronym maser.
The laser works by the same principle as the maser, but produces higher frequency coherent radiation at visible wavelengths. The maser was the forerunner of the laser, inspiring theoretical work by Townes and Arthur Leonard Schawlow that led to the invention of the laser in 1960 by Theodore Maiman. When the coherent optical oscillator was first imagined in 1957, it was originally called the "optical maser". This was ultimately changed to laser, for "Light Amplification by Stimulated Emission of Radiation". Gordon Gould is credited with creating this acronym in 1957.
History
The theoretical principles governing the operation of a maser were first described by Joseph Weber of the University of Maryland, College Park at the Electron Tube Research Conference in June 1952 in Ottawa, with a summary published in the June 1953 Transactions of the Institute of Radio Engineers Professional Group on Electron Devices, and simultaneously by Nikolay Basov and Alexander Prokhorov from Lebedev Institute of Physics at an All-Union Conference on Radio-Spectroscopy held by the USSR Academy of Sciences in May 1952, subsequently published in October 1954.
Independently, Charles Hard Townes, James P. Gordon, and H. J. Zeiger built the first ammonia maser at Columbia University in 1953. This device used stimulated emission in a stream of energized ammonia molecules to produce amplification of microwaves at a frequency of about 24.0 gigahertz. Townes later worked with Arthur L. Schawlow to describe the principle of the optical maser, or laser, of which Theodore H. Maiman created the first working model in 1960.
For their research in the field of stimulated emission, Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964.
Technology
The maser is based on the principle of stimulated emission proposed by Albert Einstein in 1917. When atoms have been induced into an excited energy state, they can amplify radiation at a frequency particular to the element or molecule used as the masing medium (similar to what occurs in the lasing medium in a laser).
By putting such an amplifying medium in a resonant cavity, feedback is created that can produce coherent radiation.
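The dependence of amplification on a population inversion can be illustrated with a textbook Beer–Lambert-style gain formula (a schematic sketch with arbitrary illustrative numbers, not a model of any particular maser):

```python
import math

def small_signal_gain(n_upper, n_lower, cross_section, length):
    """One-pass intensity ratio: exp(sigma * (N2 - N1) * L).
    Greater than 1 (amplification) only when the upper level is more
    populated than the lower one, i.e. a population inversion exists."""
    return math.exp(cross_section * (n_upper - n_lower) * length)

sigma, L = 1e-16, 10.0  # arbitrary cross-section (cm^2) and path length (cm)
print(small_signal_gain(2e15, 1e15, sigma, L))  # > 1: inverted medium amplifies
print(small_signal_gain(1e15, 2e15, sigma, L))  # < 1: uninverted medium absorbs
```

Feedback from the resonant cavity repeatedly passes the radiation through the medium, so even a modest one-pass gain can sustain oscillation.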
Some common types
Atomic beam masers
Ammonia maser
Free electron maser
Hydrogen maser
Gas masers
Rubidium maser
Liquid-dye and chemical laser
Solid state masers
Ruby maser
Whispering-gallery modes iron-sapphire maser
Dual noble gas maser (a maser whose masing medium is nonpolar)
21st-century developments
In 2012, a research team from the National Physical Laboratory and Imperial College London developed a solid-state maser that operated at room temperature by using optically pumped, pentacene-doped p-Terphenyl as the amplifier medium. It produced pulses of maser emission lasting for a few hundred microseconds.
In 2018, a research team from Imperial College London and University College London demonstrated continuous-wave maser oscillation using synthetic diamonds containing nitrogen-vacancy defects.
Uses
Masers serve as high precision frequency references. These "atomic frequency standards" are one of the many forms of atomic clocks. Masers were also used as low-noise microwave amplifiers in radio telescopes, though these have largely been replaced by amplifiers based on FETs.
During the early 1960s, the Jet Propulsion Laboratory developed a maser to provide ultra-low-noise amplification of S-band microwave signals received from deep space probes. This maser used deeply refrigerated helium to chill the amplifier down to a temperature of 4 kelvin. Amplification was achieved by exciting a ruby comb with a 12.0 gigahertz klystron. In the early years, it took days to chill and remove the impurities from the hydrogen lines. Refrigeration was a two-stage process with a large Linde unit on the ground, and a crosshead compressor within the antenna. The final injection was through a micrometer-adjustable entry to the chamber. The whole system noise temperature looking at cold sky (2.7 kelvin in the microwave band) was 17 kelvin. This gave such a low noise figure that the Mariner IV space probe could send still pictures from Mars back to the Earth even though the output power of its radio transmitter was only 15 watts, and hence the total signal power received was only −169 decibels with respect to a milliwatt (dBm).
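The decibel figures quoted above follow from the standard dBm definition; a quick check (illustrative arithmetic only, not sourced from JPL documents):

```python
import math

def watts_to_dbm(p_watts):
    """Power in decibels relative to one milliwatt."""
    return 10 * math.log10(p_watts / 1e-3)

def dbm_to_watts(p_dbm):
    """Inverse conversion: dBm back to watts."""
    return 1e-3 * 10 ** (p_dbm / 10)

print(round(watts_to_dbm(15), 1))            # ~41.8 dBm for the 15 W transmitter
print(dbm_to_watts(-169))                    # ~1.26e-20 W actually received
print(round(watts_to_dbm(15) - (-169), 1))   # ~210.8 dB of total path loss
```

The received power of order 10^-20 W shows why a 17 kelvin system noise temperature was essential.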
Hydrogen maser
The hydrogen maser is used as an atomic frequency standard. Together with other kinds of atomic clocks, these help make up the International Atomic Time standard ("Temps Atomique International" or "TAI" in French). This is the international time scale coordinated by the International Bureau of Weights and Measures. Norman Ramsey and his colleagues first conceived of the maser as a timing standard. More recent masers are practically identical to their original design. Maser oscillations rely on the stimulated emission between two hyperfine energy levels of atomic hydrogen. Here is a brief description of how they work:
First, a beam of atomic hydrogen is produced. This is done by submitting the gas at low pressure to a high-frequency radio wave discharge (see the picture on this page).
The next step is "state selection"—in order to get some stimulated emission, it is necessary to create a population inversion of the atoms. This is done in a way that is very similar to the Stern–Gerlach experiment. After passing through an aperture and a magnetic field, many of the atoms in the beam are left in the upper energy level of the lasing transition. From this state, the atoms can decay to the lower state and emit some microwave radiation.
A high Q factor (quality factor) microwave cavity confines the microwaves and reinjects them repeatedly into the atom beam. The stimulated emission amplifies the microwaves on each pass through the beam. This combination of amplification and feedback is what defines all oscillators. The resonant frequency of the microwave cavity is tuned to the frequency of the hyperfine energy transition of hydrogen: 1,420,405,752 hertz.
A small fraction of the signal in the microwave cavity is coupled into a coaxial cable and then sent to a coherent radio receiver.
The microwave signal coming out of the maser is very weak (a few picowatts). The frequency of the signal is fixed and extremely stable. The coherent receiver is used to amplify the signal and change the frequency. This is done using a series of phase-locked loops and a high performance quartz oscillator.
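As a quick check on the cavity tuning described in the steps above, the 1,420,405,752 hertz hyperfine frequency corresponds to the well-known 21 cm hydrogen line (illustrative arithmetic only):

```python
C = 299_792_458.0             # speed of light, m/s
F_HYDROGEN = 1_420_405_752.0  # hydrogen hyperfine transition frequency, Hz

wavelength_cm = C / F_HYDROGEN * 100
print(round(wavelength_cm, 1))  # 21.1 cm
```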
Astrophysical masers
Maser-like stimulated emission has also been observed in nature from interstellar space, and it is frequently called "superradiant emission" to distinguish it from laboratory masers. Such emission is observed from molecules such as water (H2O), hydroxyl radicals (•OH), methanol (CH3OH), formaldehyde (HCHO), and silicon monoxide (SiO). Water molecules in star-forming regions can undergo a population inversion and emit radiation at about 22.0 GHz, creating the brightest spectral line in the radio universe. Some water masers also emit radiation from a rotational transition at a frequency of 96 GHz.
Extremely powerful masers, associated with active galactic nuclei, are known as megamasers and are up to a million times more powerful than stellar masers.
Terminology
The meaning of the term maser has changed slightly since its introduction. Initially the acronym was universally given as "microwave amplification by stimulated emission of radiation", which described devices which emitted in the microwave region of the electromagnetic spectrum.
The principle and concept of stimulated emission has since been extended to more devices and frequencies. Thus, the original acronym is sometimes modified, as suggested by Charles H. Townes, to "molecular amplification by stimulated emission of radiation." Some have asserted that Townes's efforts to extend the acronym in this way were primarily motivated by the desire to increase the importance of his invention, and his reputation in the scientific community.
When the laser was developed, Townes and Schawlow and their colleagues at Bell Labs pushed the use of the term optical maser, but this was largely abandoned in favor of laser, coined by their rival Gordon Gould. In modern usage, devices that emit in the X-ray through infrared portions of the spectrum are typically called lasers, and devices that emit in the microwave region and below are commonly called masers, regardless of whether they emit microwaves or other frequencies.
Gould originally proposed distinct names for devices that emit in each portion of the spectrum, including grasers (gamma ray lasers), xasers (x-ray lasers), uvasers (ultraviolet lasers), lasers (visible lasers), irasers (infrared lasers), masers (microwave masers), and rasers (RF masers). Most of these terms never caught on, however, and all have now become (apart from in science fiction) obsolete except for maser and laser.
In popular culture
In the Godzilla franchise, the Japanese Self-Defense Forces (JSDF) often use fictional maser tanks in a futile effort to defend Japan from Godzilla and other Kaiju.
See also
Spaser
List of laser types
References
Further reading
J.R. Singer, Masers, John Wiley and Sons Inc., 1959.
J. Vanier, C. Audoin, The Quantum Physics of Atomic Frequency Standards, Adam Hilger, Bristol, 1989.
External links
arXiv.org search for "maser"
Noble gas Maser
Bright Idea: The First Lasers
Invention of the Maser and Laser, American Physical Society
Shawlow and Townes Invent the Laser, Bell Labs
Mario Botta

Mario Botta (born 1 April 1943) is a Swiss architect.
Career
Botta designed his first building, a two-family house at Morbio Superiore in Ticino, at age 16. He graduated from the Università Iuav di Venezia (1969). While the arrangement of spaces in this structure is inconsistent, its relationship to its site, separation of living from service spaces, and deep window recesses echo what would become his stark, strong, towering style. His designs tend to include a strong sense of geometry, often being based on very simple shapes, yet creating unique volumes of space. His buildings are often made of brick, yet his use of material is wide, varied, and often unique.
His trademark style can be seen widely in Switzerland, particularly in the Ticino region, and also in the Mediatheque in Villeurbanne (1988), a cathedral in Évry (1995), and the San Francisco Museum of Modern Art or SFMOMA (1994). He also designed the Europa-Park Dome, which houses many major events at the Europa-Park theme park resort in Germany. Religious works by Botta, including the Cymbalista Synagogue and Jewish Heritage Center, were shown in London at the Royal Institute of British Architects in an exhibition entitled Architetture del Sacro: Prayers in Stone. “A church is the place, par excellence, of architecture,” he said in an interview with architectural historian Judith Dupré. “When you enter a church, you already are part of what has transpired and will transpire there. The church is a house that puts a believer in a dimension where he or she is the protagonist. The sacred directly lives in the collective. Man becomes a participant in a church, even if he never says anything.”
In 1998, he designed the new bus station for Vimercate (near Milan), a red brick building linked to many facilities, underlining the city's recent development.
He worked at La Scala's theatre renovation, which proved controversial as preservationists feared that historic details would be lost.
In 2004, he designed Museum One of the Leeum, Samsung Museum of Art in Seoul, South Korea. On January 1, 2006 he received the Grand Officer award from President of the Italian Republic Carlo Azeglio Ciampi. In 2006, he designed his first ever spa, the Bergoase Spa in Arosa, Switzerland. The spa opened in December 2006 and cost an estimated CHF 35 million. Mario Botta participated in the Stock Exchange of Visions project in 2007. He was a member of the Jury of the Global Holcim Awards in 2012. In 2014, he was awarded with the Prize Javier Carvajal by the Universidad de Navarra.
Gallery
References
Sources
Markus Breitschmid (ed.), Architecture and the Ambient – Mario Botta. Architectura et Ars Series, Volume 2, Virginia Tech Architecture Publications, 2013.
External links
Official website
2007 Interview with Mario Botta in The Leaf Review
Stock Exchange Of Visions: Visions of Mario Botta (Video Interviews)
STORIES OF HOUSES: A Family House at Riva San Vitale, by Mario Botta
Santa Maria degli Angeli Monte Tamaro
Mario Botta Architecture on Google maps
Mario Botta. To be an architect, free online course on FutureLearn.com
Mark Antony

Marcus Antonius (14 January 83 BC – 1 August 30 BC), commonly known in English as Mark Antony, was a Roman politician and general who played a critical role in the transformation of the Roman Republic from a constitutional republic into the autocratic Roman Empire.
Antony was a relative and supporter of Julius Caesar, and served as one of his generals during the conquest of Gaul and the Civil War. Antony was appointed administrator of Italy while Caesar eliminated political opponents in Greece, North Africa, and Spain. After Caesar's assassination in 44 BC, Antony joined forces with Marcus Aemilius Lepidus, another of Caesar's generals, and Octavian, Caesar's great-nephew and adopted son, forming a three-man dictatorship known to historians as the Second Triumvirate. The Triumvirs defeated Caesar's killers, the Liberatores, at the Battle of Philippi in 42 BC, and divided the government of the Republic between themselves. Antony was assigned Rome's eastern provinces, including the client kingdom of Egypt, then ruled by Cleopatra VII Philopator, and was given the command in Rome's war against Parthia.
Relations among the triumvirs were strained as the various members sought greater political power. Civil war between Antony and Octavian was averted in 40 BC, when Antony married Octavian's sister, Octavia. Despite this marriage, Antony carried on a love affair with Cleopatra, who bore him three children, further straining Antony's relations with Octavian. Lepidus was expelled from the association in 36 BC, and in 33 BC disagreements between Antony and Octavian caused a split between the remaining Triumvirs. Their ongoing hostility erupted into civil war in 31 BC, as the Roman Senate, at Octavian's direction, declared war on Cleopatra and proclaimed Antony a traitor. Later that year, Antony was defeated by Octavian's forces at the Battle of Actium. Antony and Cleopatra fled to Egypt where, having again been defeated at the Battle of Alexandria, they committed suicide.
With Antony dead, Octavian became the undisputed master of the Roman world. In 27 BC, Octavian was granted the title of Augustus, marking the final stage in the transformation of the Roman Republic into an empire, with himself as the first Roman emperor.
Early life
A member of the plebeian Antonia gens, Antony was born in Rome on 14 January 83 BC. His father and namesake was Marcus Antonius Creticus, son of the noted orator Marcus Antonius who had been murdered during the purges of Gaius Marius in the winter of 87–86 BC. His mother was Julia, a third cousin of Julius Caesar. Antony was an infant at the time of Lucius Cornelius Sulla's march on Rome in 82 BC.
According to the Roman orator Marcus Tullius Cicero, Antony's father was incompetent and corrupt, and was only given power because he was incapable of using or abusing it effectively. In 74 BC he was given the military command to defeat the pirates of the Mediterranean, but he died in Crete in 71 BC without making any significant progress. The elder Antony's death left Antony and his brothers, Lucius and Gaius, in the care of their mother, Julia, who later married Publius Cornelius Lentulus Sura, an eminent member of the old Patrician nobility. Lentulus, despite exploiting his political success for financial gain, was constantly in debt due to his extravagance. He was a major figure in the Second Catilinarian Conspiracy and was summarily executed on the orders of the consul Cicero in 63 BC for his involvement.
According to the historian Plutarch, Antony spent his teenage years wandering through Rome with his brothers and friends gambling, drinking, and becoming involved in scandalous love affairs. Antony's contemporary and enemy, Cicero, charged that he had a homosexual relationship with Gaius Scribonius Curio. This form of slander was popular during this time in the Roman Republic to demean and discredit political opponents. There is little reliable information on his political activity as a young man, although it is known that he was an associate of Publius Clodius Pulcher and his street gang. He may also have been involved in the Lupercal cult as he was referred to as a priest of this order later in life. By age twenty, Antony had amassed an enormous debt. Hoping to escape his creditors, Antony fled to Greece in 58 BC, where he studied philosophy and rhetoric at Athens.
Early career
In 57 BC, Antony joined the military staff of Aulus Gabinius, the Proconsul of Syria, as chief of the cavalry. This appointment marks the beginning of his military career. As consul the previous year, Gabinius had consented to the exile of Cicero by Antony's mentor, Publius Clodius Pulcher.
Hyrcanus II, the Roman-supported Hasmonean High Priest of Judea, fled Jerusalem to Gabinius to seek protection against his rival and son-in-law Alexander. Years earlier in 63 BC, the Roman general Pompey had captured him and his father, King Aristobulus II, during his war against the remnant of the Seleucid Empire. Pompey had deposed Aristobulus and installed Hyrcanus as Rome's client ruler over Judea. Antony achieved his first military distinctions after securing important victories at Alexandrium and Machaerus. With the rebellion defeated by 56 BC, Gabinius restored Hyrcanus to his position as High Priest in Judea.
The following year, in 55 BC, Gabinius intervened in the political affairs of Ptolemaic Egypt. Pharaoh Ptolemy XII Auletes had been deposed in a rebellion led by his daughter Berenice IV in 58 BC, forcing him to seek asylum in Rome. During Pompey's conquests years earlier, Ptolemy had received the support of Pompey, who named him an ally of Rome. Gabinius' invasion sought to restore Ptolemy to his throne. This was done against the orders of the senate but with the approval of Pompey, then Rome's leading politician, and only after the deposed king provided a 10,000 talent bribe. The Greek historian Plutarch records it was Antony who convinced Gabinius to finally act. After defeating the frontier forces of the Egyptian kingdom, Gabinius' army proceeded to attack the palace guards but they surrendered before a battle commenced. With Ptolemy XII restored as Rome's client king, Gabinius garrisoned two thousand Roman soldiers, later known as the Gabiniani, in Alexandria to ensure Ptolemy's authority. In return for its support, Rome exercised considerable power over the kingdom's affairs, particularly control of the kingdom's revenues and crop yields. Antony claimed years later to have first met Cleopatra, the then 14-year-old daughter of Ptolemy XII, during this campaign in Egypt.
While Antony was serving Gabinius in the East, the domestic political situation had changed in Rome. In 60 BC, a secret agreement (known as the "First Triumvirate") was entered into between three men to control the Republic: Marcus Licinius Crassus, Gnaeus Pompey Magnus, and Gaius Julius Caesar. Crassus, Rome's wealthiest man, had defeated the slave rebellion of Spartacus in 70 BC; Pompey conquered much of the Eastern Mediterranean in the 60s BC; Caesar was Rome's Pontifex Maximus and a former general in Spain. In 59 BC, Caesar, with funding from Crassus, was elected consul to pursue legislation favourable to Crassus and Pompey's interests. In return, Caesar was assigned the governorship of Illyricum, Cisalpine Gaul, and Transalpine Gaul for five years beginning in 58 BC. Caesar used his governorship as a launching point for his conquest of free Gaul. In 55 BC, Crassus and Pompey served as consuls while Caesar's command was extended for another five years. Rome was effectively under the absolute power of these three men. The Triumvirate used Publius Clodius Pulcher, Antony's patron, to exile their political rivals, notably Cicero and Cato the Younger.
During his early military service, Antony married his cousin Antonia Hybrida Minor, the daughter of Gaius Antonius Hybrida. Sometime between 54 and 47 BC, the union produced a single known child, Antonia. It is unclear if this was Antony's first marriage.
Service under Caesar
Gallic Wars
Antony's association with Publius Clodius Pulcher allowed him to achieve greater prominence. Clodius, through the influence of his benefactor Marcus Licinius Crassus, had developed a positive political relationship with Julius Caesar. Clodius secured Antony a position on Caesar's military staff in 54 BC, joining his conquest of Gaul. Serving under Caesar, Antony demonstrated excellent military leadership. Despite a temporary alienation later in life, Antony and Caesar developed friendly relations which would continue until Caesar's assassination in 44 BC. Caesar's influence secured greater political advancement for Antony. After a year of service in Gaul, Caesar dispatched Antony to Rome to formally begin his political career, receiving election as quaestor for 52 BC as a member of the Populares faction. Assigned to assist Caesar, Antony returned to Gaul and commanded Caesar's cavalry during his victory at the Battle of Alesia against the Gallic chieftain Vercingetorix. Following his year in office, Antony was promoted by Caesar to the rank of Legate and assigned command of two legions (approximately 7,500 total soldiers).
Meanwhile, the alliance among Caesar, Pompey and Crassus had effectively ended. Caesar's daughter Julia, who had married Pompey to secure the alliance, died in 54 BC, while Crassus was killed at the Battle of Carrhae in 53 BC. Without the stability they provided, the divide between Caesar and Pompey grew ever larger. Caesar's glory in conquering Gaul had served to further strain his alliance with Pompey, who, having grown jealous of his former ally, had drifted away from Caesar's democratic Populares party towards the oligarchic Optimates faction led by Cato. The supporters of Caesar, led by Clodius, and the supporters of Pompey, led by Titus Annius Milo, routinely clashed. In 52 BC, Milo succeeded in assassinating Clodius, resulting in widespread riots and the burning of the senate meeting house, the Curia Hostilia, by Clodius' street gang. Anarchy resulted, causing the senate to look to Pompey. Fearing a repeat of the persecutions under Lucius Cornelius Sulla's dictatorship only thirty years earlier, the senate avoided granting Pompey the dictatorship, instead naming him sole consul for the year, giving him extraordinary but limited powers. Pompey ordered armed soldiers into the city to restore order and to eliminate the remnants of Clodius' gang.
Antony remained on Caesar's military staff until 50 BC, assisting in the mopping-up operations across Gaul that secured Caesar's conquest. With the war over, Antony was sent back to Rome to act as Caesar's protector against Pompey and the other Optimates. With the support of Caesar, who as Pontifex Maximus was head of the Roman religion, Antony was appointed to the College of Augurs, an important priestly office responsible for interpreting the will of the gods by studying the flight of birds. All public actions required favorable auspices, granting the college considerable influence. Antony was then elected as one of the ten plebeian tribunes for 49 BC. In this position, Antony could protect Caesar from his political enemies by vetoing any actions unfavorable to his patron.
Civil War
The feud between Caesar and Pompey erupted into open confrontation by early 49 BC. The consuls for the year, Gaius Claudius Marcellus Maior and Lucius Cornelius Lentulus Crus, were firm Optimates opposed to Caesar. Pompey, though remaining in Rome, was then serving as the governor of Spain and commanded several legions. Upon assuming office in January, Antony immediately summoned a meeting of the senate to resolve the conflict: he proposed both Caesar and Pompey lay down their commands and return to the status of mere private citizens. His proposal was well received by most of the senators, but the consuls and Cato vehemently opposed it. Antony then made a new proposal: Caesar would retain only two of his eight legions, and the governorship of Illyricum, if he was allowed to stand for the consulship in absentia. This arrangement ensured his immunity from suit would continue: he had needed the consulship to protect himself from prosecution by Pompey. Though Pompey found the concession satisfactory, Cato and Lentulus refused to back down, with Lentulus even expelling Antony from the senate meeting by force. Antony fled Rome, fearing for his life, and returned to Caesar's camp on the banks of the Rubicon, the southern limit of Caesar's lawful command.
Within days of Antony's expulsion, on 7 January 49 BC, the senate reconvened. Under the leadership of Cato and with the tacit support of Pompey, the senate passed a senatus consultum ultimum, a decree stripping Caesar of his command and ordering him to return to Rome and stand trial for war crimes. The senate further declared Caesar a traitor and a public enemy if he did not immediately disband his army. With all hopes of finding a peaceful solution gone after Antony's expulsion, Caesar used Antony as a pretext for marching on Rome. As tribune, Antony's person was sacrosanct, so it was unlawful to harm him or to refuse to recognize his veto. Three days later, on 10 January, Caesar crossed the Rubicon, initiating the Civil War. During the southern march, Caesar placed Antony as his second in command.
Caesar's rapid advance surprised Pompey, who, along with the other chief members of the Optimates, fled Italy for Greece. After entering Rome, instead of pursuing Pompey, Caesar marched to Spain to defeat the Pompeian loyalists there. Meanwhile, Antony, with the rank of propraetor—despite never having served as praetor—was installed as governor of Italy and commander of the army, stationed there while Marcus Aemilius Lepidus, one of Caesar's staff officers, ran the provisional administration of Rome itself. Though Antony was well liked by his soldiers, most other citizens despised him for his lack of interest in the hardships they faced from the civil war.
By the end of 49 BC, Caesar, already the ruler of Gaul, had wrested Italy, Spain, Sicily, and Sardinia from Optimates control. In early 48 BC, he prepared to sail with seven legions to Greece to face Pompey. Caesar had entrusted the defense of Illyricum to Gaius Antonius, Antony's younger brother, and Publius Cornelius Dolabella. Pompey's forces, however, defeated them and assumed control of the Adriatic Sea; the two legions they commanded also defected to Pompey. Without their fleet, Caesar lacked the transport ships needed to cross into Greece with his seven legions. Instead, he sailed with only two and placed Antony in command of the remaining five at Brundisium, with instructions to join him as soon as he was able. In early 48 BC, Lucius Scribonius Libo was given command of Pompey's fleet, comprising some fifty galleys. Sailing to Brundisium, he blockaded Antony. Antony, however, managed to trick Libo into pursuing some decoy ships, causing Libo's squadron to be trapped and attacked. Most of Libo's fleet managed to escape, but several of his troops were trapped and captured. With Libo gone, Antony joined Caesar in Greece by March 48 BC.
During the Greek campaign, Plutarch records that Antony was Caesar's top general, and second only to him in reputation. Antony joined Caesar at the western Balkan Peninsula and besieged Pompey's larger army at Dyrrhachium. With food sources running low, Caesar, in July, ordered a nocturnal assault on Pompey's camp, but Pompey's larger forces pushed back the assault. Though indecisive, the engagement was a tactical victory for Pompey. Pompey, however, did not order a counterassault on Caesar's camp, allowing Caesar to retreat unhindered. Caesar would later remark that the civil war would have ended that day if only Pompey had attacked him. Caesar managed to retreat to Thessaly, with Pompey in pursuit.
Assuming a defensive position at the plain of Pharsalus, Caesar's army prepared for pitched battle with Pompey's, which outnumbered his own two to one. At the Battle of Pharsalus on 9 August 48 BC, Caesar commanded the right wing opposite Pompey while Antony commanded the left, indicating Antony's status as Caesar's top general. The resulting battle was a decisive victory for Caesar. Though the civil war had not ended at Pharsalus, the battle marked the pinnacle of Caesar's power and effectively ended the Republic. The battle gave Caesar a much needed boost in legitimacy, as prior to the battle much of the Roman world outside Italy supported Pompey and the Optimates as the legitimate government of Rome. After Pompey's defeat, most of the senate defected to Caesar, including many of the soldiers who had fought under Pompey. Pompey himself fled to Ptolemaic Egypt, but Pharaoh Ptolemy XIII Theos Philopator feared retribution from Caesar and had Pompey assassinated upon his arrival.
Governor of Italy
Instead of immediately pursuing Pompey and the remaining Optimates, Caesar returned to Rome and was appointed Dictator with Antony as his Master of the Horse and second in command. Caesar presided over his own election to a second consulship for 47 BC and then, after eleven days in office, resigned the dictatorship. Caesar then sailed to Egypt, where he deposed Ptolemy XIII in favor of his sister Cleopatra in 47 BC. The young Cleopatra became Caesar's mistress and bore him a son, Caesarion. Caesar's actions further strengthened Roman control over the already Roman-dominated kingdom.
While Caesar was away in Egypt, Antony remained in Rome to govern Italy and restore order. Without Caesar to guide him, however, Antony quickly faced political difficulties and proved himself unpopular. The chief cause of his political challenges concerned debt forgiveness. One of the tribunes for 47 BC, Publius Cornelius Dolabella, a former general under Pompey, proposed a law which would have canceled all outstanding debts. Antony opposed the law for political and personal reasons: he believed Caesar would not support such massive relief and suspected Dolabella had seduced his wife Antonia Hybrida Minor. When Dolabella sought to enact the law by force and seized the Roman Forum, Antony responded by unleashing his soldiers upon the assembled masses, killing hundreds. The resulting instability, especially among Caesar's veterans who would have benefited from the law, forced Caesar to return to Italy by October 47 BC.
Antony's handling of the affair with Dolabella caused a cooling of his relationship with Caesar, as Antony's violent reaction had plunged Rome into a state of anarchy. Caesar sought to mend relations with the populist leader: after being elected to a third term as consul for 46 BC, Caesar proposed the senate transfer the consulship to Dolabella. When Antony protested, Caesar was forced to withdraw the motion out of shame. Later, Caesar sought to exercise his prerogatives as Dictator and directly proclaim Dolabella consul instead. Antony again protested and, in his capacity as an Augur, declared the omens unfavorable; Caesar again backed down. Seeing the expediency of removing Dolabella from Rome, Caesar ultimately pardoned him for his role in the riots and took him as one of his generals in his campaigns against the remaining Optimates resistance. Antony, however, was stripped of all official positions and received no appointments for the years 46 and 45 BC. Instead of Antony, Caesar appointed Marcus Aemilius Lepidus to be his consular colleague for 46 BC. While Caesar campaigned in North Africa, Antony remained in Rome as a mere private citizen. After returning victorious from North Africa, Caesar was appointed Dictator for ten years and brought Cleopatra and their son to Rome. Antony again remained in Rome while Caesar, in 45 BC, sailed to Spain to defeat the final opposition to his rule. When Caesar returned in late 45 BC, the civil war was over.
During this time Antony married his third wife, Fulvia. Following the scandal with Dolabella, Antony had divorced his second wife and quickly married Fulvia. Fulvia had previously been married to both Publius Clodius Pulcher and Gaius Scribonius Curio, having been widowed since Curio's death at the Battle of the Bagradas River in 49 BC. Though Antony and Fulvia were formally married in 47 BC, Cicero suggests the two had been in a relationship since at least 58 BC. The union produced two children: Marcus Antonius Antyllus (born 47 BC) and Iullus Antonius (born 45 BC).
Assassination of Caesar
Ides of March
Whatever conflicts had existed between them, Antony remained faithful to Caesar, and their estrangement did not last long. Antony reunited with Caesar at Narbo in 45 BC, with full reconciliation coming in 44 BC when Antony was elected consul alongside Caesar. Caesar planned a new invasion of Parthia and desired to leave Antony in Italy to govern Rome in his name. The reconciliation came soon after Antony rejected an offer by Gaius Trebonius, one of Caesar's generals, to join a conspiracy to assassinate Caesar.
Soon after they assumed office together, the Lupercalia festival was held on 15 February 44 BC. The festival was held in honor of Lupa, the she-wolf who suckled the infant orphans Romulus and Remus, the founders of Rome. The political atmosphere of Rome at the time of the festival was deeply divided. Caesar had enacted a number of constitutional reforms which centralized effectively all political powers within his own hands. He was granted further honors, including a form of semi-official cult, with Antony as his high priest. Additionally, on 1 January 44 BC, Caesar had been named Dictator for Life, effectively granting him unlimited power. Caesar's political rivals feared these reforms were his attempts at transforming the Republic into an open monarchy. During the festival's activities, Antony publicly offered Caesar a diadem, which Caesar refused. When Antony placed the diadem in his lap, Caesar ordered it to be placed in the Temple of Jupiter Optimus Maximus. The event carried a powerful message: the diadem was a symbol of kingship, and by refusing it Caesar demonstrated he had no intention of making himself King of Rome. Antony's motive for such an action is not clear, and it is unknown whether he acted with Caesar's prior approval or on his own.
A group of senators resolved to kill Caesar to prevent him from establishing a monarchy. Chief among them were Marcus Junius Brutus and Gaius Cassius Longinus. Although Cassius was "the moving spirit" in the plot, winning over the chief assassins to the cause of tyrannicide, Brutus, with his family's history of deposing Rome's kings, became their leader. Cicero, though not personally involved in the conspiracy, later claimed Antony's actions sealed Caesar's fate as such an obvious display of Caesar's preeminence motivated them to act. Originally, the conspirators had planned to eliminate not only Caesar but also many of his supporters, including Antony, but Brutus rejected the proposal, limiting the conspiracy to Caesar alone. With Caesar preparing to depart for Parthia in late March, the conspirators prepared to act when Caesar appeared for the senate meeting on the Ides of March (15 March).
Antony also went with Caesar, but was waylaid at the door of the Theatre of Pompey by Trebonius and prevented from coming to Caesar's aid. According to the Greek historian Plutarch, as Caesar arrived at the senate, Lucius Tillius Cimber presented him with a petition to recall his exiled brother. The other conspirators crowded round to offer their support. Within moments, the conspirators stabbed Caesar one after another. Caesar attempted to get away but, drenched in blood, he tripped and fell. According to the Roman historian Eutropius, around 60 or more men participated in the assassination; Caesar was stabbed 23 times and died of blood loss.
Leader of the Caesarian Party
In the turmoil surrounding the assassination, Antony escaped Rome dressed as a slave, fearing Caesar's death would be the start of a bloodbath among his supporters. When this did not occur, he soon returned to Rome. The conspirators, who styled themselves the Liberatores ("The Liberators"), had barricaded themselves on the Capitoline Hill for their own safety. Though they believed Caesar's death would restore the Republic, Caesar had been immensely popular with the Roman middle and lower classes, who became enraged upon learning a small group of aristocrats had killed their champion.
Antony, as the sole consul, soon took the initiative and seized the state treasury. Calpurnia, Caesar's widow, presented him with Caesar's personal papers and custody of his extensive property, clearly marking him as Caesar's heir and leader of the Caesarian faction. Caesar's Master of the Horse Marcus Aemilius Lepidus marched over 6,000 troops into Rome on 16 March to restore order and to act as the bodyguards of the Caesarian faction. Lepidus wanted to storm the Capitol, but Antony preferred a peaceful solution as a majority of both the Liberators and Caesar's own supporters preferred a settlement over civil war. On 17 March, at Antony's arrangement, the senate met to discuss a compromise, which, due to the presence of Caesar's veterans in the city, was quickly reached. Caesar's assassins would be pardoned of their crimes and, in return, all of Caesar's actions would be ratified. In particular, the offices assigned to both Brutus and Cassius by Caesar were likewise ratified. Antony also agreed to accept the appointment of his rival Dolabella as his consular colleague to replace Caesar. Having neither troops, money, nor popular support, the Liberatores were forced to accept Antony's proposal. This compromise was a great success for Antony, who managed to simultaneously appease Caesar's veterans, reconcile the senate majority, and appear to the Liberatores as their partner and protector.
On 19 March, Caesar's will was opened and read. In it, Caesar posthumously adopted his great-nephew Gaius Octavius and named him his principal heir. Then only nineteen years old and stationed with Caesar's army in Macedonia, the youth became a member of Caesar's Julian clan, changing his name to "Gaius Julius Caesar Octavianus" (Octavian) in accordance with the conventions of Roman adoption. Though not the chief beneficiary, Antony did receive some bequests.
Shortly after the compromise was reached, as a sign of good faith, Brutus, against the advice of Cassius and Cicero, agreed Caesar would be given a public funeral and his will would be validated. Caesar's funeral was held on 20 March. Antony, as Caesar's faithful lieutenant and incumbent consul, was chosen to preside over the ceremony and to recite the elegy. During the demagogic speech, he enumerated the deeds of Caesar and, publicly reading his will, detailed the donations Caesar had left to the Roman people. Antony then seized the blood-stained toga from Caesar's body and presented it to the crowd. Worked into a fury by the bloody spectacle, the assembly rioted. Several buildings in the Forum and some houses of the conspirators were burned to the ground. Panicked, many of the conspirators fled Italy. Under the pretext of not being able to guarantee their safety, Antony relieved Brutus and Cassius of their judicial duties in Rome and instead assigned them responsibility for procuring wheat for Rome from Sicily and Asia. Such an assignment, in addition to being unworthy of their rank, would have kept them far from Rome and shifted the balance towards Antony. Refusing such secondary duties, the two traveled to Greece instead. Additionally, Cleopatra left Rome to return to Egypt.
Despite the provisions of Caesar's will, Antony proceeded to act as leader of the Caesarian faction, including appropriating for himself a portion of Caesar's fortune rightfully belonging to Octavian. Antony enacted the Lex Antonia, which formally abolished the Dictatorship, in an attempt to consolidate his power by gaining the support of the senatorial class. He also enacted a number of laws he claimed to have found in Caesar's papers to ensure his popularity with Caesar's veterans, particularly by providing land grants to them. Lepidus, with Antony's support, was named Pontifex Maximus to succeed Caesar. To solidify the alliance between Antony and Lepidus, Antony's daughter Antonia Prima was engaged to Lepidus' son, also named Lepidus. Surrounding himself with a bodyguard of over six thousand of Caesar's veterans, Antony presented himself as Caesar's true successor, largely ignoring Octavian.
First conflict with Octavian
Octavian arrived in Rome in May to claim his inheritance. Although Antony had amassed political support, Octavian still had the opportunity to rival him as the leading member of the Caesarian faction. The senatorial Republicans increasingly viewed Antony as a new tyrant. Antony had also lost the support of many Romans and supporters of Caesar when he opposed the motion to elevate Caesar to divine status. When Antony refused to relinquish Caesar's vast fortune to him, Octavian borrowed heavily to fulfill the bequests in Caesar's will to the Roman people and to his veterans, as well as to establish his own bodyguard of veterans. This earned him the support of Caesarian sympathizers who hoped to use him as a means of eliminating Antony. The senate, and Cicero in particular, viewed Antony as the greater danger of the two. By summer 44 BC, Antony was in a difficult position because of his compromise with the Liberatores following Caesar's assassination: he could either denounce the Liberatores as murderers and alienate the senate, or maintain his support for the compromise and risk betraying the legacy of Caesar, strengthening Octavian's position. In either case, his situation as ruler of Rome would be weakened. The Roman historian Cassius Dio later recorded that while Antony, as consul, maintained the advantage in the relationship, the general affection of the Roman people was shifting to Octavian due to his status as Caesar's son.
Supporting the senatorial faction against Antony, Octavian, in September 44 BC, encouraged the leading senator Marcus Tullius Cicero to attack Antony in a series of speeches portraying him as a threat to the Republican order. The risk of civil war between Antony and Octavian grew. Octavian continued to recruit Caesar's veterans to his side, away from Antony, with two of Antony's legions defecting in November 44 BC. Octavian, however, was still only a private citizen and lacked the legal authority to command the Republic's armies. With popular opinion in Rome turning against him and his consular term nearing its end, Antony attempted to secure a favorable military assignment in order to obtain an army to protect himself. The senate, as was custom, assigned Antony and Dolabella the provinces of Macedonia and Syria, respectively, to govern in 43 BC after their consular terms expired. Antony, however, objected to the assignment, preferring to govern Cisalpine Gaul, which had been assigned to Decimus Junius Brutus Albinus, one of Caesar's assassins. When Decimus refused to surrender his province, Antony marched north in December 44 BC with his remaining soldiers to take the province by force, besieging Decimus at Mutina. The senate, led by a fiery Cicero, denounced Antony's actions and declared him an enemy of the state.
Ratifying Octavian's extraordinary command on 1 January 43 BC, the senate dispatched him along with consuls Hirtius and Pansa to defeat Antony and his five exhausted legions. Antony's forces were defeated at the Battle of Mutina in April 43 BC, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies, some eight legions.
The Second Triumvirate
Forming the Alliance
With Antony defeated, the senate, hoping to eliminate Octavian and the remainder of the Caesarian party, assigned command of the Republic's legions to Decimus. Sextus Pompey, son of Caesar's old rival Pompey Magnus, was given command of the Republic's fleet from his base in Sicily while Brutus and Cassius were granted the governorships of Macedonia and Syria respectively. These appointments attempted to renew the "Republican" cause. However, the eight legions serving under Octavian, composed largely of Caesar's veterans, refused to follow one of Caesar's murderers, allowing Octavian to retain his command. Meanwhile, Antony recovered his position by joining forces with Marcus Aemilius Lepidus, who had been assigned the governorship of Transalpine Gaul and Nearer Spain. Antony sent Lepidus to Rome to broker a conciliation. Though he was an ardent Caesarian, Lepidus had maintained friendly relations with the senate and with Sextus Pompey. His legions, however, quickly joined Antony, giving him control over seventeen legions, the largest army in the West.
By mid-May, Octavian began secret negotiations to form an alliance with Antony to provide a united Caesarian party against the Liberators. Remaining in Cisalpine Gaul, Octavian dispatched emissaries to Rome in July 43 BC demanding he be appointed consul to replace Hirtius and Pansa and that the decree declaring Antony a public enemy be rescinded. When the senate refused, Octavian marched on Rome with his eight legions and assumed control of the city in August 43 BC. Octavian proclaimed himself consul, rewarded his soldiers, and then set about prosecuting Caesar's murderers. By the lex Pedia, all of the conspirators and Sextus Pompey were convicted in absentia and declared public enemies. Then, at the instigation of Lepidus, Octavian went to Cisalpine Gaul to meet Antony.
In November 43 BC, Octavian, Lepidus, and Antony met near Bononia. After two days of discussions, the group agreed to establish a three-man dictatorship to govern the Republic for five years, formally styled the "Three Men for the Restoration of the Republic" (Latin: Triumviri Rei Publicae Constituendae) and known to modern historians as the Second Triumvirate. They shared military command of the Republic's armies and provinces among themselves: Antony received Gaul, Lepidus Spain, and Octavian (as the junior partner) Africa. They jointly governed Italy. The Triumvirate would have to conquer the rest of Rome's holdings; Brutus and Cassius held the Eastern Mediterranean, and Sextus Pompey held the Mediterranean islands. On 27 November 43 BC, the Triumvirate was formally established by a new law, the lex Titia. Octavian and Antony reinforced their alliance through Octavian's marriage to Antony's stepdaughter, Claudia.
The primary objective of the Triumvirate was to avenge Caesar's death and to make war upon his murderers. Before marching against Brutus and Cassius in the East, the Triumvirs issued proscriptions against their enemies in Rome. The Dictator Lucius Cornelius Sulla had taken similar action to purge Rome of his opponents in 82 BC. The proscribed were named on public lists, stripped of citizenship, and outlawed. Their wealth and property were confiscated by the state, and rewards were offered to anyone who secured their arrest or death. With such encouragements, the proscription produced deadly results: two thousand Roman knights and a third of the senate were executed, among them Cicero, who was killed on 7 December. The confiscations helped replenish the State Treasury, which had been depleted by Caesar's civil war the decade before; when this seemed insufficient to fund the imminent war against Brutus and Cassius, the Triumvirs imposed new taxes, especially on the wealthy. By January 42 BC the proscription had ended; it had lasted two months, and though less bloody than Sulla's, it traumatized Roman society. A number of those named and outlawed had fled to either Sextus Pompey in Sicily or to the Liberators in the East. Senators who swore loyalty to the Triumvirate were allowed to keep their positions; on 1 January 42 BC, the senate officially deified Caesar as "The Divine Julius", and confirmed Antony's position as his high priest.
War against the Liberators
Due to the infighting within the Triumvirate during 43 BC, Brutus and Cassius had assumed control of much of Rome's eastern territories, and amassed a large army. Before the Triumvirate could cross the Adriatic Sea into Greece, where the Liberators had stationed their army, it had to address the threat posed by Sextus Pompey and his fleet. From his base in Sicily, Sextus raided the Italian coast and blockaded the Triumvirs. Octavian's friend and admiral Quintus Salvidienus Rufus thwarted an attack by Sextus against the southern Italian mainland at Rhegium, but Salvidienus was then defeated in the resulting naval battle because of the inexperience of his crews. Only when Antony arrived with his fleet was the blockade broken. Though the blockade was lifted, Sicily remained in Sextus' hands; the defeat of the Liberators, however, was the Triumvirate's first priority.
In the summer of 42 BC, Octavian and Antony sailed for Macedonia to face the Liberators with nineteen legions, the vast majority of their army (approximately 100,000 regular infantry plus supporting cavalry and irregular auxiliary units), leaving Rome under the administration of Lepidus. The Liberators likewise commanded nineteen legions; their legions, however, were not at full strength, while those of Antony and Octavian were. While the Triumvirs commanded a larger number of infantry, the Liberators commanded a larger cavalry contingent. The Liberators, who controlled Macedonia, did not wish to engage in a decisive battle, but rather to attain a good defensive position and then use their naval superiority to block the Triumvirs' communications with their supply base in Italy. They had spent the previous months plundering Greek cities to swell their war-chest and had gathered in Thrace with the Roman legions from the Eastern provinces and levies from Rome's client kingdoms.
Brutus and Cassius held a position on the high ground along both sides of the via Egnatia west of the city of Philippi. The southern position was anchored to a supposedly impassable marsh, while the northern was bordered by impassable hills. They had plenty of time to fortify their position with a rampart and a ditch. Brutus put his camp on the north while Cassius occupied the south of the via Egnatia. Antony arrived shortly and positioned his army on the south of the via Egnatia, while Octavian put his legions north of the road. Antony offered battle several times, but the Liberators were not lured into leaving their defensive stand. Thus, Antony tried to secretly outflank the Liberators' position through the marshes in the south. This provoked a pitched battle on 3 October 42 BC. Antony commanded the Triumvirate's army due to Octavian's sickness on the day, with Antony directly controlling the right flank opposite Cassius. Because of his health, Octavian remained in camp while his lieutenants assumed a position on the left flank opposite Brutus. In the resulting first battle of Philippi, Antony defeated Cassius and captured his camp while Brutus overran Octavian's troops and penetrated into the Triumvirs' camp but was unable to capture the sick Octavian. The battle was a tactical draw, but due to poor communications Cassius believed the battle was a complete defeat and committed suicide to prevent being captured.
Brutus assumed sole command of the Liberator army and preferred a war of attrition over open conflict. His officers, however, were dissatisfied with these defensive tactics and his Caesarian veterans threatened to defect, forcing Brutus to give battle at the second battle of Philippi on 23 October. While the battle was initially evenly matched, Antony's leadership routed Brutus' forces. Brutus committed suicide the day after the defeat and the remainder of his army swore allegiance to the Triumvirate. Over fifty thousand Romans died in the two battles. While Antony treated the losers mildly, Octavian dealt cruelly with his prisoners and even beheaded Brutus' corpse.
The battles of Philippi ended the civil war in favor of the Caesarian faction. With the defeat of the Liberators, only Sextus Pompey and his fleet remained to challenge the Triumvirate's control over the Republic.
Master of the Roman East
Division of the Republic
The victory at Philippi left the members of the Triumvirate as masters of the Republic, save Sextus Pompey in Sicily. Upon returning to Rome, the Triumvirate repartitioned rule of Rome's provinces among themselves, with Antony as the clear senior partner. He received the largest distribution, governing all of the Eastern provinces while retaining Gaul in the West. Octavian's position improved, as he received Spain, which was taken from Lepidus. Lepidus was then reduced to holding only Africa, and he assumed a clearly tertiary role in the Triumvirate. Rule over Italy remained undivided, but Octavian was assigned the difficult and unpopular task of demobilizing their veterans and providing them with land distributions in Italy. Antony assumed direct control of the East while he installed one of his lieutenants as the ruler of Gaul. During his absence, several of his supporters held key positions in Rome to protect his interests there.
The East was in need of reorganization after the rule of the Liberators in the previous years. In addition, Rome contended with the Parthian Empire for dominance of the Near East. The Parthian threat to the Triumvirate's rule was urgent because the Parthians had supported the Liberators in the recent civil war, aid which included the supply of troops at Philippi. As ruler of the East, Antony also assumed responsibility for overseeing Caesar's planned invasion of Parthia to avenge the defeat of Marcus Licinius Crassus at the Battle of Carrhae in 53 BC.
In 42 BC, the Roman East was composed of several directly controlled provinces and client kingdoms. The provinces included Macedonia, Asia, Bithynia, Cilicia, Cyprus, Syria, and Cyrenaica. Approximately half of the eastern territory was controlled by Rome's client kingdoms, nominally independent kingdoms subject to Roman direction. These kingdoms included:
Odrysian Thrace in Eastern Europe
The Bosporan Kingdom along the northern coast of the Black Sea
Galatia, Pontus, Cappadocia, Armenia, and several smaller kingdoms in Asia Minor
Judea, Commagene, and the Nabataean kingdom in the Middle East
Ptolemaic Egypt in Africa
Activities in the East
Antony spent the winter of 42 BC in Athens, where he ruled generously towards the Greek cities. A proclaimed philhellene ("Friend of all things Greek"), Antony supported Greek culture to win the loyalty of the inhabitants of the Greek East. He attended religious festivals and ceremonies, including initiation into the Eleusinian Mysteries, a secret cult dedicated to the worship of the goddesses Demeter and Persephone. Beginning in 41 BC, he traveled across the Aegean Sea to Anatolia, leaving his friend Lucius Marcius Censorinus as governor of Macedonia and Achaea. Upon his arrival in Ephesus in Asia, Antony was worshiped as the god Dionysus born anew. He demanded heavy taxes from the Hellenic cities in return for his pro-Greek policies, but exempted those cities which had remained loyal to Caesar during the civil war and compensated those cities which had suffered under Caesar's assassins, including Rhodes, Lycia, and Tarsus. He granted pardons to all Roman nobles living in the East who had supported the Optimate cause, except for Caesar's assassins.
Ruling from Ephesus, Antony consolidated Rome's hegemony in the East, receiving envoys from Rome's client kingdoms and intervening in their dynastic affairs, extracting enormous financial "gifts" from them in the process. Though King Deiotarus of Galatia had supported Brutus and Cassius following Caesar's assassination, Antony allowed him to retain his position. He also confirmed Ariarathes X as king of Cappadocia after the execution of his brother Ariobarzanes III of Cappadocia by Cassius before the Battle of Philippi. In Hasmonean Judea, several Jewish delegations complained to Antony of the harsh rule of Phasael and Herod, the sons of Rome's assassinated chief Jewish minister Antipater the Idumaean. After Herod offered him a large financial gift, Antony confirmed the brothers in their positions. Subsequently, influenced by the beauty and charms of Glaphyra, the widow of Archelaüs (formerly the high priest of Comana), Antony deposed Ariarathes and appointed Glaphyra's son, Archelaüs, to rule Cappadocia.
In October 41 BC, Antony requested that Rome's chief eastern vassal, Cleopatra, the queen of Ptolemaic Egypt, meet him at Tarsus in Cilicia. Antony had first met a young Cleopatra while campaigning in Egypt in 55 BC, and again in 48 BC when Caesar had backed her as queen of Egypt over the claims of her half-sister Arsinoe. Cleopatra bore Caesar a son, Caesarion, in 47 BC, and the two lived in Rome as Caesar's guests until his assassination in 44 BC. After Caesar's assassination, Cleopatra and Caesarion returned to Egypt, where she named the child as her co-ruler. In 42 BC, the Triumvirate, in recognition of Cleopatra's help towards Publius Cornelius Dolabella in opposition to the Liberators, granted official recognition to Caesarion's position as king of Egypt. Arriving in Tarsus aboard her magnificent ship, Cleopatra invited Antony to a grand banquet to solidify their alliance. As the most powerful of Rome's eastern vassals, Egypt was indispensable to Rome's planned military invasion of the Parthian Empire. At Cleopatra's request, Antony ordered the execution of Arsinoe, who, though she had been marched in Caesar's triumphal parade in 46 BC, had been granted sanctuary at the temple of Artemis in Ephesus. Antony and Cleopatra then spent the winter of 41 BC together in Alexandria. Cleopatra bore Antony twin children, Alexander Helios and Cleopatra Selene II, in 40 BC, and a third, Ptolemy Philadelphus, in 36 BC. Antony also granted Cleopatra formal control over Cyprus, which had been under Egyptian control since 47 BC during the turmoil of Caesar's civil war, in 40 BC as a gift for her loyalty to Rome.
Antony, in his first months in the East, raised money, reorganized his troops, and secured the alliance of Rome's client kingdoms. He also promoted himself as a Hellenistic ruler, which won him the affection of the Greek peoples of the East but also made him the target of Octavian's propaganda in Rome. According to some ancient authors, Antony led a carefree life of luxury in Alexandria. Upon learning that the Parthian Empire had invaded Rome's territory in early 40 BC, Antony left Egypt for Syria to confront the invasion. However, after a short stay in Tyre, he was forced to sail with his army to Italy to confront Octavian, who was waging war against Antony's wife and brother.
Fulvia's Civil War
Following the defeat of Brutus and Cassius, while Antony was stationed in the East, Octavian had authority over the West. Octavian's chief responsibility was distributing land to tens of thousands of Caesar's veterans who had fought for the Triumvirate. Additionally, tens of thousands of veterans who had fought for the Republican cause in the war also required land grants. This was necessary to ensure they would not support a political opponent of the Triumvirate. However, the Triumvirs did not possess sufficient state-controlled land to allot to the veterans. This left Octavian with two choices: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who might then back a military rebellion against the Triumvirate's rule. Octavian chose the former. As many as eighteen Roman towns throughout Italy were affected by the confiscations of 41 BC, with entire populations driven out.
Led by Fulvia, the wife of Antony, the senators grew hostile towards Octavian over the issue of the land confiscations. According to the ancient historian Cassius Dio, Fulvia was the most powerful woman in Rome at the time. According to Dio, while Publius Servilius Vatia and Lucius Antonius were the consuls for the year 41 BC, real power was vested in Fulvia. As the mother-in-law of Octavian and the wife of Antony, no action was taken by the senate without her support. Fearing Octavian's land grants would cause the loyalty of the Caesarian veterans to shift away from Antony, Fulvia traveled constantly with her children to the new veteran settlements in order to remind the veterans of their debt to Antony. Fulvia also attempted to delay the land settlements until Antony returned to Rome, so that he could share credit for the settlements. With the help of Antony's brother, the consul of 41 BC Lucius Antonius, Fulvia encouraged the senate to oppose Octavian's land policies.
The conflict between Octavian and Fulvia caused great political and social unrest throughout Italy. Tensions escalated into open war, however, when Octavian divorced Claudia, Fulvia's daughter from her first husband Publius Clodius Pulcher. Outraged, Fulvia, supported by Lucius, raised an army to fight for Antony's rights against Octavian. According to the ancient historian Appian, Fulvia's chief reason for the war was her jealousy of Antony's affairs with Cleopatra in Egypt and desire to draw Antony back to Rome. Lucius and Fulvia took a political and martial gamble in opposing Octavian and Lepidus, however, as the Roman army still depended on the Triumvirs for their salaries. Lucius and Fulvia, supported by their army, marched on Rome and promised the people an end to the Triumvirate in favor of Antony's sole rule. However, when Octavian returned to the city with his army, the pair were forced to retreat to Perusia in Etruria. Octavian placed the city under siege while Lucius waited for Antony's legions in Gaul to come to his aid. Away in the East and embarrassed by Fulvia's actions, Antony gave no instructions to his legions. Without reinforcements, Lucius and Fulvia were forced to surrender in February 40 BC. While Octavian pardoned Lucius for his role in the war and even granted him command in Spain as his chief lieutenant there, Fulvia was forced to flee to Greece with her children. With the war over, Octavian was left in sole control over Italy. When Antony's governor of Gaul died, Octavian took over his legions there, further strengthening his control over the West.
Despite the Parthian Empire's invasion of Rome's eastern territories, Fulvia's civil war forced Antony to leave the East and return to Rome in order to secure his position. Meeting her in Athens, Antony rebuked Fulvia for her actions before sailing on to Italy with his army to face Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight due to their shared service under Caesar. The legions under their command followed suit. Meanwhile, in Sicyon, Fulvia died of a sudden and unknown illness. Fulvia's death and the mutiny of their soldiers allowed the triumvirs to effect a reconciliation through a new power sharing agreement in September 40 BC. The Roman world was redivided, with Antony receiving the Eastern provinces, Octavian the Western provinces, and Lepidus relegated to a clearly junior position as governor of Africa. This agreement, known as the Treaty of Brundisium, reinforced the Triumvirate and allowed Antony to begin preparing for Caesar's long-awaited campaign against the Parthian Empire. As a symbol of their renewed alliance, Antony married Octavia, Octavian's sister, in October 40 BC.
Antony's Parthian War
Roman–Parthian relations
The rise of the Parthian Empire in the 3rd century BC and Rome's expansion into the Eastern Mediterranean during the 2nd century BC brought the two powers into direct contact, causing centuries of tumultuous and strained relations. Though periods of peace fostered cultural and commercial exchanges, war was a constant threat. Influence over the buffer state of the Kingdom of Armenia, located to the north-east of Roman Syria, was often a central issue in the Roman–Parthian conflict. In 95 BC, Tigranes the Great, a Parthian ally, became king. Tigranes would later aid Mithridates of Pontus against Rome before being decisively defeated by Pompey in 66 BC. Thereafter, with his son Artavasdes in Rome as a hostage, Tigranes ruled Armenia as an ally of Rome until his death in 55 BC. Rome then released Artavasdes, who succeeded his father as king.
In 53 BC, Rome's governor of Syria, Marcus Licinius Crassus, led an expedition across the Euphrates River into Parthian territory to confront the Parthian Shah Orodes II. Artavasdes II offered Crassus the aid of nearly forty thousand troops to assist his Parthian expedition on the condition that Crassus invade through Armenia as the safer route. Crassus refused, choosing instead the more direct route by crossing the Euphrates directly into desert Parthian territory. Crassus' actions proved disastrous as his army was defeated at the Battle of Carrhae by a numerically inferior Parthian force. Crassus' defeat forced Armenia to shift its loyalty to Parthia, with Artavasdes II's sister marrying Orodes' son and heir Pacorus.
In early 44 BC, Julius Caesar announced his intentions to invade Parthia and restore Roman power in the East. His reasons were to punish the Parthians for assisting Pompey in the recent civil war, to avenge Crassus' defeat at Carrhae, and especially to match the glory of Alexander the Great for himself. Before Caesar could launch his campaign, however, he was assassinated. As part of the compromise between Antony and the Republicans to restore order following Caesar's murder, Publius Cornelius Dolabella was assigned the governorship of Syria and command over Caesar's planned Parthian campaign. The compromise did not hold, however, and the Republicans were forced to flee to the East. The Republicans directed Quintus Labienus to win the Parthians over to their side in the resulting war against Antony and Octavian. After the Republicans were defeated at the Battle of Philippi, Labienus joined the Parthians. Despite Rome's internal turmoil during this period, the Parthians did not immediately exploit the power vacuum in the East, owing to Orodes II's reluctance despite Labienus' urgings to the contrary.
In the summer of 41 BC, Antony, to reassert Roman power in the East, conquered Palmyra on the Roman-Parthian border. Antony then spent the winter of 41 BC in Alexandria with Cleopatra, leaving only two legions to defend the Syrian border against Parthian incursions. The legions, however, were composed of former Republican troops and Labienus convinced Orodes II to invade.
Parthian Invasion
A Parthian army, led by Orodes II's eldest son Pacorus, invaded Syria in early 40 BC. Labienus, the Republican ally of Brutus and Cassius, accompanied him as an adviser and rallied many of the former Republican soldiers stationed in Syria to the Parthian cause in opposition to Antony. The joint Parthian–Roman force, after initial success in Syria, separated to lead their offensive in two directions: Pacorus marched south toward Hasmonean Judea while Labienus crossed the Taurus Mountains to the north into Cilicia. Labienus conquered southern Anatolia with little resistance. The Roman governor of Asia, Lucius Munatius Plancus, a partisan of Antony, was forced to flee his province, allowing Labienus to recruit the Roman soldiers stationed there. For his part, Pacorus advanced south to Phoenicia and Palestine. In Hasmonean Judea, the exiled prince Antigonus allied himself with the Parthians. When his brother, Rome's client king Hyrcanus II, refused to accept Parthian domination, he was deposed in favor of Antigonus as Parthia's client king in Judea. Pacorus' army had captured much of the Syrian and Palestinian interior and occupied much of the Phoenician coast. The city of Tyre remained the last major Roman outpost in the region.
Antony, then in Egypt with Cleopatra, did not respond immediately to the Parthian invasion. Though he left Alexandria for Tyre in early 40 BC, upon learning of the civil war between his wife and Octavian he was forced to return to Italy with his army to secure his position in Rome rather than confront the Parthians. Instead, Antony dispatched Publius Ventidius Bassus to check the Parthian advance. Arriving in the East in spring 39 BC, Ventidius surprised Labienus near the Taurus Mountains, claiming victory at the Cilician Gates. Ventidius ordered Labienus executed as a traitor, and the formerly rebellious Roman soldiers under his command were reincorporated under Antony's control. He then met a Parthian army at the border between Cilicia and Syria, defeating it and killing a large portion of the Parthian soldiers at the Amanus Pass. Ventidius' actions temporarily halted the Parthian advance and restored Roman authority in the East, forcing Pacorus to abandon his conquests and return to Parthia.
In the spring of 38 BC, the Parthians resumed their offensive with Pacorus leading an army across the Euphrates. Ventidius, in order to gain time, leaked disinformation to Pacorus implying that he should cross the Euphrates River at the usual ford. Pacorus did not trust this information and decided to cross the river much farther downstream; this was exactly what Ventidius had hoped, and it gave him time to ready his forces. The Parthians faced no opposition and proceeded to the town of Gindarus in Cyrrhestica, where Ventidius' army was waiting. At the Battle of Cyrrhestica, Ventidius inflicted an overwhelming defeat on the Parthians which resulted in the death of Pacorus. Overall, the Roman army had achieved a complete victory, with Ventidius' three successive victories forcing the Parthians back across the Euphrates. Pacorus' death threw the Parthian Empire into chaos. Shah Orodes II, overwhelmed by grief at his son's death, appointed his younger son Phraates IV as his successor. However, Phraates IV assassinated Orodes II in late 38 BC, succeeding him on the throne.
Ventidius feared Antony's wrath if he invaded Parthian territory, thereby stealing his glory; so instead he attacked and subdued the eastern kingdoms, which had revolted against Roman control following the disastrous defeat of Crassus at Carrhae. One such rebel was King Antiochus of Commagene, whom he besieged in Samosata. Antiochus tried to make peace with Ventidius, but Ventidius told him to approach Antony directly. After peace was concluded, Antony sent Ventidius back to Rome where he celebrated a triumph, the first Roman to triumph over the Parthians.
Conflict with Sextus Pompey
While Antony and the other Triumvirs ratified the Treaty of Brundisium to redivide the Roman world among themselves, the rebel Sextus Pompey, the son of Caesar's rival Pompey the Great, was largely ignored. From his stronghold on Sicily, he continued his piratical activities across Italy and blocked the shipment of grain to Rome. The lack of food in Rome caused the public to blame the Triumvirate and shift its sympathies towards Pompey. This pressure forced the Triumvirs to meet with Sextus in early 39 BC.
While Octavian wanted an end to the ongoing blockade of Italy, Antony sought peace in the West in order to make the Triumvirate's legions available for his planned campaign against the Parthians. Though the Triumvirs rejected Sextus' initial request to replace Lepidus as the third man within the Triumvirate, they did grant other concessions. Under the terms of the Treaty of Misenum, Sextus was allowed to retain control over Sicily and Sardinia, with the provinces of Corsica and Greece being added to his territory. He was also promised a future position in the priestly College of Augurs and the consulship for 35 BC. In exchange, Sextus agreed to end his naval blockade of Italy, supply Rome with grain, and halt his piracy of Roman merchant ships. However, the most important provision of the Treaty was the end of the proscription the Triumvirate had begun in late 43 BC. Many of the proscribed senators, rather than face death, had fled to Sicily seeking Sextus' protection. With the exception of those responsible for Caesar's assassination, all those proscribed were allowed to return to Rome and promised compensation. This caused Sextus to lose many valuable allies as the formerly exiled senators gradually aligned themselves with either Octavian or Antony. To secure the peace, Octavian betrothed his three-year-old nephew and Antony's stepson Marcus Claudius Marcellus to Sextus' daughter Pompeia. With peace in the West secured, Antony planned to retaliate against Parthia by invading their territory. Under an agreement with Octavian, Antony would be supplied with extra troops for his campaign. With this military purpose on his mind, Antony sailed to Greece with Octavia, where he behaved in a most extravagant manner, assuming the attributes of the Greek god Dionysus in 39 BC.
The peace with Sextus was short-lived, however. When Sextus demanded control over Greece as the agreement provided, Antony demanded that the province's tax revenues be used to fund the Parthian campaign. Sextus refused. Meanwhile, Sextus' admiral Menas betrayed him, shifting his loyalty to Octavian and thereby granting him control of Corsica, Sardinia, three of Sextus' legions, and a larger naval force. These events led Sextus to renew his blockade of Italy, preventing Octavian from sending the promised troops to Antony for the Parthian campaign. This new delay caused Antony to quarrel with Octavian, forcing Octavia to mediate a truce between them. Under the Treaty of Tarentum, Antony provided a large naval force for Octavian's use against Sextus while Octavian promised to raise new legions for Antony to support his invasion of Parthia. As the term of the Triumvirate was set to expire at the end of 38 BC, the two unilaterally extended their term of office another five years, until 33 BC, without seeking approval of the senate or the popular assemblies. To seal the Treaty, Antony's elder son Marcus Antonius Antyllus, then only six years old, was betrothed to Octavian's only daughter Julia, then only an infant. With the Treaty signed, Antony returned to the East, leaving Octavia in Italy.
Reconquest of Judea
After Publius Ventidius Bassus returned to Rome in triumph for his defensive campaign against the Parthians, Antony appointed Gaius Sosius as the new governor of Syria and Cilicia in early 38 BC. Antony, still in the West negotiating with Octavian, ordered Sosius to depose Antigonus, who had been installed during the recent Parthian invasion as the ruler of Hasmonean Judea, and to make Herod the new Roman client king in the region. Two years earlier, in 40 BC, the Roman senate had proclaimed Herod "King of the Jews" because Herod had been a loyal supporter of Hyrcanus II, Rome's previous client king before the Parthian invasion, and came from a family with long-standing connections to Rome. The Romans hoped to use Herod as a bulwark against the Parthians in the coming campaign.
Advancing south, Sosius captured the island-city of Aradus on the coast of Phoenicia by the end of 38 BC. The following year, the Romans besieged Jerusalem. After a forty-day siege, the Roman soldiers stormed the city and, despite Herod's pleas for restraint, acted without mercy, pillaging and killing all in their path, prompting Herod to complain to Antony. Herod finally resorted to bribing Sosius and his troops in order that they would not leave him "king of a desert". Antigonus was forced to surrender to Sosius, and was sent to Antony for the triumphal procession in Rome. Herod, however, fearing that Antigonus would win backing in Rome, bribed Antony to execute Antigonus. Antony, who recognized that Antigonus would remain a permanent threat to Herod, ordered him beheaded in Antioch. Now secure on his throne, Herod would rule the Herodian Kingdom until his death in 4 BC, and would be an ever-faithful client king of Rome.
Parthian Campaign
With the Triumvirate renewed in 38 BC, Antony returned to Athens for the winter with his new wife Octavia, the sister of Octavian. After the Parthian king Orodes II was assassinated in late 38 BC by his son Phraates IV, who then seized the throne, Antony prepared to invade Parthia himself.
Antony, however, realized Octavian had no intention of sending him the additional legions he had promised under the Treaty of Tarentum. To supplement his own armies, Antony instead looked to Rome's principal vassal in the East: his lover Cleopatra. In addition to significant financial resources, Cleopatra's backing of his Parthian campaign allowed Antony to amass the largest army Rome had ever assembled in the East. Wintering in Antioch during 37 BC, Antony's combined Roman–Egyptian army numbered some 200,000 men, including sixteen legions (approximately 160,000 soldiers) plus an additional 40,000 auxiliaries. Such a force was twice the size of Marcus Licinius Crassus's army from his failed Parthian invasion of 53 BC and three times the size of the armies of Lucius Licinius Lucullus and Lucius Cornelius Sulla during the Mithridatic Wars. The size of his army indicated Antony's intention to conquer Parthia, or at least receive its submission by capturing the Parthian capital of Ecbatana. Antony's rear was protected by Rome's client kingdoms in Anatolia, Syria, and Judea, while the client kingdoms of Cappadocia, Pontus, and Commagene would provide supplies along the march.
Antony's first target for his invasion was the Kingdom of Armenia. Armenia, ruled by King Artavasdes II, had been an ally of Rome since the defeat of Tigranes the Great by Pompey in 66 BC during the Third Mithridatic War. However, following Marcus Licinius Crassus's defeat at the Battle of Carrhae in 53 BC, Armenia had been forced into an alliance with Parthia due to Rome's weakened position in the East. Antony dispatched Publius Canidius Crassus to Armenia, receiving Artavasdes II's surrender without opposition. Canidius then led an invasion into the South Caucasus, subduing Iberia. There, Canidius forced the Iberian king Pharnavaz II into an alliance against Zober, king of neighboring Albania, subduing that kingdom as well and reducing it to a Roman protectorate.
With Armenia and the Caucasus secured, Antony marched south, crossing into the Parthian province of Media Atropatene. Though Antony desired a pitched battle, the Parthians would not engage, allowing Antony to march deep into Parthian territory by mid-August of 36 BC. The speed of his advance, however, forced Antony to leave his supply train in the care of two legions (approximately 10,000 soldiers), which was then attacked and completely destroyed by the Parthian army before Antony could rescue it. Though the Armenian king Artavasdes II and his cavalry were present during the massacre, they did not intervene. Despite the ambush, Antony continued the campaign. However, he was soon forced to retreat in mid-October after a failed two-month siege of the provincial capital.
The retreat soon proved a disaster as Antony's demoralized army faced increasing supply difficulties in the mountainous terrain during winter while constantly being harassed by the Parthian army. According to the Greek historian Plutarch, eighteen battles were fought between the retreating Romans and the Parthians during the month-long march back to Armenia, with approximately 20,000 infantry and 4,000 cavalry dying during the retreat alone. Once in Armenia, Antony quickly marched back to Syria to protect his interests there by late 36 BC, losing an additional 8,000 soldiers along the way. In all, two-fifths of his original army (some 80,000 men) had died during his failed campaign.
Antony and Cleopatra
Meanwhile, in Rome, the triumvirate was no more. Octavian forced Lepidus to resign when the older triumvir attempted to take control of Sicily following the defeat of Sextus. Now in sole power, Octavian was occupied in wooing the traditional Republican aristocracy to his side. He married Livia and began attacking Antony in order to raise himself to power. He argued that Antony was a man of low morals who had abandoned his faithful wife in Rome, along with their children, to be with the promiscuous queen of Egypt. Antony was accused of everything, but most of all of "going native", an unforgivable crime to the proud Romans. Several times Antony was summoned to Rome, but he remained in Alexandria with Cleopatra.
Again with Egyptian money, Antony invaded Armenia, this time successfully. On his return, a mock Roman triumph was celebrated in the streets of Alexandria. The parade through the city was a pastiche of Rome's most important military celebration. For the finale, the whole city was summoned to hear a momentous political statement. Surrounded by Cleopatra and her children, Antony ended his alliance with Octavian.
He distributed kingdoms among his children: Alexander Helios was named king of Armenia, Media, and Parthia (territories which were mostly not under Roman control), his twin Cleopatra Selene received Cyrenaica and Libya, and the young Ptolemy Philadelphus was awarded Syria and Cilicia. As for Cleopatra, she was proclaimed Queen of Kings and Queen of Egypt, to rule with Caesarion (Ptolemy XV Caesar, son of Cleopatra by Julius Caesar), King of Kings and King of Egypt. Most important of all, Caesarion was declared legitimate son and heir of Caesar. These proclamations were known as the Donations of Alexandria and caused a fatal breach in Antony's relations with Rome.
While the distribution of nations among Cleopatra's children was hardly a conciliatory gesture, it did not pose an immediate threat to Octavian's political position. Far more dangerous was the acknowledgment of Caesarion as legitimate and heir to Caesar's name. Octavian's base of power was his link with Caesar through adoption, which granted him much-needed popularity and loyalty of the legions. To see this convenient situation attacked by a child borne by the richest woman in the world was something Octavian could not accept. The triumvirate expired on the last day of 33 BC and was not renewed. Another civil war was beginning.
During 33 and 32 BC, a propaganda war was fought in the political arena of Rome, with accusations flying between the two sides. Antony (in Egypt) divorced Octavia and accused Octavian of being a social upstart, of usurping power, and of forging the papers of his adoption by Caesar. Octavian responded with charges of treason: of illegally keeping provinces that should have been given to other men by lot, as was Rome's tradition, and of starting wars against foreign nations (Armenia and Parthia) without the consent of the senate.
Antony was also held responsible for Sextus Pompey's execution without a trial. In 32 BC, the senate deprived him of his powers and declared war against Cleopatra – not Antony, because Octavian had no wish to advertise his role in perpetuating Rome's internecine bloodshed. Both consuls, Gnaeus Domitius Ahenobarbus and Gaius Sosius, and a third of the senate abandoned Rome to meet Antony and Cleopatra in Greece.
In 31 BC, the war started. Octavian's general Marcus Vipsanius Agrippa captured the Greek city and naval port of Methone, loyal to Antony. The enormous popularity of Octavian with the legions secured the defection of the provinces of Cyrenaica and Greece to his side. On 2 September, the naval Battle of Actium took place. Antony and Cleopatra's navy was overwhelmed, and they were forced to escape to Egypt with 60 ships.
Death
Octavian, now close to absolute power, invaded Egypt in August 30 BC, assisted by Agrippa. With no other refuge left, Antony stabbed himself with his sword in the mistaken belief that Cleopatra had already done so. When he found out that Cleopatra was still alive, his friends brought him to Cleopatra's monument, where she was hiding, and he died in her arms.
Cleopatra was allowed to conduct Antony's burial rites after she had been captured by Octavian. Realising that she was destined to be displayed in Octavian's triumph in Rome, she made several attempts to take her own life and finally succeeded in mid-August. Octavian had Caesarion and Antyllus killed, but he spared Iullus as well as Antony's children by Cleopatra, who were paraded through the streets of Rome.
Aftermath and legacy
Cicero's son, Cicero Minor, announced Antony's death to the senate. Antony's honours were revoked and his statues removed, but he was not subject to a complete damnatio memoriae. Cicero Minor also made a decree that no member of the Antonii would ever bear the name Marcus again. "In this way Heaven entrusted the family of Cicero the final acts in the punishment of Antony."
When Antony died, Octavian became uncontested ruler of Rome. In the following years, Octavian, who was known as Augustus after 27 BC, managed to accumulate in his person all administrative, political, and military offices. When Augustus died in AD 14, his political powers passed to his adopted son Tiberius; the Roman Empire had begun.
The rise of Caesar and the subsequent civil war between his two most powerful adherents effectively ended the credibility of the Roman oligarchy as a governing power. It ensured that all future power struggles would centre upon which individual would achieve supreme control of the government, eliminating the senate and the former magisterial structure as important foci of power in such conflicts. Thus, in history, Antony appears as one of Caesar's main adherents, as one of the two men, alongside Octavian Augustus, around whom power coalesced following the assassination of Caesar, and finally as one of the three men chiefly responsible for the demise of the Roman Republic.
Marriages and issue
Antony was known to have an obsession with women and sex. He had many mistresses (including Cytheris) and was married in succession to Fadia, Antonia, Fulvia, Octavia and Cleopatra. He left a number of children. Through his daughters by Octavia, he would be ancestor to the Roman emperors Caligula, Claudius and Nero.
Marriage to Fadia, a daughter of a freedman. According to Cicero, Fadia bore Antony several children. Nothing is known about Fadia or their children. Cicero is the only Roman source that mentions Antony's first wife.
Marriage to first paternal cousin Antonia Hybrida Minor. According to Plutarch, Antony threw her out of his house in Rome because she slept with his friend, the tribune Publius Cornelius Dolabella. This occurred by 47 BC and Antony divorced her. By Antonia, he had a daughter:
Antonia, married the wealthy Greek Pythodoros of Tralles.
Marriage to Fulvia, by whom he had two sons:
Marcus Antonius Antyllus, murdered by Octavian in 30 BC.
Iullus Antonius, married Claudia Marcella the Elder, daughter of Octavia.
Marriage to Octavia the Younger, sister of Octavian, later emperor Augustus; they had two daughters:
Antonia the Elder married Lucius Domitius Ahenobarbus (consul 16 BC); maternal grandmother of the Empress Valeria Messalina and paternal grandmother of the emperor Nero.
Antonia the Younger married Nero Claudius Drusus, the younger son of the Empress Livia Drusilla and brother of the emperor Tiberius; mother of the emperor Claudius, paternal grandmother of the emperor Caligula and empress Agrippina the Younger, and maternal great-grandmother of the emperor Nero.
Children with Queen Cleopatra VII of Egypt, the former lover of Julius Caesar:
Alexander Helios
Cleopatra Selene II, married King Juba II of Numidia and later Mauretania; the queen of Syria, Zenobia of Palmyra, was reportedly descended from Selene and Juba II.
Ptolemy Philadelphus.
Descendants
Through his daughters by Octavia, he was the paternal great-grandfather of the Roman emperor Caligula, the maternal grandfather of the emperor Claudius, and both maternal great-great-grandfather and paternal great-great-uncle of the emperor Nero of the Julio-Claudian dynasty. Through his eldest daughter, he was ancestor to the long line of kings and co-rulers of the Bosporan Kingdom, the longest-living Roman client kingdom, as well as the rulers and royalty of several other Roman client states. Through his daughter by Cleopatra, Antony was ancestor to the royal family of Mauretania, another Roman client kingdom, while through his sole surviving son Iullus, he was ancestor to several famous Roman statesmen.
1. Antonia, born 50 BC, had 1 child
A. Pythodorida of Pontus, 30 BC or 29 BC – 38 AD, had 3 children
I. Artaxias III, King of Armenia, 13 BC – 35 AD, died without issue
II. Polemon II, King of Pontus, 12 BC or 11 BC – 74 AD, died without issue
III. Antonia Tryphaena, Queen of Thrace, 10 BC – 55 AD, had 4 children
a. Rhoemetalces II, King of Thrace, died 38 AD, died without issue
b. Gepaepyris, Queen of the Bosporan Kingdom, had 2 children
i. Tiberius Julius Mithridates, King of the Bosporan Kingdom, died 68 AD, died without issue
ii. Tiberius Julius Cotys I, King of the Bosporan Kingdom, had 1 child
i. Tiberius Julius Rhescuporis I, King of the Bosporan Kingdom, died 90 AD, had 1 child
i. Tiberius Julius Sauromates I, King of the Bosporan Kingdom, had 1 child
i. Tiberius Julius Cotys II, King of the Bosporan Kingdom, had 1 child
i. Rhoemetalces, King of the Bosporan Kingdom, died 153 AD, had 1 child
i. Eupator, King of the Bosporan Kingdom, died 174 AD, had 1 child
i. Tiberius Julius Sauromates II, King of the Bosporan Kingdom, died 210 AD or 211 AD, had 2 children
i. Tiberius Julius Rhescuporis II, King of the Bosporan Kingdom, died 227 AD, had 1 child
i. Tiberius Julius Rhescuporis III, King of the Bosporan Kingdom, died 227 AD
ii. Tiberius Julius Cotys III, King of the Bosporan Kingdom, died 235 AD, had 3 children
i. Tiberius Julius Sauromates III, King of the Bosporan Kingdom, died 232 AD
ii. Tiberius Julius Rhescuporis IV, King of the Bosporan Kingdom, died 235 AD
iii. Tiberius Julius Ininthimeus, King of the Bosporan Kingdom, died 240 AD, had 1 child
i. Tiberius Julius Rhescuporis V, King of the Bosporan Kingdom, died 276 AD, had 3 children
i. Tiberius Julius Pharsanzes, King of the Bosporan Kingdom, died 254 AD
ii. Synges, King of the Bosporan Kingdom, died 276 AD
iii. Tiberius Julius Teiranes, King of the Bosporan Kingdom, died 279 AD, had 2 children
i. Tiberius Julius Sauromates IV, King of the Bosporan Kingdom, died 276 AD
ii. Theothorses, King of the Bosporan Kingdom, died 309 AD, had 3 children
i. Tiberius Julius Rhescuporis VI, King of the Bosporan Kingdom, died 342 AD
ii. Rhadamsades, King of the Bosporan Kingdom, died 323 AD
iii. Nana, Queen of Caucasian Iberia, died 363 AD
i. Rev II of Iberia
i. Sauromaces II of Iberia
ii. Trdat of Iberia
ii. Aspacures II of Iberia
c. Cotys IX, King of Lesser Armenia
d. Pythodoris II of Thrace, died without issue
2. Marcus Antonius Antyllus, 47–30 BC, died without issue
3. Iullus Antonius, 43–2 BC, had 3 children
A. Antonius, died young, no issue
B. Lucius Antonius, 20 BC – 25 AD, issue unknown
C. Iulla Antonia (?), born after 19 BC, issue unknown
4. Prince Alexander Helios of Egypt, born 40 BC, died without issue (presumably)
5. Cleopatra Selene, Queen of Mauretania, 40 BC – 6 AD, had 2 children
A. Ptolemy, King of Mauretania, 1 BC – 40 AD, had 1 child
I. Drusilla, Queen of Emesa, 38–79 AD, had 1 child
a. Gaius Julius Alexio, King of Emesa, had 1 child
B. Princess Drusilla of Mauretania, born 5 AD or 8 BC
6. Antonia Major, 39 BC – before 25 AD, had 3 children
A. Domitia Lepida the Elder, c. 19 BC – 59 AD, had 1 child
I. Quintus Haterius Antoninus
B. Gnaeus Domitius Ahenobarbus, 17 BC – 40 AD, had 1 child
I. Nero (Lucius Domitius Ahenobarbus) (see line of Antonia Minor below)
C. Domitia Lepida the Younger, 10 BC – 54 AD, had 3 children
I. Marcus Valerius Messala Corvinus
II. Valeria Messalina, 17 or 20–48 AD, had 2 children
a. (Messalina was the mother of the two youngest children of the Roman emperor Claudius listed below)
III. Faustus Cornelius Sulla Felix, 22–62 AD, had 1 child
a. a son (this child and the only child of the Claudia Antonia listed below are the same person)
7. Antonia Minor, 36 BC – 37 AD, had 3 children
A. Germanicus Julius Caesar, 15 BC – 19 AD, had 6 children
I. Nero Julius Caesar Germanicus, 6–30 AD, died without issue
II. Drusus Julius Caesar Germanicus, 8–33 AD, died without issue
III. Gaius Julius Caesar Augustus Germanicus (Caligula), 12–41 AD, had 1 child;
a. Julia Drusilla, 39–41 AD, died young
IV. Julia Agrippina (Agrippina the Younger), 15–59 AD, had 1 child;
a. Nero Claudius Caesar Augustus Germanicus, 37–68 AD, had 1 child;
i. Claudia Augusta, January 63 AD – April 63 AD, died young
V. Julia Drusilla, 16–38 AD, died without issue
VI. Julia Livilla, 18–42 AD, died without issue
B. Claudia Livia Julia (Livilla), 13 BC – 31 AD, had three children
I. Julia Livia, 7–43 AD, had 4 children
a. Gaius Rubellius Plautus, 33–62 AD, had several children
b. Rubellia Bassa, born between 33 AD and 38 AD, had at least 1 child
i. Octavius Laenas, had at least 1 child
i. Sergius Octavius Laenas Pontianus
c. Gaius Rubellius Blandus
d. Rubellius Drusus
II. Tiberius Julius Caesar Nero Gemellus, 19–37 or 38 AD, died without issue
III. Tiberius Claudius Caesar Germanicus II Gemellus, 19–23 AD, died young
C. Tiberius Claudius Caesar Augustus Germanicus, 10 BC – 54 AD, had 4 children
I. Tiberius Claudius Drusus, died young
II. Claudia Antonia, c. 30–66 AD, had 1 child
a. a son, died young
III. Claudia Octavia, 39 or 40–62 AD, died without issue
IV. Tiberius Claudius Caesar Britannicus, 41–55 AD, died without issue
8. Prince Ptolemy Philadelphus of Egypt, 36–29 BC, died without issue (presumably)
Artistic portrayals
Works in which the character of Mark Antony plays a central role:
William Shakespeare's Julius Caesar
Julius Caesar (1950 film) based on this (played by Charlton Heston)
Julius Caesar (1953 film) based on this (played by Marlon Brando)
Julius Caesar (1970 film) based on this (played by Charlton Heston again)
Antony and Cleopatra, several works with that title
John Dryden's 1677 play All for Love
Jules Massenet's 1914 opera Cléopâtre
The 1934 film Cleopatra (played by Henry Wilcoxon)
Orson Welles' innovative 1937 Mercury Theatre adaptation of Shakespeare's Julius Caesar has George Coulouris as Marcus Antonius.
The 1953 film Serpent of the Nile (played by Raymond Burr)
The 1963 film Cleopatra (played by Richard Burton)
The 1964 film Carry On Cleo (played by Sid James)
The 1983 miniseries The Cleopatras (played by Christopher Neame)
The TV series Xena: Warrior Princess (played by Manu Bennett)
In Age of Empires: The Rise of Rome, Mark Antony is featured as a short swordsman.
The 1999 film Cleopatra (played by Billy Zane)
The Capcom video game Shadow of Rome, in which he is depicted as the main antagonist
The 2003 TV movie Imperium: Augustus (played by Massimo Ghini)
The 2005 TV mini series Empire (played by Vincent Regan)
The 2005–2007 HBO/BBC TV series Rome (played by James Purefoy)
The 2009–2013 TV series Horrible Histories (played by Mathew Baynton), and the 2015 reboot series of the same name (portrayed by Tom Stourton in 2019)
The 2006 BBC One docudrama Ancient Rome: The Rise and Fall of an Empire (played by Alex Ferns)
As Cleopatra's guardian and level boss (of Lust) in the Xbox 360 game Dante's Inferno released by Visceral Games in 2010.
The Choices: Stories You Play visual novel A Courtesan of Rome, in which he is depicted as one of the love interests.
The 2021 TV series Domina (played by Liam Garrigan)
Novels
In Colleen McCullough's Masters of Rome series (1990–2007), Antony is portrayed as a deeply flawed character, a brave warrior but sexually promiscuous, often drunk and foolish, and a monster of vanity who loves riding in a chariot drawn by lions.
Margaret George's The Memoirs of Cleopatra (1997)
Conn Iggulden's Emperor novels (2003–13)
Robert Harris's Dictator (2015)
Michael Livingston's The Shards of Heaven (2015)
Poetry
Geoffrey Chaucer's fourteenth-century poem The Legend of Good Women.
Lytle, William Haines (1826–1863), Antony and Cleopatra.
Constantine P. Cavafy's poem The God Abandons Antony (1911), a hymn to human dignity, depicts the imaginary last moments of Mark Antony while he sees his fortunes turning around.
See also
Flamen Divi Julii, priest of the cult of Caesar, a priesthood which Mark Antony was the first to hold.
Antonia gens, the ancestral gens of Mark Antony.
Notes
References
Citations
Primary sources
Dio Cassius xli.–liii
Appian, Bell. Civ. i.–v.
Caesar, Commentarii de Bello Gallico and Commentarii de Bello Civili
Cicero, Letters and Philippics
Orations: The fourteen Philippics against Marcus Antonius ~ Tufts University Classics Collection
Plutarch, Parallel Lives (Lives of the Noble Greeks and Romans)
Plutarch's Parallel Lives: "Antony" ~ Internet Classics Archive (MIT)
Plutarch's Parallel Lives: "Pompey" ~ Internet Classics Archive (MIT)
Plutarch's Parallel Lives: "Life of Antony" – Loeb Classical Library edition, 1920
Plutarch's Parallel Lives: "The Comparison of Demetrius and Antony" ~ Internet Classics Archive (MIT)
Josephus, The Jewish War
Velleius Paterculus, The Roman History, II.60–87.
Secondary sources
Renucci, Pierre. Marc Antoine, un destin inachevé entre César et Cléopâtre (2014)
External links
MarkAntony.org
Shakespeare's Funeral Oration of Mark Antony in English and Latin translation
The Life of Marc Antony, in BTM Format
83 BC births
30 BC deaths
1st-century BC Roman augurs
1st-century BC Roman consuls
1st-century BC Roman generals
Ancient Egyptian royal consorts
Magistri equitum (Roman Republic)
Ancient Roman military personnel who committed suicide
Ancient Romans who committed suicide
Mark Antony
Correspondents of Cicero
Husbands of Cleopatra
Husbands of Fulvia
Military personnel of Julius Caesar
People of the Roman–Parthian Wars
Populares
Ptolemaic dynasty
Roman people of the Gallic Wars
Suicides by sharp instrument in Egypt
Tribunes of the plebs |
19961
https://en.wikipedia.org/wiki/Manchester%20United%20F.C.
Manchester United F.C.

Manchester United Football Club is a professional football club based in Old Trafford, Greater Manchester, England, that competes in the Premier League, the top flight of English football. Nicknamed "the Red Devils", the club was founded as Newton Heath LYR Football Club in 1878, but changed its name to Manchester United in 1902. The club moved from Newton Heath to its current stadium, Old Trafford, in 1910.
Manchester United have won the joint-most trophies in English club football, including a record 20 League titles, 12 FA Cups, five League Cups and a record 21 FA Community Shields. They have won the European Cup/UEFA Champions League three times, and the UEFA Europa League, the UEFA Cup Winners' Cup, the UEFA Super Cup, the Intercontinental Cup and the FIFA Club World Cup once each. In 1968, under the management of Matt Busby, 10 years after eight of the club's players were killed in the Munich air disaster, they became the first English club to win the European Cup. Alex Ferguson is the club's longest-serving and most successful manager, winning 38 trophies, including 13 league titles, five FA Cups and two UEFA Champions League titles, between 1986 and 2013. In the 1998–99 season, under Ferguson, the club became the first in the history of English football to achieve the European treble of the Premier League, FA Cup and UEFA Champions League. In winning the UEFA Europa League under José Mourinho in 2016–17, they also became one of five clubs to have won the original three main UEFA club competitions (the Champions League, Europa League and Cup Winners' Cup).
Manchester United is one of the most widely supported football clubs in the world, and has rivalries with Liverpool, Manchester City, Arsenal and Leeds United. Manchester United was the highest-earning football club in the world for 2016–17, with an annual revenue of €676.3 million, and the world's third most valuable football club in 2019, valued at £3.15 billion ($3.81 billion). After being floated on the London Stock Exchange in 1991, the club was taken private in 2005 after a purchase by Malcolm Glazer valued at almost £800 million, more than £500 million of which was borrowed money that became the club's debt. From 2012, some shares of the club were listed on the New York Stock Exchange, although the Glazer family retains overall ownership and control of the club.
History
Early years (1878–1945)
Manchester United was formed in 1878 as Newton Heath LYR Football Club by the Carriage and Wagon department of the Lancashire and Yorkshire Railway (LYR) depot at Newton Heath. The team initially played games against other departments and railway companies, but on 20 November 1880, they competed in their first recorded match; wearing the colours of the railway company – green and gold – they were defeated 6–0 by Bolton Wanderers' reserve team. By 1888, the club had become a founding member of The Combination, a regional football league. Following the league's dissolution after only one season, Newton Heath joined the newly formed Football Alliance, which ran for three seasons before being merged with The Football League. This resulted in the club starting the 1892–93 season in the First Division, by which time it had become independent of the railway company and dropped the "LYR" from its name. After two seasons, the club was relegated to the Second Division.
In January 1902, with debts of £2,670, the club was served with a winding-up order. Captain Harry Stafford found four local businessmen, including John Henry Davies (who became club president), each willing to invest £500 in return for a direct interest in running the club and who subsequently changed the name; on 24 April 1902, Manchester United was officially born. Under Ernest Mangnall, who assumed managerial duties in 1903, the team finished as Second Division runners-up in 1906 and secured promotion to the First Division, which they won in 1908 – the club's first league title. The following season began with victory in the first ever Charity Shield and ended with the club's first FA Cup title. Manchester United won the First Division for the second time in 1911, but at the end of the following season, Mangnall left the club to join Manchester City.
In 1922, three years after the resumption of football following the First World War, the club was relegated to the Second Division, where it remained until regaining promotion in 1925. Relegated again in 1931, Manchester United became a yo-yo club, achieving its all-time lowest position of 20th place in the Second Division in 1934. Following the death of principal benefactor John Henry Davies in October 1927, the club's finances deteriorated to the extent that Manchester United would likely have gone bankrupt had it not been for James W. Gibson, who, in December 1931, invested £2,000 and assumed control of the club. In the 1938–39 season, the last year of football before the Second World War, the club finished 14th in the First Division.
Busby years (1945–1969)
In October 1945, the impending resumption of football after the war led to the managerial appointment of Matt Busby, who demanded an unprecedented level of control over team selection, player transfers and training sessions. Busby led the team to second-place league finishes in 1947, 1948 and 1949, and to FA Cup victory in 1948. In 1952, the club won the First Division, its first league title for 41 years. They then won back-to-back league titles in 1956 and 1957; the squad, who had an average age of 22, were nicknamed "the Busby Babes" by the media, a testament to Busby's faith in his youth players. In 1957, Manchester United became the first English team to compete in the European Cup, despite objections from The Football League, who had denied Chelsea the same opportunity the previous season. En route to the semi-final, which they lost to Real Madrid, the team recorded a 10–0 victory over Belgian champions Anderlecht, which remains the club's biggest victory on record.
The following season, on the way home from a European Cup quarter-final victory against Red Star Belgrade, the aircraft carrying the Manchester United players, officials and journalists crashed while attempting to take off after refuelling in Munich, Germany. The Munich air disaster of 6 February 1958 claimed 23 lives, including those of eight players – Geoff Bent, Roger Byrne, Eddie Colman, Duncan Edwards, Mark Jones, David Pegg, Tommy Taylor and Billy Whelan – and injured several more.
Assistant manager Jimmy Murphy took over as manager while Busby recovered from his injuries and the club's makeshift side reached the FA Cup final, which they lost to Bolton Wanderers. In recognition of the team's tragedy, UEFA invited the club to compete in the 1958–59 European Cup alongside eventual League champions Wolverhampton Wanderers. Despite approval from The Football Association, The Football League determined that the club should not enter the competition, since it had not qualified. Busby rebuilt the team through the 1960s by signing players such as Denis Law and Pat Crerand, who combined with the next generation of youth players – including George Best – to win the FA Cup in 1963. The following season, they finished second in the league, then won the title in 1965 and 1967. In 1968, Manchester United became the first English club to win the European Cup, beating Benfica 4–1 in the final with a team that contained three European Footballers of the Year: Bobby Charlton, Denis Law and George Best. They then represented Europe in the 1968 Intercontinental Cup against Estudiantes of Argentina, but lost the tie after losing the first leg in Buenos Aires, before a 1–1 draw at Old Trafford three weeks later. Busby resigned as manager in 1969 before being replaced by the reserve team coach, former Manchester United player Wilf McGuinness.
1969–1986
Following an eighth-place finish in the 1969–70 season and a poor start to the 1970–71 season, Busby was persuaded to temporarily resume managerial duties, and McGuinness returned to his position as reserve team coach. In June 1971, Frank O'Farrell was appointed as manager, but lasted less than 18 months before being replaced by Tommy Docherty in December 1972. Docherty saved Manchester United from relegation that season, only to see them relegated in 1974; by that time the trio of Best, Law, and Charlton had left the club. The team won promotion at the first attempt and reached the FA Cup final in 1976, but were beaten by Southampton. They reached the final again in 1977, beating Liverpool 2–1. Docherty was dismissed shortly afterwards, following the revelation of his affair with the club physiotherapist's wife.
Dave Sexton replaced Docherty as manager in the summer of 1977. Despite major signings, including Joe Jordan, Gordon McQueen, Gary Bailey, and Ray Wilkins, the team failed to win any trophies; they finished second in 1979–80 and lost to Arsenal in the 1979 FA Cup Final. Sexton was dismissed in 1981, even though the team won the last seven games under his direction. He was replaced by Ron Atkinson, who immediately broke the British record transfer fee to sign Bryan Robson from his former club West Bromwich Albion. Under Atkinson, Manchester United won the FA Cup in 1983 and 1985 and beat rivals Liverpool to win the 1983 Charity Shield. In 1985–86, after 13 wins and two draws in its first 15 matches, the club was favourite to win the league but finished in fourth place. The following season, with the club in danger of relegation by November, Atkinson was dismissed.
Ferguson years (1986–2013)
Alex Ferguson and his assistant Archie Knox arrived from Aberdeen on the day of Atkinson's dismissal, and guided the club to an 11th-place finish in the league. Despite a second-place finish in 1987–88, the club was back in 11th place the following season. Reportedly on the verge of being dismissed, Ferguson's job was saved by victory over Crystal Palace in the 1990 FA Cup Final. The following season, Manchester United claimed their first UEFA Cup Winners' Cup title. That triumph allowed the club to compete in the European Super Cup for the first time, where United beat European Cup holders Red Star Belgrade 1–0 at Old Trafford. The club appeared in two consecutive League Cup finals in 1991 and 1992, beating Nottingham Forest 1–0 in the second to claim that competition for the first time as well. In 1993, the club won its first league title since 1967, and a year later, for the first time since 1957, it won a second consecutive title – alongside the FA Cup – to complete the first "Double" in the club's history. United then became the first English club to do the Double twice when they won both competitions again in 1995–96, before retaining the league title once more in 1996–97 with a game to spare.
In the 1998–99 season, Manchester United became the first team to win the Premier League, FA Cup and UEFA Champions League – "The Treble" – in the same season. Losing 1–0 going into injury time in the 1999 UEFA Champions League Final, Teddy Sheringham and Ole Gunnar Solskjær scored late goals to claim a dramatic victory over Bayern Munich, in what is considered one of the greatest comebacks of all time. That summer, Ferguson received a knighthood for his services to football. In November 1999, the club became the only British team to ever win the Intercontinental Cup with a 1–0 victory over Palmeiras in Tokyo.
Manchester United won the league again in the 1999–2000 and 2000–01 seasons, becoming only the fourth club to win the English title three times in a row. The team finished third in 2001–02, before regaining the title in 2002–03. They won the 2003–04 FA Cup, beating Millwall 3–0 in the final at the Millennium Stadium in Cardiff to lift the trophy for a record 11th time. In the 2005–06 season, Manchester United failed to qualify for the knockout phase of the UEFA Champions League for the first time in over a decade, but recovered to secure a second-place league finish and victory over Wigan Athletic in the 2006 Football League Cup Final. The club regained the Premier League title in the 2006–07 season, before completing the European double in 2007–08 with a 6–5 penalty shoot-out victory over Chelsea in the 2008 UEFA Champions League Final in Moscow to go with their 17th English league title. Ryan Giggs made a record 759th appearance for the club in that game, overtaking previous record holder Bobby Charlton. In December 2008, the club became the first British team to win the FIFA Club World Cup and followed this with the 2008–09 Football League Cup, and its third successive Premier League title. That summer, forward Cristiano Ronaldo was sold to Real Madrid for a world record £80 million. In 2010, Manchester United defeated Aston Villa 2–1 at Wembley to retain the League Cup, its first successful defence of a knockout cup competition.
After finishing as runners-up to Chelsea in the 2009–10 season, United achieved a record 19th league title in 2010–11, securing the championship with a 1–1 away draw against Blackburn Rovers on 14 May 2011. This was extended to 20 league titles in 2012–13, securing the championship with a 3–0 home win against Aston Villa on 22 April 2013.
2013–present
On 8 May 2013, Ferguson announced that he was to retire as manager at the end of the football season, but would remain at the club as a director and club ambassador. He retired as the most decorated manager in football history. The club announced the next day that Everton manager David Moyes would replace him from 1 July, having signed a six-year contract. Ryan Giggs took over as interim player-manager 10 months later, on 22 April 2014, when Moyes was sacked after a poor season in which the club failed to defend their Premier League title and failed to qualify for the UEFA Champions League for the first time since 1995–96. They also failed to qualify for the Europa League, meaning that it was the first time Manchester United had not qualified for a European competition since 1990. On 19 May 2014, it was confirmed that Louis van Gaal would replace Moyes as Manchester United manager on a three-year deal, with Giggs as his assistant. Malcolm Glazer, the patriarch of the family that owns the club, died on 28 May 2014.
Under Van Gaal, United won a 12th FA Cup, but a disappointing slump in the middle of his second season led to rumours of the board sounding out potential replacements. Van Gaal was ultimately sacked just two days after the cup final victory, with United having finished fifth in the league. Former Porto, Chelsea, Inter Milan and Real Madrid manager José Mourinho was appointed in his place on 27 May 2016. Mourinho signed a three-year contract, and in his first season won the FA Community Shield, EFL Cup and UEFA Europa League. Wayne Rooney scored his 250th goal for United, a stoppage-time equaliser in a league game against Stoke City in January 2017, surpassing Sir Bobby Charlton as the club's all-time top scorer. The following season, United finished second in the league – their highest league placing since 2013 – but were still 19 points behind rivals Manchester City. Mourinho also guided the club to a 19th FA Cup Final, but they lost 1–0 to Chelsea. On 18 December 2018, with United in sixth place in the Premier League table, 19 points behind leaders Liverpool and 11 points outside the Champions League places, Mourinho was sacked after 144 games in charge. The following day, former United striker Ole Gunnar Solskjær was appointed as caretaker manager until the end of the season. On 28 March 2019, after winning 14 of his first 19 matches in charge, Solskjær was appointed permanent manager on a three-year deal.
On 18 April 2021, Manchester United announced they were joining 11 other European clubs as founding members of the European Super League, a proposed 20-team competition intended to rival the UEFA Champions League. The announcement drew a significant backlash from supporters, other clubs, media partners, sponsors, players and the UK Government, forcing the club to withdraw just two days later. The failure of the project led to the resignation of executive vice-chairman Ed Woodward, while resultant protests against Woodward and the Glazer family led to a pitch invasion ahead of a league match against Liverpool on 2 May 2021, which saw the first postponement of a Premier League game due to supporter protests in the competition's history.
On the pitch, United equalled their own record for the biggest win in Premier League history with a 9–0 win over Southampton on 2 February 2021, but ended the season with defeat on penalties in the UEFA Europa League Final against Villarreal, going four straight seasons without a trophy. On 20 November 2021, Solskjær left his role as manager. Former midfielder Michael Carrick took charge for the next three games, before the appointment of Ralf Rangnick as interim manager until the end of the season.
Crest and colours
The club crest is derived from the Manchester City Council coat of arms, although all that remains of it on the current crest is the ship in full sail. The devil stems from the club's nickname "The Red Devils"; it was included on club programmes and scarves in the 1960s, and incorporated into the club crest in 1970, although the crest was not included on the chest of the shirt until 1971. In 1975, the red devil ("A devil facing the sinister guardant supporting with both hands a trident gules") was granted as a heraldic badge by the College of Arms to the English Football League for use by Manchester United.
Newton Heath's uniform in 1879, four years before the club played its first competitive match, has been documented as 'white with blue cord'. A photograph of the Newton Heath team, taken in 1892, is believed to show the players wearing red-and-white quartered jerseys and navy blue knickerbockers. Between 1894 and 1896, the players wore green and gold jerseys which were replaced in 1896 by white shirts, which were worn with navy blue shorts.
After the name change in 1902, the club colours were changed to red shirts, white shorts, and black socks, which has become the standard Manchester United home kit. Very few changes were made to the kit until 1922 when the club adopted white shirts bearing a deep red "V" around the neck, similar to the shirt worn in the 1909 FA Cup Final. They remained part of their home kits until 1927. For a period in 1934, the cherry and white hooped change shirt became the home colours, but the following season the red shirt was recalled after the club's lowest ever league placing of 20th in the Second Division and the hooped shirt dropped back to being the change. The black socks were changed to white from 1959 to 1965, then to red until 1971 (with white worn on occasion), before the club reverted to black. Black shorts and white socks are sometimes worn with the home strip, most often in away games, if there is a clash with the opponent's kit. For 2018–19, black shorts and red socks became the primary choice for the home kit. Since 1997–98, white socks have been the preferred choice for European games, which are typically played on weeknights, to aid with player visibility. The current home kit is a red shirt with the trademark Adidas three stripes in red on the shoulders, white shorts, and black socks.
The Manchester United away strip has often been a white shirt, black shorts and white socks, but there have been several exceptions. These include an all-black strip with blue and gold trimmings between 1993 and 1995, the navy blue shirt with silver horizontal pinstripes worn during the 1999–2000 season, and the 2011–12 away kit, which had a royal blue body and sleeves with hoops made of small midnight navy blue and black stripes, with black shorts and blue socks. An all-grey away kit worn during the 1995–96 season was dropped after just five games; in its final outing against Southampton, Alex Ferguson instructed the team to change into the third kit during half-time. The kit was dropped because the players claimed to have trouble picking out their teammates against the crowd; United failed to win a competitive game in the kit in five attempts. In 2001, to celebrate 100 years as "Manchester United", a reversible white and gold away kit was released, although the actual match day shirts were not reversible.
The club's third kit is often all-blue; this was most recently the case during the 2014–15 season. Exceptions include a green-and-gold halved shirt worn between 1992 and 1994, a blue-and-white striped shirt worn during the 1994–95 and 1995–96 seasons and once in 1996–97, an all-black kit worn during the Treble-winning 1998–99 season, and a white shirt with black-and-red horizontal pinstripes worn between 2003–04 and 2005–06. From 2006–07 to 2013–14, the third kit was the previous season's away kit, albeit updated with the new club sponsor in 2006–07 and 2010–11, apart from the 2008–09 season, when an all-blue kit was launched to mark the 40th anniversary of the 1967–68 European Cup success.
Grounds
1878–1893: North Road
Newton Heath initially played on a field on North Road, close to the railway yard; the original capacity was about 12,000, but club officials deemed the facilities inadequate for a club hoping to join The Football League. Some expansion took place in 1887, and in 1891, Newton Heath used its minimal financial reserves to purchase two grandstands, each able to hold 1,000 spectators. Although attendances were not recorded for many of the earliest matches at North Road, the highest documented attendance was approximately 15,000 for a First Division match against Sunderland on 4 March 1893. A similar attendance was also recorded for a friendly match against Gorton Villa on 5 September 1889.
1893–1910: Bank Street
In June 1893, after the club was evicted from North Road by its owners, Manchester Deans and Canons, who felt it was inappropriate for the club to charge an entry fee to the ground, secretary A. H. Albut procured the use of the Bank Street ground in Clayton. It initially had no stands, but by the start of the 1893–94 season two had been built: one spanning the full length of the pitch on one side, and the other behind the goal at the "Bradford end". At the opposite end, the "Clayton end", the ground had been "built up, thousands thus being provided for". Newton Heath's first league match at Bank Street was played against Burnley on 1 September 1893, when 10,000 people saw Alf Farman score a hat-trick, Newton Heath's only goals in a 3–2 win. The remaining stands were completed for the following league game against Nottingham Forest three weeks later. In October 1895, before the visit of Manchester City, the club purchased a 2,000-capacity stand from the Broughton Rangers rugby league club, and put up another stand on the "reserved side" (as distinct from the "popular side"); however, weather restricted the attendance for the Manchester City match to just 12,000.
When the Bank Street ground was temporarily closed by bailiffs in 1902, club captain Harry Stafford raised enough money to pay for the club's next away game at Bristol City and found a temporary ground at Harpurhey for the next reserves game against Padiham. Following financial investment, new club president John Henry Davies paid £500 for the erection of a new 1,000-seat stand at Bank Street. Within four years, the stadium had cover on all four sides, as well as the ability to hold approximately 50,000 spectators, some of whom could watch from the viewing gallery atop the Main Stand.
1910–present: Old Trafford
Following Manchester United's first league title in 1908 and the FA Cup a year later, it was decided that Bank Street was too restrictive for Davies' ambition; in February 1909, six weeks before the club's first FA Cup title, Old Trafford was named as the home of Manchester United, following the purchase of land for around £60,000. Architect Archibald Leitch was given a budget of £30,000 for construction; original plans called for seating capacity of 100,000, though budget constraints forced a revision to 77,000. The building was constructed by Messrs Brameld and Smith of Manchester. The stadium's record attendance was registered on 25 March 1939, when an FA Cup semi-final between Wolverhampton Wanderers and Grimsby Town drew 76,962 spectators.
Bombing in the Second World War destroyed much of the stadium; the central tunnel in the South Stand was all that remained of that quarter. After the war, the club received compensation from the War Damage Commission in the amount of £22,278. While reconstruction took place, the team played its "home" games at Manchester City's Maine Road ground; Manchester United was charged £5,000 per year, plus a nominal percentage of gate receipts. Later improvements included the addition of roofs, first to the Stretford End and then to the North and East Stands. The roofs were supported by pillars that obstructed many fans' views, and they were eventually replaced with a cantilevered structure. The Stretford End was the last stand to receive a cantilevered roof, completed in time for the 1993–94 season. Four pylons, each housing 54 individual floodlights, were erected at a cost of £40,000 and first used on 25 March 1957. These were dismantled in 1987 and replaced by a lighting system embedded in the roof of each stand, which remains in use today.
The Taylor Report's requirement for an all-seater stadium lowered capacity at Old Trafford to around 44,000 by 1993. In 1995, the North Stand was redeveloped into three tiers, restoring capacity to approximately 55,000. At the end of the 1998–99 season, second tiers were added to the East and West Stands, raising capacity to around 67,000, and between July 2005 and May 2006, 8,000 more seats were added via second tiers in the north-west and north-east quadrants. Part of the new seating was used for the first time on 26 March 2006, when an attendance of 69,070 became a new Premier League record. The record was pushed steadily upwards before reaching its peak on 31 March 2007, when 76,098 spectators saw Manchester United beat Blackburn Rovers 4–1, with just 114 seats (0.15 per cent of the total capacity of 76,212) unoccupied. In 2009, reorganisation of the seating resulted in a reduction of capacity by 255 to 75,957. Manchester United has the second-highest average attendance among European football clubs, behind only Borussia Dortmund. In 2021, United co-chairman Joel Glazer said that "early-stage planning work" for the redevelopment of Old Trafford was underway, following "increasing criticism" over the lack of development of the ground since 2006.
Support
Manchester United is one of the most popular football clubs in the world, with one of the highest average home attendances in Europe. The club states that its worldwide fan base includes more than 200 officially recognised branches of the Manchester United Supporters Club (MUSC), in at least 24 countries. The club takes advantage of this support through its worldwide summer tours. Accountancy firm and sports industry consultants Deloitte estimate that Manchester United has 75 million fans worldwide. The club has the third highest social media following in the world among sports teams (after Barcelona and Real Madrid), with over 72 million Facebook followers as of July 2020. A 2014 study showed that Manchester United had the loudest fans in the Premier League.
Supporters are represented by two independent bodies: the Independent Manchester United Supporters' Association (IMUSA), which maintains close links to the club through the MUFC Fans Forum, and the Manchester United Supporters' Trust (MUST). After the Glazer family's takeover in 2005, a group of fans formed a splinter club, F.C. United of Manchester. The West Stand of Old Trafford – the "Stretford End" – is the home end and the traditional source of the club's most vocal support.
Rivalries
Manchester United has rivalries with Arsenal, Leeds United, Liverpool, and Manchester City, against whom they contest the Manchester derby.
The rivalry with Liverpool is rooted in competition between the cities during the Industrial Revolution, when Manchester was famous for its textile industry while Liverpool was a major port. The two clubs are the most successful English teams in both domestic and international competitions; between them they have won 39 league titles, 9 European Cups, 4 UEFA Cups, 5 UEFA Super Cups, 19 FA Cups, 13 League Cups, 2 FIFA Club World Cups, 1 Intercontinental Cup and 36 FA Community Shields. The fixture is considered one of the biggest rivalries in world football and the most famous in English football. Former Manchester United manager Alex Ferguson said in 2002, "My greatest challenge was knocking Liverpool right off their fucking perch".
The "Roses Rivalry" with Leeds stems from the Wars of the Roses, fought between the House of Lancaster and the House of York, with Manchester United representing Lancashire and Leeds representing Yorkshire.
The rivalry with Arsenal arises from the numerous times the two teams, as well as managers Alex Ferguson and Arsène Wenger, battled for the Premier League title. With 33 league titles between them (20 for Manchester United, 13 for Arsenal), this fixture has become known as one of the finest Premier League match-ups in history.
Global brand
Manchester United has been described as a global brand; a 2011 report by Brand Finance valued the club's trademarks and associated intellectual property at £412 million – an increase of £39 million on the previous year, and £11 million more than the second-best brand, Real Madrid – and gave the brand a strength rating of AAA (Extremely Strong). In July 2012, Manchester United was ranked first by Forbes magazine in its list of the ten most valuable sports team brands, valuing the Manchester United brand at $2.23 billion. The club is ranked third in the Deloitte Football Money League (behind Real Madrid and Barcelona). In January 2013, the club became the first sports team in the world to be valued at $3 billion. Forbes magazine valued the club at $3.3 billion – $1.2 billion higher than the next most valuable sports team. They were overtaken by Real Madrid for the next four years, but Manchester United returned to the top of the Forbes list in June 2017, with a valuation of $3.689 billion.
The core strength of Manchester United's global brand is often attributed to Matt Busby's rebuilding of the team and subsequent success following the Munich air disaster, which drew worldwide acclaim. The "iconic" team included Bobby Charlton and Nobby Stiles (members of England's World Cup winning team), Denis Law and George Best. The attacking style of play adopted by this team (in contrast to the defensive-minded "catenaccio" approach favoured by the leading Italian teams of the era) "captured the imagination of the English footballing public". Busby's team also became associated with the liberalisation of Western society during the 1960s; George Best, known as the "Fifth Beatle" for his iconic haircut, was the first footballer to significantly develop an off-the-field media profile.
As the second English football club to float on the London Stock Exchange in 1991, the club raised significant capital, with which it further developed its commercial strategy. The club's focus on commercial and sporting success brought significant profits in an industry often characterised by chronic losses. The strength of the Manchester United brand was bolstered by intense off-the-field media attention to individual players, most notably David Beckham (who quickly developed his own global brand). This attention often generates greater interest in on-the-field activities, and hence generates sponsorship opportunities – the value of which is driven by television exposure. During his time with the club, Beckham's popularity across Asia was integral to the club's commercial success in that part of the world.
Because higher league placement results in a greater share of television rights, success on the field generates greater income for the club. Since the inception of the Premier League, Manchester United has received the largest share of the revenue generated from the BSkyB broadcasting deal. Manchester United has also consistently enjoyed the highest commercial income of any English club; in 2005–06, the club's commercial arm generated £51 million, compared to £42.5 million at Chelsea, £39.3 million at Liverpool, £34 million at Arsenal and £27.9 million at Newcastle United. A key sponsorship relationship was with sportswear company Nike, who managed the club's merchandising operation as part of a £303 million 13-year partnership between 2002 and 2015. Through Manchester United Finance and the club's membership scheme, One United, those with an affinity for the club can purchase a range of branded goods and services. Additionally, Manchester United-branded media services – such as the club's dedicated television channel, MUTV – have allowed the club to expand its fan base to those beyond the reach of its Old Trafford stadium.
Sponsorship
In an initial five-year deal worth £500,000, Sharp Electronics became the club's first shirt sponsor at the beginning of the 1982–83 season, a relationship that lasted until the end of the 1999–2000 season, when Vodafone agreed a four-year, £30 million deal. Vodafone agreed to pay £36 million to extend the deal by four years, but after two seasons triggered a break clause in order to concentrate on its sponsorship of the Champions League.
To commence at the start of the 2006–07 season, American insurance corporation AIG agreed a four-year £56.5 million deal which in September 2006 became the most valuable in the world. At the beginning of the 2010–11 season, American reinsurance company Aon became the club's principal sponsor in a four-year deal reputed to be worth approximately £80 million, making it the most lucrative shirt sponsorship deal in football history. Manchester United announced their first training kit sponsor in August 2011, agreeing a four-year deal with DHL reported to be worth £40 million; it is believed to be the first instance of training kit sponsorship in English football. The DHL contract lasted for over a year before the club bought back the contract in October 2012, although they remained the club's official logistics partner. The contract for the training kit sponsorship was then sold to Aon in April 2013 for a deal worth £180 million over eight years, which also included purchasing the naming rights for the Trafford Training Centre.
The club's first kit manufacturer was Umbro, until a five-year deal was agreed with Admiral Sportswear in 1975. Adidas received the contract in 1980, before Umbro started a second spell in 1992. Umbro's sponsorship lasted for ten years, followed by Nike's record-breaking £302.9 million deal that lasted until 2015; 3.8 million replica shirts were sold in the first 22 months with the company. In addition to Nike and Chevrolet, the club also has several lower-level "platinum" sponsors, including Aon and Budweiser.
On 30 July 2012, United signed a seven-year deal with American automotive corporation General Motors, which replaced Aon as the shirt sponsor from the 2014–15 season. The new $80m-a-year shirt deal is worth $559m over seven years and features the logo of General Motors brand Chevrolet. Nike announced that they would not renew their kit supply deal with Manchester United after the 2014–15 season, citing rising costs. Since the start of the 2015–16 season, Adidas has manufactured Manchester United's kit as part of a world-record 10-year deal worth a minimum of £750 million. Plumbing products manufacturer Kohler became the club's first sleeve sponsor ahead of the 2018–19 season. Manchester United and General Motors did not renew their sponsorship deal, and the club subsequently signed a five-year, £235m sponsorship deal with TeamViewer ahead of the 2021–22 season.
Ownership and finances
Originally funded by the Lancashire and Yorkshire Railway Company, the club became a limited company in 1892 and sold shares to local supporters for £1 via an application form. In 1902, majority ownership passed to four local businessmen, including future club president John Henry Davies, who invested £500 to save the club from bankruptcy. After his death in 1927, the club faced bankruptcy yet again, but was saved in December 1931 by James W. Gibson, who assumed control after an investment of £2,000. Gibson promoted his son, Alan, to the board in 1948, but died three years later; the Gibson family retained ownership of the club through James' wife, Lillian, but the position of chairman passed to former player Harold Hardman.
Promoted to the board a few days after the Munich air disaster, Louis Edwards, a friend of Matt Busby, began acquiring shares in the club; for an investment of approximately £40,000, he accumulated a 54 per cent shareholding and took control in January 1964. When Lillian Gibson died in January 1971, her shares passed to Alan Gibson who sold a percentage of his shares to Louis Edwards' son, Martin, in 1978; Martin Edwards went on to become chairman upon his father's death in 1980. Media tycoon Robert Maxwell attempted to buy the club in 1984, but did not meet Edwards' asking price. In 1989, chairman Martin Edwards attempted to sell the club to Michael Knighton for £20 million, but the sale fell through and Knighton joined the board of directors instead.
Manchester United was floated on the stock market in June 1991 (raising £6.7 million), and received yet another takeover bid in 1998, this time from Rupert Murdoch's British Sky Broadcasting Corporation. This resulted in the formation of Shareholders United Against Murdoch – now the Manchester United Supporters' Trust – who encouraged supporters to buy shares in the club in an attempt to block any hostile takeover. The Manchester United board accepted a £623 million offer, but the takeover was blocked by the Monopolies and Mergers Commission at the final hurdle in April 1999. A few years later, a power struggle emerged between the club's manager, Alex Ferguson, and his horse-racing partners, John Magnier and J. P. McManus, who had gradually become the majority shareholders. In a dispute that stemmed from contested ownership of the horse Rock of Gibraltar, Magnier and McManus attempted to have Ferguson removed from his position as manager, and the board responded by approaching investors to attempt to reduce the Irishmen's majority.
Glazer ownership
In May 2005, Malcolm Glazer purchased the 28.7 per cent stake held by McManus and Magnier, thus acquiring a controlling interest through his investment vehicle Red Football Ltd in a highly leveraged takeover valuing the club at approximately £800 million (then approx. $1.5 billion). Once the purchase was complete, the club was taken off the stock exchange. Much of the takeover money was borrowed by the Glazers, and the debts were transferred to the club. As a result, the club went from being debt-free to being saddled with debts of £540 million, at interest rates of between 7% and 20%.
In July 2006, the club announced a £660 million debt refinancing package, resulting in a 30 per cent reduction in annual interest payments, to £62 million a year. The club's debts reached a high of £777 million in June 2007. In January 2010, with debts of £716.5 million ($1.17 billion), Manchester United further refinanced through a bond issue worth £504 million, enabling them to pay off most of the £509 million owed to international banks. The annual interest payable on the bonds – which were to mature on 1 February 2017 – was approximately £45 million. Despite the restructuring, the club's debt prompted protests from fans on 23 January 2010, at Old Trafford and the club's Trafford Training Centre. Supporter groups encouraged match-going fans to wear green and gold, the colours of Newton Heath. On 30 January, reports emerged that the Manchester United Supporters' Trust had held meetings with a group of wealthy fans, dubbed the "Red Knights", with plans to buy out the Glazers' controlling interest.
In August 2011, the Glazers were believed to have approached Credit Suisse in preparation for a $1 billion (approx. £600 million) initial public offering (IPO) on the Singapore stock exchange that would value the club at more than £2 billion; however, in July 2012, the club announced plans to list its IPO on the New York Stock Exchange instead. Shares were originally set to go on sale for between $16 and $20 each, but the price was cut to $14 by the launch of the IPO on 10 August, following negative comments from Wall Street analysts and Facebook's disappointing stock market debut in May. Even after the cut, Manchester United was valued at $2.3 billion, making it the most valuable football club in the world.
The New York Stock Exchange listing allows different shareholders to hold different voting rights in the club. Shares offered to the public ("Class A") carry one-tenth of the voting rights of the shares retained by the Glazers ("Class B"). Initially, in 2012, only 10% of the shares were offered to the public. As of 2019, the Glazers retained ultimate control of the club, holding over 70% of the shares and an even greater proportion of the voting power.
In 2012, The Guardian estimated that the club had paid a total of over £500 million in debt interest and other fees on behalf of the Glazers, and in 2019, reported that the total sum paid by the club for such fees had risen to £1 billion. At the end of 2019, the club had a net debt of nearly £400 million.
Players
First-team squad
On loan
Reserves and academy
List of under-23s and academy players with articles
On loan
Player of the Year
Coaching staff
Managerial history
Management
Owner: Glazer family via Red Football Shareholder Limited
Manchester United Limited
Manchester United Football Club
Honours
Manchester United is one of the most successful clubs in Europe in terms of trophies won. The club's first trophy was the Manchester Cup, which they won as Newton Heath LYR in 1886. In 1908, the club won their first league title, and won the FA Cup for the first time the following year. Since then, they have gone on to win a record 20 top-division titles – including a record 13 Premier League titles – and their total of 12 FA Cups is second only to Arsenal (14). Those titles have meant the club has appeared a record 30 times in the FA Community Shield (formerly the FA Charity Shield), which is played at the start of each season between the winners of the league and FA Cup from the previous season; of those 30 appearances, Manchester United have won a record 21, including four times when the match was drawn and the trophy shared by the two clubs.
The club had a successful period under the management of Matt Busby, starting with the FA Cup in 1948 and culminating with becoming the first English club to win the European Cup in 1968, winning five league titles in the intervening years. The club's most successful decade, however, came in the 1990s under Alex Ferguson: five league titles, four FA Cups, one League Cup, five Charity Shields (one shared), one UEFA Champions League, one UEFA Cup Winners' Cup, one UEFA Super Cup and one Intercontinental Cup. The club has won the Double (winning the Premier League and FA Cup in the same season) three times; the second, in 1995–96, saw them become the first club to do so twice, a feat that became known as the "Double Double". United became the sole British club to win the Intercontinental Cup in 1999 and are one of only two British clubs to have won the FIFA Club World Cup, in 2008. In 1999, United became the first English club to win the Treble.
The club's most recent trophy came in May 2017, with the 2016–17 UEFA Europa League. In winning that title, United became the fifth club to have won the "European Treble" of European Cup/UEFA Champions League, Cup Winners' Cup, and UEFA Cup/Europa League after Juventus, Ajax, Bayern Munich and Chelsea.
Domestic
League
First Division/Premier League
Winners (20; record): 1907–08, 1910–11, 1951–52, 1955–56, 1956–57, 1964–65, 1966–67, 1992–93, 1993–94, 1995–96, 1996–97, 1998–99, 1999–2000, 2000–01, 2002–03, 2006–07, 2007–08, 2008–09, 2010–11, 2012–13
Second Division
Winners (2): 1935–36, 1974–75
Cups
FA Cup
Winners (12): 1908–09, 1947–48, 1962–63, 1976–77, 1982–83, 1984–85, 1989–90, 1993–94, 1995–96, 1998–99, 2003–04, 2015–16
Football League Cup/EFL Cup
Winners (5): 1991–92, 2005–06, 2008–09, 2009–10, 2016–17
FA Charity Shield/FA Community Shield
Winners (21; record): 1908, 1911, 1952, 1956, 1957, 1965*, 1967*, 1977*, 1983, 1990*, 1993, 1994, 1996, 1997, 2003, 2007, 2008, 2010, 2011, 2013, 2016 (* shared)
European
European Cup/UEFA Champions League
Winners (3): 1967–68, 1998–99, 2007–08
European Cup Winners' Cup
Winners (1): 1990–91
UEFA Europa League
Winners (1): 2016–17
European Super Cup
Winners (1): 1991
Worldwide
Intercontinental Cup
Winners (1; British record): 1999
FIFA Club World Cup
Winners (1; British joint record): 2008
Doubles and Trebles
Doubles
League and FA Cup (3): 1993–94, 1995–96, 1998–99
League and UEFA Champions League (2): 1998–99, 2007–08
League and EFL Cup (1): 2008–09
EFL Cup and UEFA Europa League (1): 2016–17
Trebles
League, FA Cup and UEFA Champions League (1): 1998–99
Short competitions such as the Charity/Community Shield, the (now defunct) Intercontinental Cup, the FIFA Club World Cup and the UEFA Super Cup are not generally considered to contribute towards a Double or Treble.
Manchester United Women
A team called Manchester United Supporters Club Ladies began operations in the late 1970s and was unofficially recognised as the club's senior women's team. They became founding members of the North West Women's Regional Football League in 1989. The team made an official partnership with Manchester United in 2001, becoming the club's official women's team; however, in 2005, following Malcolm Glazer's takeover, the team was disbanded as it was seen to be "unprofitable". In 2018, Manchester United formed a new women's football team, which entered the second division of women's football in England for their debut season.
Footnotes
References
Further reading
External links
Official statistics website
Official Manchester United Supporters' Trust
Manchester United at Sky Sports
Manchester United at Premier League
Mesa (programming language)

Mesa is a programming language developed in the late 1970s at the Xerox Palo Alto Research Center in Palo Alto, California, United States. The language name was a pun based upon the programming language catchphrases of the time, because Mesa is a "high level" programming language.
Mesa is an ALGOL-like language with strong support for modular programming. Every library module has at least two source files: a definitions file specifying the library's interface plus one or more program files specifying the implementation of the procedures in the interface. To use a library, a program or higher-level library must "import" the definitions. The Mesa compiler type-checks all uses of imported entities; this combination of separate compilation with type-checking was unusual at the time.
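As an illustration of this split, here is a hedged sketch in Mesa-like syntax (the module names, procedures, and bodies are invented for the example; only the general DEFINITIONS/PROGRAM structure, EXPORTS/IMPORTS clauses, and bracketed parameter lists follow the style described in the Mesa literature):

```
-- Stack.mesa: the definitions file, specifying the interface only
Stack: DEFINITIONS =
BEGIN
  Push: PROCEDURE [x: INTEGER];
  Pop: PROCEDURE RETURNS [INTEGER];
END.

-- StackImpl.mesa: a program file supplying the implementation
StackImpl: PROGRAM EXPORTS Stack =
BEGIN
  -- procedure bodies for Push and Pop go here
END.

-- A client module imports the definitions; every use of
-- Stack.Push and Stack.Pop is type-checked against Stack.mesa
Client: PROGRAM IMPORTS Stack =
BEGIN
  Stack.Push[42];
END.
```

Because the client is compiled against the definitions file alone, the implementation can be recompiled and replaced without touching the client, which is what makes the separate, type-checked compilation described above possible.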
Mesa introduced several other innovations in language design and implementation, notably in the handling of software exceptions, thread synchronization, and incremental compilation.
Mesa was developed on the Xerox Alto, one of the first personal computers with a graphical user interface; however, most of the Alto's system software was written in BCPL. Mesa was the system programming language of the later Xerox Star workstations, and for the GlobalView desktop environment. Xerox PARC later developed Cedar, which was a superset of Mesa.
Mesa and Cedar had a major influence on the design of other important languages, such as Modula-2 and Java, and were an important vehicle for the development and dissemination of the fundamentals of GUIs, networked environments, and the other advances Xerox contributed to the field of computer science.
History
Mesa was originally designed in the Computer Systems Laboratory (CSL), a branch of the Xerox Palo Alto Research Center, for the Alto, an experimental micro-coded workstation. Initially, its spread was confined to PARC and a few universities to which Xerox had donated some Altos.
Mesa was later adopted as the systems programming language for Xerox's commercial workstations such as the Xerox 8010 (Xerox Star, Dandelion) and Xerox 6085 (Daybreak), in particular for the Pilot operating system.
A secondary development environment, called the Xerox Development Environment (XDE), allowed developers to debug both the Pilot operating system and ViewPoint GUI applications using a world-swap mechanism. This swapped out the entire "state" of the world, allowing low-level system crashes that paralyzed the whole system to be debugged. The technique did not scale well to large application images (several megabytes), so the Pilot/Mesa world in later releases moved away from the world-swap view when the micro-coded machines were phased out in favor of SPARC workstations and Intel PCs running a Mesa PrincOps emulator for the basic hardware instruction set.
Mesa was compiled into a stack-machine language, purportedly with the highest code density ever achieved (roughly 4 bytes per high-level language statement). This was touted in a 1981 paper where implementors from the Xerox Systems Development Department (then, the development arm of PARC), tuned up the instruction set and published a paper on the resultant code density.
Mesa was taught via the Mesa Programming Course, which took people through the wide range of technology Xerox had available at the time and ended with the programmer writing a "hack": a workable program designed to be useful. An actual example of such a hack is the BWSMagnifier, written in 1988, which allowed people to magnify sections of the workstation screen as defined by a resizable window and a changeable magnification factor. Trained Mesa programmers from Xerox were well versed in the fundamentals of GUIs, networking, exceptions, and multi-threaded programming, almost a decade before these became standard tools of the trade.
Within Xerox, Mesa was eventually superseded by the Cedar programming language. Many Mesa programmers and developers left Xerox in 1985; some of them went to DEC Systems Research Center where they used their experience with Mesa in the design of Modula-2+, and later of Modula-3.
Main features
Semantics
Mesa was a strongly typed programming language with type-checking across module boundaries, but with enough flexibility in its type system that heap allocators could be written in Mesa.
Because of its strict separation between interface and implementation, Mesa allows true incremental compilation and encourages architecture- and platform-independent programming. This separation also simplified source-level debugging, including remote debugging via the Ethernet.
Mesa had rich exception handling facilities, with four types of exceptions. It had support for thread synchronization via monitors. Mesa was the first language to implement monitor BROADCAST, a concept introduced by the Pilot operating system.
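The monitor style can be sketched as follows. This is a hedged, Mesa-like reconstruction with invented identifiers, based on the ENTRY/CONDITION/WAIT/BROADCAST vocabulary of Mesa's monitor design; it is not verbatim compilable code:

```
-- A monitor module: ENTRY procedures acquire the monitor lock on entry
Buffer: MONITOR =
BEGIN
  nonEmpty: CONDITION;
  count: INTEGER ← 0;

  Put: ENTRY PROCEDURE =
  BEGIN
    count ← count + 1;
    BROADCAST nonEmpty;  -- wake *every* process waiting on the condition
  END;

  Take: ENTRY PROCEDURE =
  BEGIN
    -- WAIT releases the monitor lock and re-acquires it on wakeup,
    -- so the condition is re-tested in a loop
    WHILE count = 0 DO WAIT nonEmpty; ENDLOOP;
    count ← count - 1;
  END;
END.
```

BROADCAST differs from a single-process wakeup in that all waiters are released and each re-checks its condition, which is why the wait sits inside a loop.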
Syntax
Mesa has an "imperative" and "algebraic" syntax, based on ALGOL and Pascal rather than on BCPL or C; for instance, compound commands are indicated by the BEGIN and END keywords rather than braces. In Mesa, all keywords are written in uppercase.
Because PARC used the 1963 variant of ASCII rather than the more common 1967 variant, the Alto's character set included a left-pointing arrow (←) rather than an underscore. As a result, Alto programmers (including those using Mesa, Smalltalk, etc.) conventionally used CamelCase for compound identifiers, a practice which was incorporated into PARC's standard programming style. On the other hand, the availability of the left-pointing arrow allowed them to use it for the assignment operator, as ALGOL originally had.
When the Mesa designers wanted to implement an exception facility, they hired a recent M.Sc. graduate from Colorado who had written his thesis on exception handling facilities in algorithmic languages. This led to the richest exception facility of its time, with a range of dedicated signalling primitives. Because the language did not have type-safe checks to verify full coverage for signal handling, uncaught exceptions were a common cause of bugs in released software.
Cedar
Mesa was the precursor to the programming language Cedar. Cedar's main additions were garbage collection, dynamic types, better string support through ropes, a limited form of type parameterization, and special syntax for identifying the type-safe parts of multi-module software packages, to ensure deterministic execution and prevent memory leaks.
Descendants
The United States Department of Defense approached Xerox to use Mesa for its "IronMan" programming language (see Steelman language requirements), but Xerox declined due to conflicting goals. Xerox PARC employees argued that Mesa was a proprietary advantage that made Xerox software engineers more productive than engineers at other companies. The Department of Defense instead eventually chose and developed the Ada programming language from the candidates.
The original Star Desktop evolved into the ViewPoint Desktop and later became GlobalView, which was ported to various Unix platforms, such as SunOS Unix and AIX. A Mesa to C compiler was written and the resulting code compiled for the target platform. This was a workable solution, but it made development on the Unix machines nearly impossible, since the power of the Mesa compiler and associated tool chain was lost with this approach. There was some commercial success on Sun SPARC workstations in the publishing world, but this approach confined the product to narrow market opportunities.
In 1976, during a sabbatical at Xerox PARC, Niklaus Wirth became acquainted with Mesa, which had a major influence in the design of his Modula-2 language.
Java explicitly refers to Mesa as a predecessor.
See also
History of the graphical user interface
References
External links
Mesa Programming Language Manual, Version 5 (1979) at bitsavers.org
Other Mesa documents at bitsavers.org
World-Stop Debuggers, Don Gillies, Xerox SDD/ISD Employee, 1984–86.
Xerox
Procedural programming languages
Concurrent programming languages
Programming languages created in 1976
Statically typed programming languages
Systems programming languages
Marsilio Ficino
Marsilio Ficino (19 October 1433 – 1 October 1499) was an Italian scholar and Catholic priest who was one of the most influential humanist philosophers of the early Italian Renaissance. He was an astrologer, a reviver of Neoplatonism in touch with the major academics of his day, and the first translator of Plato's complete extant works into Latin. His Florentine Academy, an attempt to revive Plato's Academy, influenced the direction and tenor of the Italian Renaissance and the development of European philosophy.
Early life
Ficino was born at Figline Valdarno. His father, Diotifeci d'Agnolo, was a physician under the patronage of Cosimo de' Medici. Cosimo took the young Marsilio into his household, became his lifelong patron, and made him tutor to his grandson, Lorenzo de' Medici. Giovanni Pico della Mirandola, the Italian humanist philosopher and scholar, was another of his students.
Career and thought
Platonic Academy
During the sessions at Florence of the Council of Ferrara-Florence in 1438–1445, during the failed attempts to heal the schism of the Eastern (Orthodox) and Western (Catholic) churches, Cosimo de' Medici and his intellectual circle had made acquaintance with the Neoplatonic philosopher George Gemistos Plethon, whose discourses upon Plato and the Alexandrian mystics so fascinated the humanists of Florence that they named him the second Plato. In 1459 John Argyropoulos was lecturing on Greek language and literature at Florence, and Ficino became his pupil.
When Cosimo decided to refound Plato's Academy at Florence, he chose Ficino as its head. In 1462, Cosimo supplied Ficino with Greek manuscripts of Plato's work, whereupon Ficino started translating the entire corpus into Latin (draft translation of the dialogues finished 1468–9; published 1484). Ficino also produced a translation of a collection of Hellenistic Greek documents found by Leonardo da Pistoia later called Hermetica, and the writings of many of the Neoplatonists, including Porphyry, Iamblichus, and Plotinus.
Among his many students was Francesco Cattani da Diacceto, who was considered by Ficino to be his successor as the head of the Florentine Platonic Academy. Diacceto's student, Giovanni di Bardo Corsi, produced a short biography of Ficino in 1506.
Theology, astrology, and the soul
Though trained as a physician, Ficino became a priest in 1473. In 1474 Ficino completed his treatise on the immortality of the soul, Theologia Platonica de immortalitate animae (Platonic Theology). In the rush of enthusiasm for every rediscovery from Antiquity, he exhibited a great interest in the arts of astrology, which landed him in trouble with the Catholic Church. In 1489 he was accused of heresy before Pope Innocent VIII and was acquitted.
Writing in 1492, Ficino proclaimed:
Ficino's letters, extending over the years 1474–1494, survive and have been published. He wrote De amore (Of Love) in 1484. De vita libri tres (Three books on life), or De triplici vita (The Book of Life), published in 1489, provides a great deal of medical and astrological advice for maintaining health and vigor, as well as espousing the Neoplatonist view of the world's ensoulment and its integration with the human soul:
One metaphor for this integrated "aliveness" is Ficino's astrology. In the Book of Life, he details the interlinks between behavior and consequence, listing the influences that hold sway over a person's destiny.
Medical works
Probably due to early influences from his father, Diotifeci, who was a doctor to Cosimo de' Medici, Ficino published Latin and Italian treatises on medical subjects such as Consiglio contro la pestilenza (Recommendations for the treatment of the plague) and De vita libri tres (Three books on life). His medical works exerted considerable influence on Renaissance physicians such as Paracelsus, with whom he shared the view that microcosm and macrocosm form a unity, interacting through somatic and psychological manifestations whose signatures could be investigated to cure diseases. Those works, which were very popular at the time, dealt with astrological and alchemical concepts. Thus Ficino came under suspicion of heresy, especially after the publication of the third book in 1489, which contained specific instructions on healthful living in a world of demons and other spirits.
Platonic love
Notably, Ficino coined the term Platonic love, which first appeared in his letter to Alamanno Donati in 1476. In 1492, Ficino published Epistulae (Epistles), which contained Platonic love letters, written in Latin, to his academic colleague and life-long friend, Giovanni Cavalcanti, concerning the nature of Platonic love. Importantly, Ficino's letters to Cavalcanti resulted in the popularization of the term Platonic love in Western Europe.
Death
Ficino died on 1 October 1499 at Careggi. In 1521 his memory was honored with a bust sculpted by Andrea Ferrucci, which is located in the south side of the nave in the Cathedral of Santa Maria del Fiore.
Publications
Theologia Platonica de immortalitate animae (Platonic Theology). Harvard University Press, Latin with English translation.
vol. I, 2001.
vol. II, 2002.
vol. III, 2003.
vol. IV, 2004.
vol. V, 2005.
vol. VI with index, 2006.
The Letters of Marsilio Ficino. Shepheard-Walwyn Publishers. English translation with extensive notes; the Language Department of the School of Economic Science.
vol. I, 1975.
vol. II, 1978.
vol. III, 1981.
vol. IV, 1988.
vol. V, 1994.
vol. VI, 1999.
vol. VII, 2003
vol. VIII, 2010
vol. IX, 2013
Commentaries on Plato. I Tatti Renaissance Library. Bilingual, annotated English/Latin editions of Ficino's commentaries on the works of Plato.
vol. I, 2008, Phaedrus, and Ion, tr. by Michael J. B. Allen,
vol. II, 2012, Parmenides, part I, tr. by Maude Vanhaelen,
vol. III, 2012, Parmenides, part II, tr. by Maude Vanhaelen,
Icastes. Marsilio Ficino's Interpretation of Plato's Sophist, edited and translated by Michael J. B. Allen, Berkeley: University of California Press, 1989.
The Book of Life, translated with an introduction by Charles Boer, Dallas: Spring Publications, 1980. ISBN 0-88214-212-7
De vita libri tres (Three Books on Life, 1489) translated by Carol V. Kaske and John R. Clarke, Tempe, Arizona: The Renaissance Society of America, 2002. With notes, commentaries, and Latin text on facing pages.
De religione Christiana et fidei pietate (1475–6), dedicated to Lorenzo de' Medici.
In Epistolas Pauli commentaria, Marsilii Ficini Epistolae (Venice, 1491; Florence, 1497).
Meditations on the Soul: Selected letters of Marsilio Ficino, tr. by the Language Department of the School of Economic Science, London. Rochester, Vermont: Inner Traditions International, 1996. Note for instance, letter 31: A man is not rightly formed who does not delight in harmony, pp. 5–60; letter 9: One can have patience without religion, pp. 16–18; Medicine heals the body, music the spirit, theology the soul, pp. 63–64; letter 77: The good will rule over the stars, p. 166.
Commentary on Plato's Symposium on Love, translated with an introduction and notes by Sears Jayne. Woodstock, Conn.: Spring Publications (1985), 2nd edition, 2000.
Collected works: Opera (Florence, 1491; Venice, 1516; Basel, 1561).
See also
References
Further reading
Allen, Michael J. B., Nuptial Arithmetic: Marsilio Ficino's Commentary on the Fatal Number in Book VIII of Plato's Republic. Berkeley: University of California Press, 1994.
Ernst Cassirer, Paul Oskar Kristeller, John Herman Randall, Jr., The Renaissance Philosophy of Man. The University of Chicago Press (Chicago, 1948.) Marsilio Ficino, Five Questions Concerning the Mind, pp. 193–214.
Anthony Gottlieb, The Dream of Reason: A History of Western Philosophy from the Greeks to the Renaissance (Penguin, London, 2001)
James Heiser, Prisci Theologi and the Hermetic Reformation in the Fifteenth Century (Repristination Press, Malone, Texas, 2011)
Paul Oskar Kristeller, Eight Philosophers of the Italian Renaissance. Stanford University Press (Stanford California, 1964) Chapter 3, "Ficino," pp. 37–53.
Raffini, Christine, "Marsilio Ficino, Pietro Bembo, Baldassare Castiglione: Philosophical, Aesthetic, and Political Approaches in Renaissance Platonism", Renaissance and Baroque Studies and Texts, v.21, Peter Lang Publishing, 1998.
Robb, Nesca A., Neoplatonism of the Italian Renaissance, New York: Octagon Books, Inc., 1968.
Reeser, Todd W. Setting Plato Straight: Translating Ancient Sexuality in the Renaissance. Chicago: UChicagoP, 2016.
Field, Arthur, The Origins of the Platonic Academy of Florence, New Jersey: Princeton, 1988.
Allen, Michael J.B., and Valery Rees, with Martin Davies, eds. Marsilio Ficino: His Theology, His Philosophy, His Legacy. Leiden: E.J. Brill, 2002. A wide range of new essays.
Voss, Angela, Marsilio Ficino, Western Esoteric Masters series. North Atlantic Books, 2006.
External links
Platonis Opera Omnia (Latin)
Marsilio Ficino entry by James G. Snyder in Internet Encyclopedia of Philosophy
Short Biography of Ficino
Catholic Encyclopedia entry
The Influence of Marsilio Ficino
www.ficino.it Website of the International Ficino Society
Online Galleries, History of Science Collections, University of Oklahoma Libraries. High resolution images of works by and/or portraits of Marsilio Ficino in .jpg and .tiff format.
1433 births
1499 deaths
15th-century astrologers
15th-century Italian philosophers
15th-century Italian Roman Catholic priests
15th-century Italian Roman Catholic theologians
15th-century Italian writers
15th-century Latin writers
15th-century non-fiction writers
15th-century philosophers
15th-century translators
Book and manuscript collectors
Catholic philosophers
Christian humanists
Commentators on Plato
Cultural critics
Epistemologists
Greek–Latin translators
Historians of philosophy
Historians of religion
History of astrology
History of philosophy
History of science
Intellectual history
Italian astrologers
Italian essayists
Italian ethicists
Italian letter writers
Italian male non-fiction writers
Italian philosophers
Italian Renaissance humanists
Italian Roman Catholics
Italian translators
Literacy and society theorists
Literary theorists
Medieval letter writers
Metaphilosophers
Metaphysicians
Metaphysics writers
Moral philosophers
Mystics
Neoplatonists
Ontologists
People from the Province of Florence
Perennial philosophy
Philosophers of art
Philosophers of culture
Philosophers of education
Philosophers of ethics and morality
Philosophers of history
Philosophers of literature
Philosophers of love
Philosophers of mind
Philosophers of religion
Philosophers of science
Philosophers of social science
Philosophy writers
Renaissance philosophy
Rhetoric theorists
Rhetoricians
Social commentators
Social critics
Social philosophers
Writers about activism and social change
Writers about religion and science
Morphogenesis
Morphogenesis (from the Greek morphê shape and genesis creation, literally "the generation of form") is the biological process that causes a cell, tissue or organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of tissue growth and patterning of cellular differentiation.
The process controls the organized spatial distribution of cells during the embryonic development of an organism. Morphogenesis can also take place in a mature organism, such as in the normal maintenance of tissue by stem cells or in the regeneration of tissues after damage. Cancer is an example of highly abnormal and pathological tissue morphogenesis. Morphogenesis also describes the development of unicellular life forms that do not have an embryonic stage in their life cycle. Morphogenesis is essential for the evolution of new forms.
Morphogenesis is a mechanical process involving forces that generate mechanical stress, strain, and movement of cells, and can be induced by genetic programs according to the spatial patterning of cells within tissues.
History
Some of the earliest ideas and mathematical descriptions of how physical processes and constraints affect biological growth, and hence natural patterns such as the spirals of phyllotaxis, were written by D'Arcy Wentworth Thompson in his 1917 book On Growth and Form and by Alan Turing in The Chemical Basis of Morphogenesis (1952). Where Thompson explained animal body shapes as being created by varying rates of growth in different directions, for instance to create the spiral shell of a snail, Turing correctly predicted a mechanism of morphogenesis, the diffusion of two different chemical signals, one activating and one deactivating growth, to set up patterns of development, decades before the formation of such patterns was observed. The fuller understanding of the mechanisms involved in actual organisms required the discovery of the structure of DNA in 1953, and the development of molecular biology and biochemistry.
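Turing's mechanism can be demonstrated with a linear stability calculation: a two-species reaction that is stable when well mixed becomes unstable at some spatial wavelength once the inhibitor diffuses much faster than the activator. A sketch in Python (the Jacobian and diffusion constants below are invented illustrative numbers, not taken from Turing's paper):

```python
import math

# Linearized two-species reaction-diffusion system (illustrative numbers).
# A spatial mode with squared wavenumber k2 grows at the largest
# eigenvalue real part of J - k2 * diag(d_act, d_inh).
J = ((1.0, -1.0),
     (2.0, -1.5))  # trace < 0 and det > 0: stable without diffusion

def growth_rate(k2, d_act, d_inh):
    """Largest eigenvalue real part of J - k2 * diag(d_act, d_inh)."""
    fu = J[0][0] - k2 * d_act
    gv = J[1][1] - k2 * d_inh
    tr = fu + gv
    det = fu * gv - J[0][1] * J[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0  # complex pair: real part is tr / 2

k2s = [i / 100.0 for i in range(1, 300)]
equal = max(growth_rate(k2, 1.0, 1.0) for k2 in k2s)     # equal diffusion
unequal = max(growth_rate(k2, 1.0, 10.0) for k2 in k2s)  # fast inhibitor
print(equal < 0.0, unequal > 0.0)  # → True True
```

With equal diffusion every spatial mode decays, while a ten-fold faster inhibitor gives a band of growing wavenumbers — the diffusion-driven instability Turing predicted.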
Genetic and molecular basis
Several types of molecules are important in morphogenesis. Morphogens are soluble molecules that can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors. An important class of molecules involved in morphogenesis are transcription factor proteins that determine the fate of cells by interacting with DNA. These can be coded for by master regulatory genes, and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks. At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration, or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation, clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt, Hedgehog, and ephrins.
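The simplest picture of a morphogen concentration gradient is the "French flag" model (listed under See also): a diffusible signal decays with distance from its source, and cells adopt fates by comparing the local concentration against thresholds. A minimal sketch in Python, with invented parameters:

```python
import math

c0, lam = 1.0, 2.0  # source concentration and decay length (invented units)

def conc(x):
    """Steady-state concentration: source at x = 0, diffusion + degradation."""
    return c0 * math.exp(-x / lam)

def fate(x, hi=0.5, lo=0.2):
    """Cells adopt one of three fates by thresholding the local concentration."""
    c = conc(x)
    if c >= hi:
        return "blue"
    if c >= lo:
        return "white"
    return "red"

print([fate(x) for x in range(10)])
# → ['blue', 'blue', 'white', 'white', 'red', 'red', 'red', 'red', 'red', 'red']
```

Shifting a threshold or the decay length moves the fate boundaries, which is how gradient models encode positional information.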
Cellular basis
At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility. Morphogenesis also involves changes in the cellular structure or how cells interact in tissues. These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred to as cell sorting. Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed to arise from differential cell adhesion by Malcolm Steinberg through his differential adhesion hypothesis. Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition). Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial-mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location. In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall.
Cell-to-cell adhesion
During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture, cells that have the strongest adhesion move to the center of a mixed aggregate of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known, and one major class of these molecules is the cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin.
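The "sorting out" behaviour can be caricatured with a toy lattice model (this is an illustration of the differential-adhesion idea, not Steinberg's actual formulation; all parameters are invented): on a ring of two cell types, neighbour swaps are accepted only when they do not increase the number of heterotypic contacts, so like cells gradually coalesce.

```python
import random

random.seed(0)
cells = [random.choice("AB") for _ in range(40)]  # a ring of two cell types

def boundaries(ring):
    """Count heterotypic (A-B) contacts around the ring."""
    return sum(ring[i] != ring[(i + 1) % len(ring)] for i in range(len(ring)))

start = boundaries(cells)
for _ in range(5000):
    i = random.randrange(len(cells))
    j = (i + 1) % len(cells)
    before = boundaries(cells)
    cells[i], cells[j] = cells[j], cells[i]  # try swapping two neighbours
    if boundaries(cells) > before:           # reject swaps that add contacts
        cells[i], cells[j] = cells[j], cells[i]
print(start, boundaries(cells))  # the contact count never increases
```

Because energy-neutral swaps are accepted, domain walls drift and annihilate, mimicking how stronger homotypic adhesion drives cells of one type into contiguous clusters.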
Extracellular matrix
The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM. Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to microfilament-binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching.
Cell contractility
Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure. Myosin-driven contractility in embryonic tissue morphogenesis is seen during the separation of germ layers in the model organisms Caenorhabditis elegans, Drosophila and zebrafish. There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues that determine cell type and are followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl and the model was later generalized to all of morphogenesis.
Branching morphogenesis
In the development of the lung a bronchus branches into bronchioles forming the respiratory tree. The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli.
Branching morphogenesis is also evident in the ductal formation of the mammary gland. Primitive duct formation begins in development, but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development.
Cancer morphogenesis
Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis. Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling.
Virus morphogenesis
During assembly of the bacteriophage (phage) T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence. Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fibres as detailed by Yap and Rossman.
See also
Bone morphogenetic protein
Collective cell migration
Embryonic development
Pattern formation
Turing pattern
French flag model
Reaction–diffusion system
Neurulation
Gastrulation
Axon guidance
Eye development
Polycystic kidney disease 2
Drosophila embryogenesis
Cytoplasmic determinant
Madin-Darby Canine Kidney cells
Notes
References
Further reading
External links
Artificial Life model of multicellular morphogenesis with autonomously generated gradients for positional information
Turing's theory of morphogenesis validated
Developmental biology
Morphology (biology)
Evolutionary developmental biology
Medium
Medium may refer to:
Science and technology
Aviation
Medium bomber, a class of war plane
Tecma Medium, a French hang glider design
Communication
Media (communication), tools used to store and deliver information or data
Medium of instruction, a language or other tool used to educate, train, or instruct
Wave physics
Transmission medium, in physics and telecommunications, any material substance which can propagate waves or energy
Active laser medium (also called gain medium or lasing medium), a quantum system that allows amplification of power (gain) of waves passing through (usually by stimulated emission)
Optical medium, in physics, a material through which electromagnetic waves propagate
Excitable medium, a non-linear dynamic system which has the capacity to propagate a wave
Other uses in science and technology
Data storage medium, a storage container in computing
Growth medium (or culture medium), in biotechnology, an object in which microorganisms or cells experience growth
Interplanetary medium, in astronomy, material which fills the solar system
Interstellar medium, in astronomy, the matter and energy content that exists between the stars within a galaxy
Porous medium, in engineering and earth sciences, a material that allows fluid to pass through it, such as sand
Processing medium, in industrial engineering, a material that plays a role in manufacturing processes
Arts, entertainment, and media
Films
The Medium (1921 film), a German silent film
The Medium (1951 film), a film version of the opera directed by Menotti
The Medium (1960 film), an Australian television play
The Medium (1992 film), an English film from Singapore
The Medium (2021 film), a Thai film
Periodicals
The Medium (Rutgers), an entertainment weekly at Rutgers University
The Medium (University of Toronto Mississauga), a student newspaper at the University of Toronto Mississauga
Other arts, entertainment, and media
List of art media (plural: media), materials and techniques used by an artist to produce a work
Medium (TV series), an American television series starring Patricia Arquette about a medium (psychic intermediary) working as a consultant for a district attorney's office
Medium (website), a publishing platform
Medium Productions, a record label
The Medium, a 1946 opera by Gian-Carlo Menotti
Medium (band), an American rock band
The Medium (video game), a 2021 psychological horror video game developed by Bloober Team
People
Medium, a practitioner of mediumship, the practice of purportedly mediating communication between spirits of the dead and living human beings
Tau (rapper) (born 1986), Polish rapper formerly known as "Medium"
Other uses
Medium, a doneness (gradation of cooked meat)
Medium pace bowling, another name for fast bowling in the sport of cricket
See also
Channel (disambiguation)
Conduit (disambiguation)
Media (disambiguation)
Median (disambiguation)
Meidum, a pyramid in Egypt
MCA
MCA may refer to:
Astronomy
Mars-crossing asteroid, an asteroid whose orbit crosses that of Mars
Aviation
Minimum crossing altitude, a minimum obstacle crossing altitude for fixes on published airways
Medium Combat Aircraft, a 5th generation fighter aircraft in India's HAL AMCA (Advanced Medium Combat Aircraft) program
Macenta Airport, Guinea (by IATA code)
Biology and chemistry
MacConkey agar, a selective growth medium for bacteria
Monochloroacetic acid, carboxylic acid, manufactured by chlorinating acetic acid
Methylcholanthrene, a carcinogen
Methyl cyanoacrylate, an organic compound
Metabolic control analysis, analysing how the control of fluxes and intermediate concentrations in a metabolic pathway is distributed
Middle cerebral artery, one of the three major blood supplies to the brain
Climate
Medieval Climatic Anomaly (Medieval Warm Period, also Medieval Climate Optimum), a notably warm climatic period in the North Atlantic region from about 950 to 1250.
Companies
MCA Inc., a now defunct company (originally called Music Corporation of America) and its subsidiary companies:
MCA Records
MCA Nashville Records
MCA Home Video, former name of Universal Studios Home Entertainment
MCA Music Inc. (Philippines), a Philippine branch of Universal Music Group which uses the MCA brand due to a trademark issue
Maubeuge Construction Automobile (MCA), a subsidiary of French car manufacturer Renault
Minato Communications Association, a former company name of the Japan Electronics and Information Technology Industries Association
Education
Degrees
Master in Customs Administration, a trade-related graduate degree offered in PMI Colleges (Philippines)
Master of Computer Applications, a three-year master's (postgraduate) degree in Computer Science/Applied Computer Science offered in India
Educational institutions
Maranatha Christian Academy, the former name of National Christian Life College, Marikina, Philippines
Marist College Ashgrove, an Australian School
McIntosh County Academy, a high school in McIntosh County, Georgia, United States
Memphis College of Art, an art school in Tennessee, United States
Morrison Christian Academy, an American school in Taiwan
Professional courses
Microsoft Certified Architect, a certification available from Microsoft
Minnesota Comprehensive Assessments—Series II, a standardized test in Minnesota
Legal
Depository Institutions Deregulation and Monetary Control Act, a US financial statute passed in 1980
Mental Capacity Act 2005, an Act of the Parliament of the United Kingdom applying to England and Wales
Military Commissions Act of 2006, US legislation
Organizations
Maharashtra Chess Association
Malaysian Chinese Association, a political party in Malaysia
Maritime and Coastguard Agency, an agency of the United Kingdom Government
Medal Collectors of America
Medicines Control Agency, which merged with the Medical Devices Agency to become the Medicines and Healthcare products Regulatory Agency
Metal Construction Association
Millennium Challenge Account, a U.S. program for aid to developing countries
Ministry of Corporate Affairs, an Indian government ministry
MultiCultural Aotearoa, a New Zealand political action group
Mumbai Cricket Association, ruling body for cricket in Mumbai
Multicore Association, an industry association bringing together companies and universities interested in multicore computing research
Museum of Contemporary Art (disambiguation), numerous museums around the world
People
Adam Yauch (1964–2012), a.k.a. "MCA" of the Beastie Boys
Michiel van den Bos (born 1975), a.k.a. "M.C.A.", Dutch composer
Chris Avellone (born 1971), a.k.a. "MCA", Video game designer
Sports
MC Alger, a football club based in Algiers, Algeria
Manitoba Curling Association, Manitoba, Canada
Maharashtra Cricket Association, Pune, India
Maharashtra Cricket Association Stadium, Pune, India
Mumbai Cricket Association, Mumbai, India
Technology
Machine Check Architecture, a method for a CPU to report hardware errors to an operating system
Maximum credible accident, a postulated scenario that a nuclear facility must be able to withstand
Micro Channel architecture, a type of computer bus
Mitsubishi MCA, an emissions control approach for gasoline-powered vehicles during the 1970s
Movable cellular automaton
Multichannel analyzer, a gamma ray spectroscopy system for measuring counts at different voltage levels
Other uses
Medieval Climate Anomaly
Metropolitan Church Association, Methodist denomination in the holiness movement
Motor Coach Age, the magazine of the Motor Bus Society
Multiple correspondence analysis
The Maká language (ISO 639-3 Code)
Magical organization
A magical organization or magical order is an organization created for the practice of ceremonial or other forms of occult magic or to further the knowledge of magic among its members. Magical organizations can include hermetic orders, Wiccan covens or Wiccan circles, esoteric societies, arcane colleges, witches' covens, and other groups which may use different terminology and similar though diverse practices.
It is also sometimes difficult to determine whether an organization sincerely practices magic. For example, The Satanic Temple has been described as a human rights lobby organization posing as a magical organization.
19th century
The Hermetic Order of the Golden Dawn has been credited with a vast revival of occult literature and practices and was founded in 1887 or 1888 by William Wynn Westcott, Samuel Liddell MacGregor Mathers and William Robert Woodman. The teachings of the Order include ceremonial magic, Enochian magic, Christian mysticism, Qabalah, Hermeticism, the paganism of ancient Egypt, theurgy, and alchemy.
Ordo Aurum Solis, founded in 1897, is a Western mystery tradition group teaching Hermetic Qabalah. Its rituals and system are different from the more popular Golden Dawn, because the group follows the ogdoadic tradition instead of Rosicrucianism.
Ordo Templi Orientis (OTO) was founded by Carl Kellner in 1895.
20th century
A∴A∴ was created in 1907 by Aleister Crowley and teaches "magick" and Thelema. Thelema is a religion shared by several occult organizations. The main text of Thelema is The Book of the Law. Ordo Templi Orientis was reworked by Aleister Crowley after he took control of the Order in the early 1920s. Ecclesia Gnostica Catholica functions as the ecclesiastical arm of OTO.
Builders of the Adytum (or B.O.T.A.) was created in 1922 by Paul Foster Case and was extended by Dr. Ann Davies. It teaches Hermetic Qabalah, astrology and occult tarot.
In 1954, Kenneth Grant began the work of founding the New Isis Lodge, which became operational in 1955. This became the Typhonian Ordo Templi Orientis (TOTO), which was eventually renamed the Typhonian Order.
In 1976, James Lees founded the order O∴A∴A∴ in order to assist others in the pursuit of their own spiritual paths. The work of this order is based in English Qaballa.
During the last two decades of the 20th century, several organizations practicing chaos magic were founded. These include the Illuminates of Thanateros and Thee Temple ov Psychick Youth. These groups rely on the use of sigils. Their main texts include Liber Null (1978) and Psychonaut (1982), now published as a single book.
See also
Coven
Fraternity and sorority
List of Neopagan movements
List of occultists
Religious organization
Rosicrucianism
Secret society
UFO religion
Notes
References
Citations
Works cited
Lists of organizations
Muhammad ibn Abd al-Wahhab

Muḥammad ibn ‘Abd al-Wahhāb at-Tamīmī (1703–1792) was an Islamic scholar, religious leader, reformer, activist, and theologian from Najd in central Arabia, considered the eponymous founder of the Wahhabi movement. His prominent students included his sons Ḥusayn, ʿAbdullāh, ʿAlī, and Ibrāhīm, his grandson ʿAbdur-Raḥman ibn Ḥasan, his son-in-law ʿAbdul-ʿAzīz ibn Muḥammad ibn Saʿūd, Ḥamād ibn Nāṣir ibn Muʿammar, and Ḥusayn āl-Ghannām.
The label "Wahhabi" is not claimed by his followers but rather employed by Western scholars as well as his critics. Born to a family of jurists, Ibn ʿAbd al-Wahhab's early education consisted of learning a fairly standard curriculum of orthodox jurisprudence according to the Hanbali school of Islamic law, which was the school most prevalent in his area of birth. He promoted strict adherence to traditional Islamic law, proclaiming the necessity of returning directly to the Quran and Hadith rather than relying on medieval interpretations, and insisted that every Muslim male and female personally read and study the Quran. He opposed taqlid (blind following) and called for the use of ijtihad (independent legal reasoning through research of scripture). He had initial rudimentary training in classical Sunni Muslim tradition, Ibn ʿAbd al-Wahhab gradually became opposed to many popular, yet contested, religious practices such as the visitation to and veneration of the shrines and tombs of Muslim saints, which he felt amounted to heretical religious innovation or even idolatry. His call for social reform in society was based on the key doctrine of tawhid (oneness of God).
Despite his teachings being rejected and opposed by many of the most notable Sunni Muslim scholars of the period, including his own father and brother, Ibn ʿAbd al-Wahhab charted a religio-political pact with Muhammad bin Saud to help him to establish the Emirate of Diriyah, the first Saudi state, and began a dynastic alliance and power-sharing arrangement between their families which continues to the present day in the Kingdom of Saudi Arabia. The Al ash-Sheikh, Saudi Arabia's leading religious family, are the descendants of Ibn ʿAbd al-Wahhab, and have historically led the ulama in the Saudi state, dominating the state's clerical institutions.
Early years
Background
Muhammad Ibn ʿAbd al-Wahhab is generally acknowledged to have been born in 1703 into the sedentary and impoverished Arab clan of Banu Tamim in 'Uyayna, a village in the Najd region of central Arabia. Before the emergence of the Wahhabi movement, there was a very limited history of Islamic education in the area. For this reason, Ibn ʿAbd al-Wahhab had modest access to Islamic education during his youth. Despite this, the area had nevertheless produced several notable jurists of the Hanbali school of orthodox Sunni jurisprudence, which was the school of law most prominently practiced in the area. In fact, Ibn ʿAbd-al-Wahhab's own family "had produced several doctors of the school," with his father, ʿAbd al-Wahhāb, having been the Hanbali jurisconsult of the Najd and his grandfather, Sulaymān, having been a judge of Hanbali law.
Early studies
Ibn ʿAbd-al-Wahhab's early education was provided by his father, and consisted of learning the Quran by heart and studying a rudimentary level of Hanbali jurisprudence and Islamic theology as outlined in the works of Ibn Qudamah (d. 1223), one of the most influential medieval representatives of the Hanbali school, whose works were regarded "as having great authority" in the Najd. The affirmation of Islamic sainthood and the ability of saints to perform miracles (karamat) by the grace of God had become a major aspect of Sunni Muslim belief throughout the Islamic world, agreed upon by the majority of classical Islamic scholars. Ibn ʿAbd-al-Wahhab had encountered various excessive beliefs and practices associated with saint-veneration and saint-cults which were prevalent in his area. He may have left Najd to study elsewhere in order to see whether such beliefs and rituals were as popular in neighboring parts of the Muslim world, or simply because his home town offered inadequate educational resources; even today, his reasons for leaving Najd remain unclear.
Pilgrimage to Mecca
After leaving 'Uyayna, Ibn ʿAbd al-Wahhab performed the Greater Pilgrimage in Mecca, where the scholars appear to have held opinions and espoused teachings that were unpalatable to him. After this, he went to Medina, the stay at which seems to have been "decisive in shaping the later direction of his thought." In Medina, he met a Hanbali theologian from Najd named ʿAbd Allāh ibn Ibrāhīm al-Najdī, who had been a supporter of the neo-Hanbali works of Ibn Taymiyyah (d. 1328), the controversial medieval scholar whose teachings had been considered heterodox and misguided on several important points by the vast majority of Sunni Muslim scholars up to that point in history.
Tutelage under Al-Sindhi
Ibn ʿAbd al-Wahhab's teacher, 'Abdallah ibn Ibrahim ibn Sayf, introduced the relatively young man to Mohammad Hayya Al-Sindhi in Medina, who belonged to the Naqshbandi order (tariqa) of Sufism, and recommended him as a student. Muhammad Ibn ʿAbd-al-Wahhab and al-Sindhi became very close, and Ibn ʿAbd-al-Wahhab stayed with him for some time. Muhammad Hayya taught Muhammad Ibn ʿAbd-al-Wahhab to reject popular religious practices associated with walis and their tombs. He also encouraged him to reject rigid imitation (Taqlid) of medieval legal commentaries and develop individual research of scriptures (Ijtihad). Influenced by Al-Sindi's teachings, Ibn 'Abd al-Wahhab became critical of the established Madh'hab system, prompting him to disregard the instruments of Usul al-Fiqh in his intellectual approach. Ibn 'Abd al-Wahhab rarely made use of Fiqh (Islamic jurisprudence) and various legal opinions in his writings, by and large forming views based on his direct understanding of Scriptures.
Apart from his emphasis on hadith studies, aversion to the madhhab system and disregard for technical juristic discussions involving legal principles, Ibn ‘Abd al-Wahhāb’s views on ziyārah (visitations to the shrines of Awliyaa) were also shaped by Al-Sindhi. Sindi encouraged his student to reject folk practices associated with graves and saints. Various themes in Al-Sindi's writings, such as his opposition to erecting tombs and drawing human images, would be revived later by the Wahhabi movement. Sindi instilled in Ibn 'Abd al-Wahhab the belief that practices like beseeching the dead saints constituted apostasy and resembled the customs of the people of Jahiliyya (pre-Islamic era). The Najdi historian 'Uthman Ibn Bishr (d. 1288 A.H. / 1871–2 C.E.) reports a significant encounter between the young Ibn 'Abd al-Wahhab and Al-Sindhi:

"... one day Shaykh Muḥammad [Ibn ‘Abdi’l-Wahhāb] stood by the chamber of the Prophet where people were calling [upon him or supplicating] and seeking help by the Prophet’s chamber, blessings and peace be upon him. He then saw Muḥammad Ḥayāt [al Sindī] and came to him. The shaykh [Ibn ‘Abdi’l-Wahhāb] asked, “What do you say about them?” He [al-Sindī] said, “Verily that in which they are engaged shall be destroyed and their acts are invalid.”"
Journey to Basra
Following his early education in Medina, Ibn ʿAbd-al-Wahhab traveled outside of the Arabian Peninsula, venturing first to Basra which was still an active center of Islamic culture. During his stay in Basra, Ibn 'Abd al-Wahhab studied Hadith and Fiqh under the Islamic scholar Muhammad al-Majmu'i. In Basra, Ibn 'Abd al-Wahhab came into contact with Shi'is and would write a treatise repudiating the theological doctrines of Rafidah, an extreme sect of Shiism.
Early preaching
His leave from Basra marked the end of his education, and by the time of his return to 'Uyayna, Ibn 'Abd al-Wahhab had mastered various religious disciplines such as Islamic Fiqh (jurisprudence), theology, hadith sciences and Tasawwuf. His exposure to various practices centered around the cult of saints and grave veneration would eventually lead Ibn 'Abd al-Wahhab to grow critical of superstitious Sufi accretions and practices. Rather than targeting “Sufism” as a phenomenon or a group, Ibn 'Abd al-Wahhab denounced particular practices which he considered sinful.
As a gifted communicator with a talent for breaking down his ideas into shorter units, Ibn 'Abd al-Wahhab entitled his treatises with terms such as qawāʿid (“principles”), masāʾil (“matters”), kalimāt (“phrases”) or uṣūl (“foundations”), simplifying his texts point by point for mass reading. Calling upon the people to pursue religious revival (tajdid) based on the founding texts and the authoritative practices of the first generations of Muslims, Ibn 'Abd al-Wahhab declared:

"I do not - God be blessed - conform to any particular sufi order or faqih, nor follow the course of any speculative theologian (mutakalim) or any other Imam for that matter, not even such dignitaries as ibn al-Qayyim, al-Dhahabi, or ibn Kathir. I summon you only to God, and Only Him, as well as to observe the path laid by His Prophet, God’s messenger."

Ibn ʿAbd al-Wahhab's call gradually began to attract followers, including the ruler of 'Uyayna, Uthman ibn Mu'ammar. Upon returning to Huraymila, where his father had settled, Ibn ʿAbd al-Wahhab wrote his first work on the Unity of God. With Ibn Mu'ammar, Ibn ʿAbd al-Wahhab agreed to support Ibn Mu'ammar's political ambitions to expand his rule "over Najd and possibly beyond", in exchange for the ruler's support for Ibn ʿAbd al-Wahhab's religious teachings. Initially, he condemned popular folk practices prevalent in Najd on doctrinal grounds, without seeking to enforce his views in practical terms. Starting from 1742, Ibn 'Abd al-Wahhab shifted towards an activist stance and began to implement his reformist ideas. First, he persuaded Ibn Mu'ammar to help him level the grave of Zayd ibn al-Khattab, a companion of Muhammad, whose grave was revered by locals. Second, he ordered the cutting down of trees considered sacred by locals, cutting down "the most glorified of all of the trees" himself. Third, he organized the stoning of a woman who confessed to having committed adultery.
These actions gained the attention of Sulaiman ibn Muhammad ibn Ghurayr of the tribe of Bani Khalid, the chief of Al-Hasa and Qatif, who held substantial influence in Najd. Ibn Ghurayr threatened Ibn Mu'ammar by denying him the ability to collect a land tax for some properties that Ibn Mu'ammar owned in Al-Hasa if he did not kill or drive away Ibn ʿAbd al-Wahhab. Consequently, Ibn Mu'ammar forced Ibn ʿAbd al-Wahhab to leave.
The early Wahhabis had been protected by Ibn Mu'ammar in Uyayna, despite being persecuted in other settlements. As soon as Ibn Mu'ammar disowned them, Wahhabis were subject to excommunication (Takfir), exposing them to loss of life and property. This experience of suffering reminded them of the Mihna against Ahmad Ibn Hanbal and his followers, and shaped the collective Wahhabi memory. As late as 1749, the sharif of Mecca imprisoned those Wahhabis who went to Mecca to perform the Hajj (annual pilgrimage).
Emergence of Saudi state
Pact with Muhammad bin Saud
Upon his expulsion from 'Uyayna, Ibn ʿAbd al-Wahhab was invited to settle in neighboring Diriyah by its ruler Muhammad ibn Saud Al Muqrin. After some time in Diriyah, Ibn ʿAbd al-Wahhab concluded his second and more successful agreement with a ruler. Ibn ʿAbd al-Wahhab and Muhammad bin Saud agreed that, together, they would bring the Arabs of the peninsula back to the "true" principles of Islam as they saw it. According to the anonymous author of Lam al-Shihab (Brilliance of the Meteor), when they first met, Ibn Saud declared:

"This oasis is yours, do not fear your enemies. By the name of God, if all Nejd was summoned to throw you out, we will never agree to expel you."
Muhammad ibn ʿAbd al-Wahhab replied:

"You are the settlement's chief and wise man. I want you to grant me an oath that you will perform jihad against the unbelievers. In return, you will be imam, leader of the Muslim community and I will be leader in religious matters."
The agreement was confirmed with a mutual oath of loyalty (bay'ah) in 1744. Once Al-Sa'ud made Dir'iyya a safe haven, Wahhabis from other towns took refuge there. These included dissenters from the Ibn Mu'ammar clan who had sworn allegiance to Ibn 'Abd al-Wahhab. The nucleus of Ibn 'Abd al-Wahhab's supporters all across Najd retreated to Dir'iyyah and formed the vanguard of the insurgency launched by Al-Saud against other towns.
Having started his career as a lone activist, Ibn 'Abd al-Wahhab would become the spiritual guide of the nascent Emirate of Muhammad ibn Saud Al-Muqrin. Ibn 'Abd al-Wahhab would be responsible for religious matters and Ibn Saud in charge of political and military issues. This agreement became a "mutual support pact" and power-sharing arrangement between the Aal Saud family and the Aal ash-Sheikh and followers of Ibn ʿAbd al-Wahhab, which has remained in place for nearly 300 years, providing the ideological impetus to Saudi expansion. Reviving the teachings of Ibn Taymiyya, the Muwaḥḥidūn (Unitarian) movement emphasized strict adherence to Qur'an and Sunnah, while simultaneously championing the conception of an Islamic state based on the model of the early Muslim community in Medina. Meanwhile, its Muslim and Western opponents derogatorily labelled the movement the "Wahhābiyyah" (anglicised as "Wahhabism").
Emirate of Diriyah (First Saudi State)
The 1744 pact between Muhammad ibn Saud and Muhammad ibn ʿAbd al-Wahhab marked the emergence of the first Saudi state, the Emirate of Diriyah. By offering the Aal-Saud a clearly defined religious mission, the alliance provided the ideological impetus to Saudi expansion. Drawing on his bitter experiences in 'Uyaynah, Ibn 'Abd al-Wahhab had understood the necessity of backing from a strong Islamic political entity to transform the local socio-religious status quo and safeguard Wahhabism’s territorial base from external pressure. After consolidating his position in Diriyah, he wrote to the rulers and clerics of other towns, appealing to them to embrace his doctrines. While some heeded his calls, others rejected them, accusing him of ignorance or sorcery.
War with Riyadh (1746-1773)
Realising the significance of efficient religious preaching (da'wa), Ibn 'Abd al-Wahhab called upon his students to master the path of reasoning and proselytising over warfare to convince other Muslims of their reformist endeavour. Between 1744 and 1746, Ibn 'Abd al-Wahhab's preaching continued in the same non-violent manner as before and spread widely among the people of Najd. Rulers of various towns across Najd pledged their allegiance to Ibn Suʿūd. This situation changed drastically around 1158/1746, when the powerful anti-Wahhabi chieftain of Riyadh, Dahhām ibn Dawwās (fl. 1187/1773), attacked the town of Manfuha, which had pledged allegiance to Diriyah. This sparked a nearly thirty-year war between Diriyah and Riyadh, which lasted, barring some interruptions, until 1187/1773. First conquering Najd, Muhammad ibn Saud's forces expanded Wahhabi influence to most of the present-day territory of Saudi Arabia, eradicating various popular practices they viewed as akin to polytheism and propagating the doctrines of ʿAbd al-Wahhab.
Muhammad Ibn ʿAbd al-Wahhāb maintained that the military campaigns of the Emirate of Dirʿiyya were strictly defensive and rebuked his opponents as being the first to initiate Takfir (excommunication). Ibn 'Abd al-Wahhab defined jihad as an activity that must have a valid religious justification, that can only be declared by an Imam, and whose purpose must be strictly defensive in nature. Justifying the Wahhabi military campaigns as defensive operations against their enemies, Ibn 'Abd al-Wahhab asserts:

"As for warfare, until today, we did not fight anyone, except in defense of our lives and honor. They came to us in our area and did not spare any effort in fighting us. We only initiated fighting against some of them in retaliation for their continued aggression, [The recompense for an evil is an evil like thereof] (42:40)... they are the ones who started declaring us to be unbelievers and fighting us"
Rebellion in Huraymila (1752-1755)
In 1753–4, the Wahhabis were confronted by an alarming number of towns renouncing allegiance and aligning with their opponents. Most prominent amongst these was the town of Huraymila, which had pledged allegiance to Dir'iyah in 1747. However, by 1752, a group of rebels encouraged by Ibn ʿAbd al-Wahhāb’s brother, Sulaymān, had initiated a coup in Huraymila and installed a new ruler who threatened to topple the Wahhābī order. A fierce war of unprecedented magnitude began between Diriyah and Huraymila. Ibn ‘Abd al-Wahhab held a convocation of Wahhabis from all the settlements across Najd. Reviewing the recent desertions and defeats, he encouraged them to hold fast to their faith and recommit to the struggle.
The ensuing battles and the re-capture of Huraymila in 1168/1755 constituted a significant development in the Wahhabi movement's expansionist stage. Abd al-Azeez, the son of Muhammad ibn Saud, had emerged as the principal leader of the Wahhabi military operations. Alongside a force of 800 men, accompanied by an additional 200 under the command of the deposed ruler of Huraymila, Abd al-Azeez was able to subdue the rebels. More significantly, the rationale behind the campaign was based on Ibn ʿAbd al-Wahhāb’s newly written epistle Mufīd al-mustafīd, which marked a shift from the earlier posture of defensive jihad toward the justification of a more aggressive one. In the treatise, compiled to justify the jihad pursued by Dir'iyyah and its allies, Ibn 'Abd al-Wahhab excommunicated the inhabitants of Huraymila and declared it a duty of Wahhabi soldiers to fight them as apostates. He also quoted several Qur'anic verses indicative of offensive forms of jihād.
Capture of Riyadh and Retirement (1773)
The last serious threat to the Saudi state came in 1764–1765. During this period, the Ismāʿīlī Shīʿa of Najrān, alongside their allied tribe of 'Ujman, combined forces to inflict a major defeat on the Saudis at the Battle of Hair in October 1764, killing around 500 men. The anti-Wahhabi forces allied with the invaders and participated in the combined siege of Dirʿiyya. However, the defenders were able to hold onto their town due to the unexpected departure of the Najranis after a truce concluded with the Saudis. A decade later, in 1773–4, 'Abd al-Azeez conquered Riyadh and secured the entirety of al-ʿĀriḍ after its chieftain Dahham ibn Dawwas fled. By 1776/7, Sulayman ibn Abd al-Wahhab had surrendered. The capture of Riyadh marked the point at which Muhammad Ibn ‘Abd al-Wahhab delegated all affairs of governing to 'Abd al-Azeez, withdrew from public life and devoted himself to teaching, preaching and worshipping. Meanwhile, 'Abd al-Azeez proceeded with his military campaigns, conquering towns like Sudayr (1196/1781) and al-Kharj (1199/1784). Opposition in towns to the north like al-Qaṣīm was stamped out by 1196/1781, and the rebels in ʿUnayza were subdued by 1202/1787. Further north, the town of Ḥāʾil was captured in 1201/1786, and by the 1780s the Wahhābīs were able to establish their jurisdiction over most of Najd.
Death
After his departure from public affairs, Ibn 'Abd al-Wahhab remained a consultant to 'Abd al-Azeez, who followed his recommendations. However, he withdrew from any active military and political activities of the Emirate and devoted himself to educational endeavours, preaching and worship. His last major activity in state affairs was in 1202/1787, when he called on the people to give bay'ah (allegiance) to Suʿūd, ʿAbd al-ʿAzīz’s son, as heir apparent. Muhammad ibn 'Abd al-Wahhab fell ill and died in the lunar month of Dhul-Qa'dah 1206 A.H. (June 1792 C.E.), at the age of eighty-nine. He was buried in an unmarked grave at al-Turayf in al-Dir‘iyya.
Ibn 'Abd al-Wahhab left behind four daughters and six sons, many of whom became clerics of greater or lesser distinction. A clear separation of roles between the Saudi family and the Wahhabi clerics had begun to emerge during the interval between Ibn 'Abd al-Wahhab's retirement from front-line politics in 1773 and his death in 1792. Although the Aal Ash-Shaykhs did not engage in politics, they comprised a significant part of the designating group of notables who gave allegiance (bay'ah) to a new ruler and acclaimed his accession. After Ibn 'Abd al-Wahhab, his son 'Abd Allah, recognised by his critics as moderate and fair-minded, would succeed him as the dominant Wahhabi cleric. The Wahhabi cause would flourish for more than two decades after Ibn 'Abd al-Wahhab's death; until the defeat of the First Saudi State in the Ottoman-Saudi war. 'Abd Allah would spend his last days as an exile in Cairo, having witnessed the destruction of Dirʿiyya and the execution of his talented son Sulayman ibn 'Abd Allah in 1818.
Family
According to academic publications such as the Encyclopædia Britannica, while in Baghdad Ibn ʿAbd al-Wahhab married an affluent woman. When she died, he inherited her property and wealth. Muhammad ibn ʿAbd al-Wahhab had six sons: Hussain (died 1809), Abdullah (1751–1829), Hassan, Ali (died 1829), Ibrahim and Abdulaziz, who died in his youth. Four of his sons, Hussain, Abdullah, Ali and Ibrahim, established religious schools close to their home in Diriyah and taught young students from Yemen, Oman, Najd and other parts of Arabia at their majlis. One of their pupils was Husayn Ibn Abu Bakr Ibn Ghannam, a well-known Hanbali scholar. Al-Uthaymin, however, writes that Ibn Ghannam was a Maliki scholar from al-Ahsa.
The descendants of Ibn ʿAbd al-Wahhab, the Al ash-Sheikh, have historically led the ulama in the Saudi state, dominating the state's religious institutions. Within Saudi Arabia, the family is held in prestige similar to the Saudi royal family, with whom they share power, and has included several religious scholars and officials. The arrangement between the two families is based on the Al Saud maintaining the Al ash-Sheikh's authority in religious matters and upholding and propagating the Salafi doctrine. In return, the Al ash-Sheikh support the Al Saud's political authority thereby using its religious-moral authority to legitimize the royal family's rule.
Views
On Tawhid
Muhammad Ibn ʿAbd al-Wahhab sought to revive and purify Islam from what he perceived as non-Islamic popular religious beliefs and practices by returning to what, he believed, were the fundamental principles of the Islamic religion. His works were generally short, full of quotations from the Qur'an and Hadith, such as his main and foremost theological treatise, Kitāb at-Tawḥīd ("The Book of Oneness"). He taught that the primary doctrine of Islam was the uniqueness and oneness of God (tawhid), and denounced those religious beliefs and practices widespread amongst the people of Najd. He believed that much of Najd had descended into superstitious folk religion akin to that of the period of Jahiliyya (pre-Islamic era) and denounced their beliefs as polytheism (shirk). He associated such practices with the culture of Taqlid (imitation of established customs) cherished by the pagan cults of the Jahiliyya era. Based on the doctrine of Tawhid espoused in Kitab al-Tawhid, the followers of Ibn 'Abd al-Wahhab referred to themselves as "Al-Muwahhidun" (Unitarians).
The "core" of Ibn ʿAbd al-Wahhab's teaching is found in Kitāb at-Tawḥīd, a theological treatise which draws from material in the Qur'an and the recorded doings and sayings of the Islamic prophet Muhammad in the Hadith literature. It preaches that worship in Islam includes conventional acts of worship such as the five daily prayers (Salat); fasting (Sawm); supplication (Dua); seeking protection or refuge (Istia'dha); seeking help (Ist'ana and Istighatha) of Allah.
Traditionally, many Muslims throughout history had held the view that declaring the testimony of faith is sufficient to become a Muslim. Ibn ʿAbd al-Wahhab did not agree with this. He asserted that an individual who believed in the existence of intercessors or intermediaries alongside God was guilty of shirk (polytheism or idolatry). This was the major difference between him and his opponents, and led him to declare his adversaries who engaged in these religious practices apostates (a practice known in Islamic jurisprudence as takfir) and idolaters (mushrikin). Another major doctrine of Ibn 'Abd al-Wahhab was the concept known as Al-'Udhr bil Jahl (excuse of ignorance), wherein any ignorant person unaware of core Islamic teachings is excused by default until clarification. As per this doctrine, those who fell into beliefs of shirk (polytheism) or kufr (disbelief) cannot be excommunicated until they have direct access to Scriptural evidence and get the opportunity to understand their mistakes and retract. If not, their affairs are to be delegated only to God. Hence, he believed that education and dialogue were the only effective path for the successful implementation of reforms.
Rejecting the allegations of detractors who accused him of excommunicating whoever did not follow his doctrines, Ibn 'Abd al-Wahhab maintained that he only advocated orthodox Sunni doctrines. In a letter addressed to the Iraqi scholar Abdul Rahman Al-Suwaidi, who had sought clarification over the rumours spread against his mission, Ibn 'Abd al-Wahhab explains:

"I am a man of social standing in my village and the people respect my word. This led some chieftains to reject my call, because I called them to what contradicts the traditions they were raised to uphold.... the chieftains directed their criticism and enmity towards our enjoining Tawheed and forbidding Shirk... Among the false accusations they propagated, ... is the claim that I accuse all Muslims, except my followers, of being Kuffar (Unbelievers)... This is truly incredible. How can any sane person accept such accusations? Would a Muslim say these things? I declare that I renounce, before Allah, these statements that only a mad person would utter. In short, what I was accused of calling to, other than enjoining Tawheed and forbidding Shirk, is all false."
On Taqlid
Muhammad ibn 'Abd al-Wahhab was highly critical of the practice of Taqlid (blind following), which, in his view, deviated people from the Qur'an and Sunnah. He also advocated Ijtihad by qualified scholars in accordance with the teachings of the Qur'an and Hadith. In his legal writings, Ibn 'Abd al-Wahhab referred to a number of sources: the Qur'an, hadith, the opinions of the companions and the Salaf, as well as the treatises of the four schools of thought. Ibn 'Abd al-Wahhab argued that the Qur'an condemned blind emulation of forefathers and nowhere stipulated scholarly credentials for a person to refer to it directly. His advocacy of Ijtihad and harsh denunciation of Taqlid aroused widespread condemnation from the Sufi orthodoxy in Najd and beyond, compelling him to express many of his legal verdicts (fatwas) discreetly, using convincing juristic terms. He differed from the Hanbali school on various points of law and in some cases also departed from the positions of all four schools. In his treatise Usul al-Sittah (Six Foundations), Ibn 'Abd al-Wahhab vehemently rebuked his detractors for raising the description of Mujtahids to what he viewed as humanly unattainable levels. He condemned the establishment clergy as a class of oppressors who ran a "tyranny of worldly possessions" by exploiting the masses to make money out of their religious activities. The teachings of the Medinan hadith scholar Muhammad Hayat as-Sindi highly influenced the anti-taqlid views of Ibn 'Abd al-Wahhab.
Muhammad Ibn Abd al-Wahhab opposed partisanship to madhabs (legal schools) and did not consider it obligatory to follow a particular madhab. Rather, in his view, the obligation is to follow the Qur'an and the Sunnah. Referring to the classical scholars Ibn Taymiyya and Ibn Qayyim, Ibn 'Abd al-Wahhab condemned the popular practice prevalent amongst his contemporary scholars of blindly following latter-day legal works and urged Muslims to draw directly from the Qur'an and Sunnah. He viewed it as a duty upon every Muslim, layman and scholar, male and female, to seek knowledge directly from the sources. Radically departing from both Ibn Taymiyya and Ibn Qayyim, Ibn 'Abd al-Wahhab viewed the entirety of the prevalent madhab system of jurisprudence (Fiqh) as a fundamentally corrupt institution, sought a radical reform of scholarly institutions, and preached the obligation of all Muslims to refer directly to the foundational texts of revelation. He advocated a form of scholarly authority based upon the revival of the practice of ittiba, i.e., laymen following the scholars only after seeking evidence. The prevalent legal system was, in his view, a "factory for the production of slavish emulators" symbolic of Muslim decline.
On the Nature of Nubuwwah (Prophethood)
Muhammad Ibn 'Abd al-Wahhab elucidated his concept on the nature of Prophethood in his book Mukhtaṣar sīrat al-Rasūl ("Abridgement of the life of the Prophet"), an extensive biographical work on Prophet Muhammad. Mukhtaṣar was written with the purpose of explaining Muhammad's role in universal history by undermining certain prophetologic conceptions that had come to prominence among Sunnī religious circles during the twelfth Islamic century. These included negating those concepts and beliefs that bestowed the Prophet with mystical attributes that elevated Muhammad beyond the status of ordinary humans. In his introduction to Mukhtasar, Ibn 'Abd al-Wahhab asserts that every Prophet came with the mission of upholding Tawhid and prohibiting shirk. Ibn 'Abd al-Wahhab further tries to undermine the belief in the pre-existence of Muḥammad as a divine light preceding all other creation, a salient concept that served as an aspect of Prophetic devotion during the eleventh Islamic century. Additionally, Ibn ʿAbd al-Wahhāb omitted mentioning other episodes narrated in various sirah (Prophetic biography) works such as trees and stones allegedly expressing veneration for Muḥammad, purification of Muhammad's heart by angels, etc. which suggested that Muḥammad possessed characteristics that transcend those of ordinary humans.
Ibn 'Abd al-Wahhab adhered to Ibn Taymiyya's understanding of the concept of Isma (infallibility), which insisted that ʿiṣma does not prevent prophets from committing minor sins or making false statements. This differed from the alternative understanding of Sunni theologians like Fakhr al-Dīn al-Rāzi and Qāḍī ʿIyāḍ, who had emphasised the complete independence of the Prophet from any form of error or sin. Following Ibn Taymiyya, Muhammad ibn 'Abd al-Wahhab affirmed the incident of qiṣṣat al-gharānīq (the "story of the cranes" or "Satanic Verses"), which demonstrated that Muhammad was afflicted by "Satanic interference". This idea of Ibn Taymiyya had recently been revived in the circles of the Kurdish hadith scholar Ibrāhīm al-Kūrānī (1025/1616–1101/1686), whose son Abūl-Ṭāhir al-Kūrānī was the teacher of Muḥammad Ḥayāt al-Sindi, the master of Ibn 'Abd al-Wahhab. Using this concept to explain Tawhid al-ulūhiyya (Oneness of Worship), Ibn 'Abd al-Wahhab rejected the idea that anybody could act as an intercessor between God and man, employing the Qurʾānic verses related to the event. He also used these and other similar incidents to undermine the belief that prophets are completely free from sin, error, or Satanic afflictions.
Furthermore, Ibn 'Abd al-Wahhab gave little importance to Prophetic miracles in his Mukhtaṣar. Although he did not deny miracles as an expression of Divine Omnipotence so long as they are attested by the Qur'an or authentic hadith, al-Mukhtasar represented an open protest against the exuberance of miracles that characterised later biographies of Muḥammad. In Ibn 'Abd al-Wahhab's view, miracles are of little significance in the life of Muḥammad in comparison to the lives of the previous prophets, since central to his prophethood were the institutionalisation of Jihād and the ḥudud punishments. Contrary to prevalent religious beliefs, Muḥammad was not portrayed as the central purpose of creation in the historical conception of Mukhtaṣar; instead, he has a function within creation and for the created beings. Rather than being viewed as an extraordinary performer of miracles, Muhammad should instead be upheld as a model of emulation. By depriving the person of Muḥammad of all supernatural aspects not related to Wahy (revelation) and Divine intervention, Ibn 'Abd al-Wahhab also reinforced his rejection of beliefs and practices related to the cult of saints and the veneration of graves. Thus, Ibn ʿAbd al-Wahhāb's conception of history emphasised the necessity of following the role model of Muḥammad and re-establishing the Islamic order.
Influence on Salafism
Ibn ʿAbd al-Wahhab's movement is known today as Wahhabism. The designation of his doctrine as Wahhābiyyah actually derives from his father's name, ʿAbd al-Wahhab. Many adherents consider the label "Wahhabism" a derogatory term coined by his opponents, and prefer the movement to be known as the Salafi movement. Modern scholars of Islamic studies point out that "Salafism" is a term applied to several forms of puritanical Islam in various parts of the world, while Wahhabism refers to the specific Saudi school, which is seen as a stricter form of Salafism. However, modern scholars remark that Ibn 'Abd al-Wahhab's followers adopted the term "Salafi" as a self-designation much later. His early followers denominated themselves Ahl al-Tawhid and al-Muwahhidun ("Unitarians" or "those who affirm/defend the unity of God"), and were labeled "Wahhabis" by their opponents.
The Salafiyya movement was not directly connected to Ibn 'Abd al-Wahhab's movement in Najd. According to professor Abdullah Saeed, Ibn ʿAbd al-Wahhab should rather be considered one of the "precursors" of the modern Salafiyya movement, since he called for a return to the pristine purity of the early eras of Islam by adhering to the Qur'an and the Sunnah, rejected the blind following (Taqlid) of earlier scholars, and advocated Ijtihad. Scholars like Adam J. Silverstein consider the Wahhabi movement "the most influential expression of Salafism of the Islamist sort, both for its role in shaping (some might say: 'creating') modern Islamism, and for disseminating salafi ideas widely across the Muslim world."
On Islamic Revival
As a young scholar in Medina, Muhammad Ibn 'Abd al-Wahhab was profoundly influenced by the revivalist doctrines taught by his teachers Muhammad Hayyat ibn Ibrahim al-Sindhi and Abdullah Ibn Ibrahim Ibn Sayf. Many Wahhabi teachings, such as opposition to saint-cults, radical denunciation of the blind following of medieval commentaries, adherence to the Scriptures, and other revivalist thought, came from Muhammad Hayyat. Ibn Abd al-Wahhab's revivalist efforts were based on a strong belief in Tawhid (Oneness of Allah) and a firm adherence to the Sunnah. His reformative efforts left exemplary marks on contemporary Islamic scholarship. Viewing blind adherence (Taqlid) as an obstacle to the progress of Muslims, he dedicated himself to educating the masses so that they could be vanguards of Islam. According to Ibn Abd al-Wahhab, the degradation and backwardness of Muslims was due to their neglect of the teachings of Islam; he emphasized that progress could be achieved only by firmly adhering to Islam. He also campaigned against popular Sufi practices associated with istigatha, myths and superstitions.
On Sufism
Ibn ʿAbd al-Wahhab praised Tasawwuf. He stated the popular saying: "From among the wonders is to find a Sufi who is a faqih and a scholar who is an ascetic (zahid)". He described Tasawwuf as "the science of the deeds of the heart, which is known as the science of Suluk", and considered it as an important branch of Islamic religious sciences.
At the end of his treatise, Al-Hadiyyah al-Suniyyah, Ibn ʿAbd al-Wahhab's son 'Abd Allah speaks positively on the practice of tazkiah (purification of the inner self). 'Abd Allah Ibn ʿAbd al-Wahhab ends his treatise saying:
We do not negate the way of the Sufis and the purification of the inner self from the vices of those sins connected to the heart and the limbs as long as the individual firmly adheres to the rules of Shari‘ah and the correct and observed way. However, we will not take it on ourselves to allegorically interpret (ta’wil) his speech and his actions. We only place our reliance on, seek help from, beseech aid from and place our confidence in all our dealings in Allah Most High. He is enough for us, the best trustee, the best mawla and the best helper. May Allah send peace on our master Muhammad, his family and companions.
On Social Reforms
Muhammad ibn 'Abd al-Wahhab concerned himself with the social reformation of his people. As an 18th-century reformer, Muhammad ibn 'Abd al-Wahhab called for the re-opening of Ijtihad by qualified persons through strict adherence to the Scriptures in reforming society. His thoughts reflected the major trends apparent in the 18th-century Islamic reform movements. Unlike other reform movements, which were restricted to da'wa, Ibn 'Abd al-Wahhab was also able to transform his movement into a successful Islamic state. Thus, his teachings had a profound influence on the majority of Islamic reform-revivalist movements since the 18th century. Numerous significant socio-economic reforms would be advocated by the Imam during his lifetime. His reforms touched various fields such as aqeeda (creed), ibaadat (ritual acts of worship) and muamalaat (social interactions). In the affairs of mu'amalat, he harshly rebuked the practice of leaving endowments to prevent the rightful heirs (particularly the females) from receiving their deserved inheritance. He also objected to various forms of riba (usury), as well as the practice of presenting judges with gifts, which according to him was nothing more than bribery. He also opposed and brought an end to numerous un-Islamic taxes that were forced upon the people.
The legal writings of Ibn 'Abd al-Wahhab reflected a general concern for female welfare and gender justice. In line with this approach, Ibn 'Abd al-Wahhab denounced the practice of instant triple talaq, counting it as only a single talaq (regardless of the number of pronouncements). The outlawing of triple talaq is considered to be one of the most significant reforms across the Islamic world in the 20th and 21st centuries. Following a balanced approach in issues of gender, Ibn 'Abd al-Wahhab advocated moderation between men and women in social interactions as well as spirituality. According to Ibn 'Abd al-Wahhab, women have a place in society with both rights and responsibilities, with society being obliged to respect their status and protect them. He also condemned forced marriages and declared any marriage contracted without the consent of the woman (whether minor, virgin or non-virgin) to be "invalid". This too was a significant reform as well as a break from the four Sunni schools, which allowed the wali (ward/guardian) to compel minor daughters into marriage without consent. Ibn 'Abd al-Wahhab also stipulated the permission of the guardian as a condition in marriage (in line with the traditional Hanbali, Shafi'i and Maliki schools). Nevertheless, as a practical jurist, Ibn 'Abd al-Wahhab allowed guardians to delegate the right to contract the marriage to the woman herself, after which the guardian's permission could not be denied. He also allowed women the right to stipulate favourable conditions for themselves in the marriage contract. Ibn 'Abd al-Wahhab also defended a woman's right to divorce through Khul' for various reasons, including cases in which she despised her husband. He also prohibited the killing of women, children and various non-combatants, such as monks, the elderly, the blind, shaykhs, slaves and peasants, in warfare.
On Muslim saints
Ibn ʿAbd al-Wahhab strongly condemned the veneration of Muslim saints (which he described as worship) or associating divinity with beings other than God, labeling it shirk. Despite his great aversion to venerating saints after their earthly passing and seeking their intercession, Muhammad ibn ʿAbd al-Wahhab did not deny the existence of saints as such; on the contrary, he acknowledged that "the miracles of saints (karāmāt al-awliyāʾ) are not to be denied, and their right guidance by God is acknowledged" when they acted properly during their lives. Muhammad ibn Abd al-Wahhab opposed the practice of pilgrimage to saints' tombs, which he considered Bidʻah (heretical innovation), such as the pilgrimage to a tomb believed to belong to a companion of the Prophet named Dhiraar ibn al-Azwar in the valley of Ghobaira.
On Non-Muslims
According to the political scientist Dore Gold, Muhammad ibn ʿAbd al-Wahhab presented a strong anti-Christian and anti-Judaic stance in his main theological treatise Kitāb at-Tawḥīd, describing the followers of both Christian and Jewish faiths as sorcerers who believe in devil-worship, and by citing a hadith attributed to the Islamic prophet Muhammad he stated that capital punishment for the sorcerer is "that he be struck with the sword". Ibn ʿAbd al-Wahhab asserted that both the Christian and Jewish religions had improperly made the graves of their prophet into places of worship and warned Muslims not to imitate this practice. Ibn ʿAbd al-Wahhab concluded that "The ways of the People of the Book are condemned as those of polytheists."
However, Western scholar Natana J. DeLong-Bas defended the position of Muhammad ibn ʿAbd al-Wahhab, stating that
despite his at times vehement denunciations of other religious groups for their supposedly heretical beliefs, Ibn Abd al Wahhab never called for their destruction or death … he assumed that these people would be punished in the Afterlife …

According to Vahid Hussein Ranjbar, "Muhammad ibn ʿAbd al-Wahhab saw it as his mission to restore a more purer and original form of the faith of Islam". In accordance with his own theology, which upheld a strict doctrine of tawhid (oneness of God), Ibn ʿAbd al-Wahhab condemned the veneration of any personality other than God and sought the demolition of the tombs of Muslim saints (awliya). Those who did not adhere to his interpretation of monotheism, including Sufi and Shia Muslims as well as Christians, Jews, and other non-Muslims, were considered disbelieving polytheists. He also advocated for a literalist interpretation of the Quran and its laws.
Reception
By contemporaries
The doctrines of Ibn ʿAbd al-Wahhab were criticized by a number of Islamic scholars during his lifetime, who accused him of disregarding Islamic history, monuments, traditions and the sanctity of Muslim life. His critics were mainly ulama from his homeland, the Najd region of central Arabia, which was directly affected by the growth of the Wahhabi movement, as well as from the cities of Basra, Mecca, and Medina. The early opponents of Ibn ʿAbd al-Wahhab classified his doctrine as a "Kharijite sectarian heresy".
On the other hand, Ibn ʿAbd al-Wahhāb and his supporters held that they were the victims of aggressive warfare, accusing their opponents of initiating the pronouncements of Takfir (excommunication), and maintained that the military operations of the Emirate of Dirʿiyya were strictly defensive. The memory of the unprovoked military offensive launched on Diriyya in 1746 by Dahhām ibn Dawwās (fl. 1187/1773), the powerful chieftain of Riyadh, was deeply ingrained in the Wahhabi tradition. The early Wahhabi chronicler Ibn Ghannām states in his book Tarikh an-Najd (History of Najd) that Ibn ʿAbd al-Wahhāb did not order the use of violence until his enemies excommunicated him and deemed his blood licit: "He gave no order to spill blood or to fight against the majority of the heretics and the misguided until they started ruling that he and his followers were to be killed and excommunicated."
By 1802, the Ottoman empire had officially begun to wage religious campaigns against the Wahhabis, issuing tracts condemning them as Kharijites. In contrast, Ibn ʿAbd al-Wahhab profoundly despised the "decorous, arty tobacco-smoking, music happy, drum pounding, Egyptian and Ottoman nobility who traveled across Arabia to pray at Mecca each year", and intended either to subjugate them to his doctrine or to overthrow them. A handful of Arabian Hanbalis participated on the Ottoman side of the controversy. Muhammad ibn 'Abdullah ibn Humayd's 19th-century biographical dictionary sheds light on those Hanbali scholars. However, the reliability of his biography itself is disputed for its inherent biases, as it portrays Ibn ʿAbd al-Wahhab and his followers as heretics and misrepresents many Najdi Hanbali scholars as being on the side of the Ottoman Hanbalis.
Ibn Humayd's maternal lineage, Al-Turki, was of some local renown for its religious scholars, including two men who opposed the Wahhabi movement. One of them, named Ibn Muhammad, compared Ibn ʿAbd al-Wahhab with Musaylimah. He also accused Ibn ʿAbd al-Wahhab of wrongly declaring fellow Muslims to be infidels based on a misguided reading of Quranic passages and prophetic traditions (Hadith), and of wrongly declaring all scholars who did not agree with his "deviant innovation" to be infidels. In contrast to this anti-Wahhabi family tradition, Ibn Humayd's early education included extensive studies under two Wahhabi Shaykhs, both praised in his biographical dictionary. He then travelled to Damascus and Mecca, where he attended the lessons of men known for strong anti-Wahhabi convictions. Ibn Humayd's compatibility with the Ottoman religious outlook made him eligible for the post of Ottoman Mufti in Mecca.
Another Hanbali scholar whom Ibn Humayd portrays as a central figure in rejecting Ibn ʿAbd al-Wahhab's doctrine was Ibn Fayruz Al-Tamimi al-Ahsai (1729/30 – 1801/02). Ibn Fayruz publicly repudiated Ibn ʿAbd al-Wahhab's teachings when he sent an envoy to him. Ibn Fayruz then wrote to Sultan Abdul Hamid I and requested Ottoman assistance to subjugate Ibn ʿAbd al-Wahhab's followers, whom he referred to as the "seditious Kharijites" of Najd. The Wahhabis, in turn, came to view him as one of their worst enemies and an exemplar of idolatry.
According to Ibn Humayd, Ibn ʿAbd al-Wahhab's father criticized his son for his unwillingness to specialize in jurisprudence, disagreed with his doctrine, and declared that he would be the cause of wickedness. Similarly, his brother, Sulayman ibn 'Abd al-Wahhab, wrote one of the first treatises refuting the Wahhabi doctrine, The Divine Thunderbolts in Refutation of Wahhabism (Al-Šawā'iq Al-Ilāhiyya fī Al-radd 'alā Al-Wahhābiyya), affirming that Muhammad was ill-educated and intolerant, and classing his views as fringe and fanatical. Sulayman's first anti-Wahhabi treatise was followed by a second book, The Unmistakable Judgment in the Refutation of Muhammad ibn 'Abd al-Wahhab (Faṣl al-Ḫiṭāb fī Al-radd 'alā Muḥammad ibn ‘Abd al-Wahhāb). Both Muhammad ibn ʿAbd al-Wahhab's father and brother disagreed with him and did not share his doctrinal statements, because they considered his doctrine, and the way he intended to impose it in Arabia, too extreme and intolerant. The Arabian historian Ahmad ibn al-Zayni Dahlan, Shaykh al-Islām and Grand Mufti of the Shafi'i madhab in Mecca, recorded the account of the dispute between Muhammad ibn ʿAbd al-Wahhab and his brother Sulayman.
Later reports claim that Sulayman repented and joined the cause of his brother. However, there is a difference of opinion concerning his repentance. Ibn Ghannam, the earliest Najdi chronicler, specifically states that he repented from his previous position and joined his brother in Diriyah. Ibn Bishr simply states that he moved to Diriyah with his family and remained there while receiving a stipend, which may or may not be a sign that he had changed his views. A letter attributed to Sulayman states that he repented from his earlier views.
The Mufti of Mecca, Ahmad ibn al-Zayni Dahlan, wrote an anti-Wahhabi treatise in which he listed the religious practices that the Najdi Hanbalis considered idolatrous: visiting the tomb of Muhammad, seeking the intercession of saints, venerating Muhammad and obtaining the blessings of saints. He also accused Ibn ʿAbd al-Wahhab of not adhering to the Hanbali school and of being deficient in learning. However, Ibn ʿAbd al-Wahhab believed that visiting the tomb of Muhammad was a righteous deed, referring to it as "among the best of deeds" while condemning its excesses. The medieval theologians Ibn Taymiyyah and Ibn Qayyim, who inspired Ibn 'Abd al-Wahhab, had issued fatwas declaring visitation of the tomb of the Prophet Muhammad to be haram (forbidden), which led to their imprisonment.
In response, the British Indian Ahl-i Hadith scholar Muhammad Bashir Sahsawani (1834-1908 C.E) wrote the treatise Sayaanah al-Insaan an Waswaswah al-Shaikh Dahlaan in order to refute Dahlan. Sahsawani stated that he met more than one scholar of the followers of Ibn ʿAbd al-Wahhab and read many of their books; and did not find any evidence for the false claim that they declared "non-Wahhabis" disbelievers.
Islamic scholar Muhammad Rashid Rida (d. 1935 C.E/ 1354 A.H), in his introduction to al-Sahsawani's refutation of Dahlan, described Ibn ʿAbd al-Wahhab as a mujaddid repelling the innovations and deviations in Muslim life. Through his Al-Manar magazine, Rashid Rida greatly contributed to the spread of Ibn ʿAbd al-Wahhab's teachings in the Islamic world. He was a strong supporter of Ibn Taymiyyah and the scholars of Najd, publishing in his magazine works entitled Majmooah al-Rasaail wa al-Masaail al-Najdiyyah and al-Wahhaabiyoon wa al-Hijaaz. Rida notes that, given Dahlan's position in Mecca and the availability there of the works of Ibn ʿAbd al-Wahhab, he must simply have chosen to write otherwise. Rida also argued that Dahlan simply wrote what he heard from people, and criticised him for not verifying reports and seeking out the writings of Ibn ʿAbd al-Wahhab. He condemned Dahlan for his ignorance and for his sanctioning of acts of kufr and shirk, based on his reinterpretation of Islamic texts.
Rashid Rida contended that Ibn 'Abd al-Wahhab was a victim of persecution by the combined oppression of three forces: (i) the power of the state and its rulers, (ii) the power of hypocritical scholars, and (iii) the power of tyrannical commoners. Fiercely rebuking his opponents, Rashid Rida declared: "The best weapon they brandished against him was that he contradicted the majority of Muslims. Who were the majority of Muslims Muhammad Ibn Abdul Wahhab contradicted in his Da'wah? They were Bedouins of the desert, worse than the people of Jahiliyyah, intent on looting and theft. They allowed shedding the blood of Muslims and non-Muslims, just to earn a living. They took their tyrants as judges in every matter and denied many aspects of Islam on which there is consensus [especially among scholars], matters in which no Muslim can claim ignorance."
Ali Bey el Abbassi, a Spanish explorer who was in Mecca in 1803, shortly after the Wahhabi conquest of Hejaz, presented a starkly different view of the Wahhabis. He was surprised to find that they were fairly moderate, reasonable and civilized. He further observed that, rather than engaging in rampant violence and destruction, the Wahhabis were actually quite orderly and peaceful. Puzzled by the contradiction between popular image and reality, Ali Bey examined the historical record for clues. He found an important difference between the reign of Muhammad ibn Saud Al Muqrin, when Ibn ʿAbd al-Wahhab was active in political life, and that of his son, Abdulaziz bin Muhammad Al Saud, when Ibn 'Abd al-Wahhab withdrew from active political activity. Ali Bey noted that Muhammad Ibn Saud had supported the teachings of Ibn 'Abd al-Wahhab but did not use a "convert or die" approach to gaining adherents. This practice was used only during the reign of 'Abd Al-Azeez bin Muhammad Al Saud, who made selective use of Ibn ʿAbd al-Wahhab's teachings for the purpose of acquiring wealth and property for state consolidation—a contention supported by Ibn Bishr's chronicle. Ali Bey declared that he "discovered much reason and moderation among the Wahhabites to whom I spoke, and from whom I obtained the greater part of the information which I have given concerning their nation."
British diplomat Harford Jones-Brydges, who was stationed in Basra in 1784, attributed the popular hysteria about the Wahhabis to a different cause. Unlike the Ottoman depictions, Brydges believed that Ibn ʿAbd al-Wahhab's doctrine was in keeping with the teachings of the Quran, was "perfectly orthodox" and "consonant to the purest and best interpretations of that volume", and that the Ottomans feared its spread precisely on that basis.
The Egyptian historian and Azhari Islamic scholar Abd al-Rahman al-Jabarti (1753–1825 C.E) was deeply influenced and impressed by Ibn ʿAbd al-Wahhab and his movement. He spread its thought in Egypt and saw in the Wahhabi doctrines a great potential for Islamic revival. Al-Jabarti encountered Wahhabi scholars in Egypt in 1814, and despite all the negative things heard in popular discourse, he was highly impressed by them. He found them to be friendly and articulate, knowledgeable and well versed in historical events and curiosities. Al-Jabarti stated that the Wahhabis were "modest men of good morals, well trained in oratory, in the principles of religion, the branches of fiqh, and the disagreements of the Schools of Law. In all this they were extraordinary." He described Ibn ʿAbd al-Wahhab as a man who "summoned men to the Qur'an and the Prophet's Sunna, bidding them to abandon innovations in worship". On doctrinal matters, Al-Jabarti emphasized that the beliefs of Ibn ʿAbd al-Wahhab were part of orthodox Sunni Islam and stated that the Wahhabis did not bring anything new.
Moroccan military leader 'Abd al-Karim al-Khattabi (1882-1963 C.E) praised Ibn 'Abd al-Wahhab's reform endeavour as a "promising voice" that sparked a spiritual and intellectual awakening across the Islamic world.
Prominent Syrian Hanbali scholar 'Abd al-Qadir ibn Badran (1864-1927 C.E/ 1280-1346 A.H) praised the efforts of Muhammad ibn 'Abd al-Wahhab in his treatise Al-Madkhal ila Madhhab il-Imam Ahmad ibn Hanbal (An Introduction to the Madhab of Imam Ahmad ibn Hanbal), writing: "When he [i.e., Ibn 'Abd al-Wahhab] learned the narrations and the Sunnah and became expert in the madhab of Ahmad, he began supporting the Truth, fighting bid'ah and resisting what illiterates had made part of this monotheistic religion and Sharia of moderation. Some people supported him and made their worship solely to the One God, following his path, which was to establish pure Tawhid, call sincerely to monotheism and direct worship in all of its forms solely to the Creator of creation alone. Some people resisted him; they were used to rigidity in following what their forefathers did, and they armoured themselves with laziness instead of seeking the truth."
Modern reception
Ibn ʿAbd al-Wahhab is accepted by Salafi scholars as an authority and source of reference. Salafi scholars Rashid Rida and 'Abd al-Aziz ibn Baz considered him a mujaddid. 20th-century Albanian scholar Nasiruddin Albani referred to Ibn ʿAbd al-Wahhab's activism as Najdi dawah. According to the 20th-century Austro-Hungarian scholar Muhammad Asad, all modern Islamic Renaissance movements took inspiration from the spiritual impetus set in motion in the 18th-century by Muhammad Ibn ʿAbd al-Wahhab. Rashid Ahmad Gangohi, one of the founders of the Deobandi school, also praised Ibn ʿAbd al-Wahhab. Hence, the contemporary ulema of Deoband mostly respect him while being critical of the Salafi movement.
Islamic scholar Yusuf Al-Qārādawī praised Muhammad ibn 'Abd al-Wahhab as a Mujaddid (religious reviver) of the Arabian Peninsula who defended the purity of Tawhid from various superstitions and polytheistic beliefs. Praising Ibn 'Abd al-Wahhab's efforts, Muhammad Rashīd Ridá wrote: "Muhammad bin Abd al-Wahhab al-Najdi was one of those Mujaddids, [who] called for the upholding of Tawhid and the sincerity of worship to God alone with what He legislated in His Book and on the tongue of His Messenger, the Seal of the Prophets; ... abandoning heresies and sins, establishing the abandoned rituals of Islam, and venerating its violated sanctities."
In his book "Saviours of the Islamic Spirit", Islamic scholar Abul Hasan Ali Nadwi (1913-1999 C.E) acclaimed Ibn 'Abd al-Wahhab as a "great reformer" who called his people to Tawhid, revived injunctions based on Qur'an and Sunnah and eradicated superstitious rites prevelent amongst the illiterate masses of Central Arabia. Nadwi compared his movement to that of the contemporay South Asian Islamic revivalist Shah Waliullah Dehlawi (1703-1762 C.E/ 1114-1176 A.H) who had expounded similar ideas such as differentiating between Tawhid-i-Uluhiyyat (Oneness of Worship) and Tawhid-i-Rububiyat (Oneness of Lordship) and promotion of strict adherence to Qur'an and Hadith. In Nadwi's opinion, Ibn 'Abd al-Wahhab was able to make outstanding efforts with far-reaching impact compared to other contemporary reformers since he played the role of a revolutionary reformer whose initiatives were implemented through a newly established Islamic state and thus his movement was highly pertinent for the people of his time.
In 2010, Prince Salman bin Abdulaziz, at the time serving as the governor of Riyadh, said that the doctrine of Muhammad Ibn ʿAbd al-Wahhab was pure Islam, and said regarding his works: "I dare anyone to bring a single alphabetical letter from the Sheikh's books that goes against the book of Allah and the teachings of his prophet, Muhammad."
Western Reception
In 21st-century Western security discourse, Muhammad Ibn 'Abd al-Wahhab's movement, Wahhabism, is often associated with various Jihadi movements across the Islamic world. According to various Western analysts, the Islamist terrorist organization Al-Qaeda has been influenced by the Wahhabi doctrine. Other scholars note that the ideology of Al-Qaeda is Salafi jihadism, which emerged as a synthesis of the Qutbist doctrine with Salafism. The Taliban in Afghanistan were often conflated with Wahhabis in the early 2000s; however, the Taliban emerged from the Deobandi school rather than the Wahhabi movement. According to other sources, Salafis are fundamentally opposed to the ideology of Al-Qaeda. According to various scholars, the ideology of the Islamic State, another Islamic terrorist organization, has also been inspired by Wahhabi doctrines, alongside Salafism, Qutbism, and Salafi jihadism.
During the post-9/11 period, when the FBI listed al-Qaeda as "the number one terrorist threat to the United States", US journalist Lulu Schwartz and former U.S. Senator and Republican politician Jon Kyl asserted, during a hearing before the Subcommittee on Terrorism, Technology, and Homeland Security of the U.S. Senate in June 2003, that "Wahhabism is the source of the overwhelming majority of terrorist atrocities in today's world". Their recommendations would become influential in 21st-century US foreign policy.
Meanwhile, contemporary Western historians and researchers have taken a more nuanced approach to the history and evolution of the Muwahhidun movement, pointing out the discrepancy between Ibn 'Abd al-Wahhab's teachings, those of some of his later followers, and the actions of contemporary militant Jihadist groups. Western scholars like Michael Ryan assert that Ibn 'Abd al-Wahhab's reformist teachings were a rationalist enterprise that sought to eradicate superstitions widespread in the context of tribal rivalry within the Arabian Peninsula. Moreover, the regional background of Ibn 'Abd al-Wahhab's intellectual efforts in the chaotic context of the 18th-century Arabian Peninsula was distinct from the 21st-century global Jihad ideology of organisations like Al-Qaeda or IS. Consequently, his scholarly heirs, including the prestigious Aal al-Shaykhs, constitute the primary ideological nemesis of groups such as Al-Qaeda. Since the Saudi population overwhelmingly prefers its traditional religious institutions and scholars to Bin Laden's claims to revolutionary Jihadi-Salafism, Al-Qaeda harshly attacks these mainstream Saudi clerics with much theological vitriol.
Contemporary recognition
The national mosque of Qatar is named after him. The "Imam Muhammad ibn Abd al-Wahhab Mosque" was opened in 2011, with the Emir of Qatar presiding over the occasion. The mosque has the capacity to host a congregation of 30,000 people. In 2017, there was a request published in the Saudi Arabian newspaper Okaz signed by 200 descendants of Ibn 'Abd al-Wahhab that the name of the mosque be changed, because according to their statement "it does not carry its true Salafi path", even though most Qataris adhere to Wahhabism.
The Turaif district in Diriyah, the capital of the First Saudi state, was declared a UNESCO World Heritage Site in 2010. In 2011, Saudi Arabia announced its plans for large-scale development of Ibn 'Abd al-Wahhab's domain, Diriyah, to establish a national cultural site there and turn it into a major tourist attraction. Other features in the area include the Sheikh Muhammad bin Abdul Wahab Foundation, which is planned to include a light and sound presentation located near the Mosque of Sheikh Mohammad bin Abdulwahab.
Works
Risālah Aslu Dīn Al-Islām wa Qā'idatuhu
Kitab al-Quran (The Book of Allah)
Kitab at-Tawhid (The Book of the Oneness of God)
Kashf ush-Shubuhaat (Clarification of the Doubts)
Al-Usool-uth-Thalaatha (The Three Fundamental Principles)
Al Qawaaid Al 'Arbaa (The Four Foundations)
Al-Usool us Sittah (The Six Fundamental Principles)
Nawaaqid al Islaam (Nullifiers of Islam)
Adab al-Mashy Ila as-Salaa (Manners of Walking to the Prayer)
Usul al-Iman (Foundations of Faith)
Fada'il al-Islam (Excellent Virtues of Islam)
Fada'il al-Qur'an (Excellent Virtues of the Qur'an)
Majmu'a al-Hadith 'Ala Abwab al-Fiqh (Compendium of the Hadith on the Main Topics of the Fiqh)
Mukhtasar al-Iman (Abridgement of the Faith; i.e. the summarised version of a work on Faith)
Mukhtasar al-Insaf wa'l-Sharh al-Kabir (Abridgement of the Equity and the Great Explanation)
Mukhtasar Seerat ar-Rasul (Summarised Biography of the Prophet)
Kitaabu l-Kabaair (The Book of Great Sins)
Kitabu l-Imaan (The Book of Faith)
Al-Radd 'ala al-Rafida (The Refutation of the Rejectionists)
See also
Ibn Taymiyyah
Wahhabi Movement
Emirate of Diriyah
International propagation of Salafism and Wahhabism
Sources
Two of the earliest sources for the biography of Muhammad ibn 'Abd al-Wahhab and early history of the Wahhabi movement have been documented by its followers:
Wahhabi chronicler and scholar Husain ibn Ghannam's (d. 1811) Rawdhat al-Afkar wal-Afham, also known as Tarikh Najd (History of Najd). Ibn Ghannam, an alim from al-Hasa, was the only historian to have observed the beginnings of Ibn ʿAbd al-Wahhab's movement first-hand. His chronicle ends at the year 1797.
Najdi Historian Ibn Bishr's Unwan al-Majd fi Tarikh Najd (The Glorious History of Najd). Ibn Bishr's chronicle, which stops at the year 1854, was written a generation later than Ibn Ghannam's but is considered valuable partly because Ibn Bishr was a native of Najd and because he adds many details to Ibn Ghannam's account.
A third account, covering Arabian history from the 1730s to 1817, is Lam' al-Shihab (The Brilliance of the Meteor), written by an anonymous author who respectfully disapproved of Ibn ʿAbd al-Wahhab's movement, regarding it as a bid'ah (heresy). It is also commonly cited in Orientalist circles because it is considered a relatively objective and unofficial treatment of the subject. However, unlike Ibn Ghannam and Ibn Bishr, its author did not live in Najd, and his work contains various apocryphal tales and legendary materials concerning the details of Ibn ʿAbd al-Wahhab's life.
Notes
References
Bibliography
Further reading
Valentine, S. R., "Force & Fanaticism: Wahhabism in Saudi Arabia and Beyond", Hurst & Co, London, 2015.
Online
Muḥammad ibn ʿAbd al-Wahhāb: Muslim theologian, in Encyclopædia Britannica Online, by The Editors of Encyclopædia Britannica, Parul Jain, Satyavrat Nirala and Adam Zeidan
External links
Biodata at MuslimScholars.info
Maine
Maine is a state in the New England region of the United States, bordered by New Hampshire to the west; the Gulf of Maine to the southeast; and the Canadian provinces of New Brunswick and Quebec to the northeast and northwest, respectively. Maine is the 12th-smallest U.S. state by area, the 9th-least populous, the 13th-least densely populated, and the most rural of the 50 states. It is also the northeasternmost of the contiguous United States, the northernmost state east of the Great Lakes, the only state whose name consists of a single syllable, and the only state to border exactly one other U.S. state. The most populous city in Maine is Portland, while its capital is Augusta.
Maine has traditionally been known for its jagged, rocky Atlantic Ocean and bayshore coastlines; smoothly contoured mountains; heavily forested interior; picturesque waterways; and its wild lowbush blueberries and seafood cuisine, especially lobster and clams. In more recent years, coastal and Down East Maine, especially in the vicinity of Portland, have emerged as an important center for the creative economy, which is also bringing gentrification.
For thousands of years after the glaciers retreated during the last Ice Age, indigenous peoples were the only inhabitants of the territory that is now Maine. At the time of European arrival, several Algonquian-speaking peoples inhabited the area. The first European settlement in the area was by the French in 1604 on Saint Croix Island, founded by Pierre Dugua, Sieur de Mons. The first English settlement was the short-lived Popham Colony, established by the Plymouth Company in 1607. A number of English settlements were established along the coast of Maine in the 1620s, although the rugged climate and conflict with the local indigenous people caused many to fail.
As Maine entered the 18th century, only a half dozen European settlements had survived. Loyalist and Patriot forces contended for Maine's territory during the American Revolution. During the War of 1812, the largely undefended eastern region of Maine was occupied by British forces with the goal of annexing it to Canada via the Colony of New Ireland, but it was returned to the United States after failed British offensives on the northern border, the mid-Atlantic, and the south produced a peace treaty that restored the pre-war boundaries. Maine was part of the Commonwealth of Massachusetts until 1820, when it voted to secede from Massachusetts to become a separate state. On March 15, 1820, under the Missouri Compromise, it was admitted to the Union as the 23rd state.
Name
There is no definitive explanation for the origin of the name "Maine", but the most likely is that early explorers named it after the former province of Maine in France. Whatever the origin, the name was fixed for English settlers in 1665 when the English King's Commissioners ordered that the "Province of Maine" be entered from then on in official records. The state legislature in 2001 adopted a resolution establishing Franco-American Day, which stated that the state was named after the former French province of Maine.
Other theories mention earlier places with similar names or claim it is a nautical reference to the mainland. Captain John Smith, in his "Description of New England" (1614) laments the lack of exploration: "Thus you may see, of this 2000. miles more then halfe is yet vnknowne to any purpose: no not so much as the borders of the Sea are yet certainly discouered. As for the goodnes and true substances of the Land, wee are for most part yet altogether ignorant of them, vnlesse it bee those parts about the Bay of Chisapeack and Sagadahock: but onely here and there wee touched or haue seene a little the edges of those large dominions, which doe stretch themselues into the Maine, God doth know how many thousand miles;"
Note that his description of the mainland of North America is "the Maine". The word "main" was a frequent shorthand for "mainland" (as in "The Spanish Main").
Attempts to uncover the history of the name of Maine began with James Sullivan's 1795 "History of the District of Maine." He made the unsubstantiated claim that the Province of Maine was a compliment to the queen of Charles I, Henrietta Maria, who once "owned" the Province of Maine in France. Maine historians quoted this until the 1845 biography of that queen by Agnes Strickland established that she had no connection to the province; further, King Charles I married Henrietta Maria in 1625, three years after the name Maine first appeared on the charter. A new theory put forward by Carol B. Smith Fisher in 2002 postulated that Sir Ferdinando Gorges chose the name in 1622 to honor the village where his ancestors first lived in England, rather than the province in France. "MAINE" appears in the Domesday Book of 1086 in reference to the county of Dorset, which is today Broadmayne, just southeast of Dorchester.
The view generally held among British place name scholars is that Mayne in Dorset is Brythonic, corresponding to modern Welsh "maen", plural "main" or "meini". Some early spellings are: MAINE 1086, MEINE 1200, MEINES 1204, MAYNE 1236. Today the village is known as Broadmayne, which is primitive Welsh or Brythonic, "main" meaning rock or stone, considered a reference to the many large sarsen stones still present around Little Mayne farm, half a mile northeast of Broadmayne village.
The first known record of the name appears in an August 10, 1622, land charter to Sir Ferdinando Gorges and Captain John Mason, English Royal Navy veterans, who were granted a large tract in present-day Maine that Mason and Gorges "intend to name the Province of Maine". Mason had served with the Royal Navy in the Orkney Islands, where the chief island is called Mainland, a possible name derivation for these English sailors. In 1623, the English naval captain Christopher Levett, exploring the New England coast, wrote: "The first place I set my foote upon in New England was the Isle of Shoals, being Ilands in the sea, above two Leagues from the Mayne." Initially, several tracts along the coast of New England were referred to as Main or Maine (cf. the Spanish Main). A reconfirmed and enhanced April 3, 1639, charter, from England's King Charles I, gave Sir Ferdinando Gorges increased powers over his new province and stated that it "shall forever hereafter, be called and named the PROVINCE OR COUNTIE OF MAINE, and not by any other name or names whatsoever..." Maine is the only U.S. state whose name has only one syllable.
History
The original inhabitants of the territory that is now Maine were Algonquian-speaking Wabanaki peoples, including the Passamaquoddy, Maliseet, Penobscot, Androscoggin, and Kennebec. During the later King Philip's War, many of these peoples would merge in one form or another to become the Wabanaki Confederacy, aiding the Wampanoag of Massachusetts and the Mahican of New York. Afterwards, many of these people were driven from their natural territories, but most of Maine's tribes continued, unchanged, until the American Revolution. Before this point, however, most of these people were considered separate nations. Many had adapted to living in permanent, Iroquois-inspired settlements, while those along the coast tended to be semi-nomadic—traveling from settlement to settlement on a yearly cycle. They would usually winter inland and head to the coasts by summer.
European contact with what is now called Maine may have started around 1200 CE when Norwegians are believed to have interacted with the native Penobscot in present-day Hancock County, most likely through trade. If confirmed, this would make Maine the site of the earliest European landfall in the entire US. About 200 years earlier, from the settlements in Iceland and Greenland, Norwegians first identified America and attempted to settle areas such as Newfoundland, but failed to establish a permanent settlement. Archeological evidence suggests that Norwegians in Greenland returned to North America for several centuries after the initial discovery to trade and collect timber, with the most relevant evidence being the Maine Penny, an 11th-century Norwegian coin found at a Native American dig site in 1954.
The first confirmed European settlement in modern-day Maine was established in 1604 on Saint Croix Island, led by French explorer Pierre Dugua, Sieur de Mons. His party included Samuel de Champlain, noted as an explorer. The French named the entire area Acadia, including the portion that later became the state of Maine. The Plymouth Company established the first English settlement in Maine at the Popham Colony in 1607, the same year as the settlement at Jamestown, Virginia. The Popham colonists returned to Britain after 14 months.
The French established two Jesuit missions: one on Penobscot Bay in 1609, and the other on Mount Desert Island in 1613. The same year, Claude de La Tour established Castine. In 1625, Charles de Saint-Étienne de la Tour erected Fort Pentagouet to protect Castine. The coastal areas of eastern Maine first became the Province of Maine in a 1622 land patent. The part of western Maine north of the Kennebec River was more sparsely settled and was known in the 17th century as the Territory of Sagadahock. A second settlement was attempted in 1623 by English explorer and naval Captain Christopher Levett at a place called York, where he had been granted land by King Charles I of England. It also failed.
The 1622 patent of the Province of Maine was split at the Piscataqua River into the Province of New Hampshire to the south and New Somersetshire to the north. A disputed 1630 patent split off the area around present-day Saco as Lygonia. Justifying its actions with a 1652 geographic survey that showed an overlapping patent, the Massachusetts Bay Colony had seized New Somersetshire and Lygonia by force by 1658. The Territory of Sagadahock between the Kennebec River and St. Croix River notionally became Cornwall County, Province of New York under a 1664 grant from Charles II of England to his brother James, at the time the Duke of York. Some of this land was claimed by New France as part of Acadia. All of the English settlements in the Massachusetts Bay Colony and the Province of New York became part of the Dominion of New England in 1686. All of present-day Maine was unified as York County, Massachusetts under a 1691 royal patent for the Province of Massachusetts Bay.
Central Maine was formerly inhabited by the Androscoggin tribe of the Abenaki nation, also known as Arosaguntacook. They were driven out of the area in 1690 during King William's War. They were relocated to St. Francis, Canada, which was destroyed by Rogers' Rangers in 1759, and is now Odanak. The other Abenaki tribes suffered several severe defeats, particularly during Dummer's War, with the capture of Norridgewock in 1724 and the defeat of the Pequawket in 1725, which significantly reduced their numbers. They finally withdrew to Canada, where they were settled at Bécancour and Sillery, and later at St. Francis, along with other refugee tribes from the south.
Maine was much fought over by the French, English, and allied natives during the 17th and 18th centuries, with each side conducting raids against the others and taking captives for ransom or, in some cases, adoption by Native American tribes. A notable example was the early 1692 Abenaki raid on York, in which about 100 English settlers were killed and an estimated 80 more taken hostage. During Queen Anne's War of the early 1700s, the Abenaki took captives from raids on Massachusetts to Kahnewake, a Catholic Mohawk village near Montreal, where some were adopted and others ransomed.
After the British defeated the French in Acadia in the 1740s, the territory from the Penobscot River east fell under the nominal authority of the Province of Nova Scotia, and together with present-day New Brunswick formed the Nova Scotia county of Sunbury, with its court of general sessions at Campobello. American and British forces contended for Maine's territory during the American Revolution and the War of 1812, with the British occupying eastern Maine in both conflicts via the Colony of New Ireland. The territory of Maine was confirmed as part of Massachusetts when the United States was formed following the Treaty of Paris ending the revolution, although the final border with British North America was not established until the Webster–Ashburton Treaty of 1842.
Maine was physically separate from the rest of Massachusetts. Long-standing disagreements over land speculation and settlements led to Maine residents and their allies in Massachusetts proper forcing an 1807 vote in the Massachusetts Assembly on permitting Maine to secede; the vote failed. Secessionist sentiment in Maine was stoked during the War of 1812 when Massachusetts pro-British merchants opposed the war and refused to defend Maine from British invaders. In 1819, Massachusetts agreed to permit secession, sanctioned by voters of the rapidly growing region the following year.
Statehood and Missouri Compromise
Formal secession from Massachusetts and admission of Maine as the 23rd state occurred on March 15, 1820, as part of the Missouri Compromise, which geographically limited the spread of slavery and enabled the admission to statehood of Missouri the following year, keeping a balance between slave and free states.
Maine's original state capital was Portland, Maine's largest city, until it was moved to the more central Augusta in 1832. The principal office of the Maine Supreme Judicial Court remains in Portland.
The 20th Maine Volunteer Infantry Regiment, under the command of Colonel Joshua Lawrence Chamberlain, prevented the Union Army from being flanked at Little Round Top by the Confederate Army during the Battle of Gettysburg.
Four U.S. Navy ships have been named USS Maine, most famously the armored cruiser whose sinking by an explosion on February 15, 1898, precipitated the Spanish–American War.
Geography
To the south and east is the Gulf of Maine, and to the west is the state of New Hampshire. The Canadian province of New Brunswick is to the north and northeast, and the province of Québec is to the northwest. Maine is the northernmost state in New England and the largest, accounting for almost half of the region's entire land area. Maine is the only state to border exactly one other American state (New Hampshire).
Maine is the easternmost state in the United States both in its extreme points and in its geographic center. The town of Lubec is the easternmost organized settlement in the United States. Its Quoddy Head Lighthouse is also the closest place in the United States to Africa and Europe. Estcourt Station is Maine's northernmost point, as well as the northernmost point in New England. (For more information see extreme points of the United States.)
Maine's Moosehead Lake is the largest lake wholly in New England, since Lake Champlain is located between Vermont, New York and Québec. A number of other Maine lakes, such as South Twin Lake, are described by Thoreau in The Maine Woods (1864). Mount Katahdin is the northern terminus of the Appalachian Trail, which extends southerly to Springer Mountain, Georgia, and the southern terminus of the new International Appalachian Trail which, when complete, will run to Belle Isle, Newfoundland and Labrador.
Machias Seal Island and North Rock, off the state's Downeast coast, are claimed by both Canada and the American town of Cutler, and are within one of four areas between the two countries whose sovereignty is still in dispute, but it is the only one of the disputed areas containing land. Also in this easternmost area in the Bay of Fundy is the Old Sow, the largest tidal whirlpool in the Western Hemisphere.
Maine is the least densely populated U.S. state east of the Mississippi River. It is called the Pine Tree State; over 80% of its total land is forested or unclaimed, the most forest cover of any U.S. state. In the wooded areas of the interior lies much uninhabited land, some of which does not have formal political organization into local units (a rarity in New England). The Northwest Aroostook unorganized territory in the northern part of the state, for example, has a population of just 10.
Maine is in the temperate broadleaf and mixed forests biome. The land near the southern and central Atlantic coast is covered by the mixed oaks of the Northeastern coastal forests. The remainder of the state, including the North Woods, is covered by the New England–Acadian forests.
Maine has an extensive ocean and tidal coastline. West Quoddy Head in Lubec is the easternmost point of land in the 48 contiguous states. Along the famous rock-bound coast of Maine are lighthouses, beaches, fishing villages, and thousands of offshore islands, including the Isles of Shoals, which straddle the New Hampshire border. There are jagged rocks and cliffs and many bays and inlets. Inland are lakes, rivers, forests, and mountains. This visual contrast of forested slopes sweeping down to the sea has been summed up by American poet Edna St. Vincent Millay of Rockland and Camden, in "Renascence":
Geologists describe this type of landscape as a "drowned coast", where a rising sea level has invaded former land features, creating bays out of valleys and islands out of mountain tops. A rise in land elevation due to the melting of heavy glacier ice caused a slight rebounding effect of underlying rock; this land rise, however, was not enough to eliminate all the effect of the rising sea level and its invasion of former land features.
Much of Maine's geomorphology was created by extended glacial activity at the end of the last ice age. Prominent glacial features include Somes Sound and Bubble Rock, both part of Acadia National Park on Mount Desert Island. Carved by glaciers, Somes Sound is considered to be the only fjord on the eastern seaboard. Its extreme depth and steep drop-off allow large ships to navigate almost the entire length of the sound. These features also have made it attractive for boat builders, such as the prestigious Hinckley Yachts.
Bubble Rock, a glacial erratic, is a large boulder perched on the edge of Bubble Mountain in Acadia National Park. By analyzing the type of granite, geologists discovered that glaciers carried Bubble Rock to its present location from near Lucerne. The Iapetus Suture runs through the north and west of the state, which are underlain by the ancient Laurentian terrane; the south and east are underlain by the Avalonian terrane.
Acadia National Park is the only national park in New England. Areas under the protection and management of the National Park Service include:
Acadia National Park near Bar Harbor
Appalachian National Scenic Trail
Maine Acadian Culture in St. John Valley
Roosevelt Campobello International Park on Campobello Island in New Brunswick, Canada, operated by both the U.S. and Canada, just across the Franklin Delano Roosevelt Bridge from Lubec
Saint Croix Island International Historic Site at Calais
Katahdin Woods and Waters National Monument
Lands under the control of the state of Maine include:
Maine State Parks
Maine Wildlife Management Areas (WMA)
Climate
Maine has a humid continental climate (Köppen climate classification Dfb), with warm and sometimes humid summers, and long, cold, and very snowy winters. Winters are especially severe in the northern and western parts of Maine, while coastal areas are moderated slightly by the Atlantic Ocean, resulting in marginally milder winters and cooler summers than inland regions. July daytime highs are broadly similar throughout the state, with overnight lows in the high 50s °F (around 15 °C). January temperatures are mildest on the southern coast, while overnight lows in the far north average well below freezing.
The state's record high temperature was set in July 1911 at North Bridgton.
Precipitation in Maine is evenly distributed year-round, but with a slight summer maximum in northern/northwestern Maine and a slight late-fall or early-winter maximum along the coast due to "nor'easters" or intense cold-season rain and snowstorms. In coastal Maine, the late spring and summer months are usually driest—a rarity across the Eastern United States. Maine has fewer days of thunderstorms than any other state east of the Rockies, with most of the state averaging fewer than twenty days of thunderstorms a year. Tornadoes are rare in Maine, with the state averaging fewer than four per year, although this number is increasing. Most severe thunderstorms and tornadoes occur in the Sebago Lakes & Foothills region of the state. Maine rarely sees the effect of tropical cyclones, as they tend to pass well east and south or are greatly weakened by the time they reach Maine.
In January 2009, a new record low temperature for the state, tying the New England record, was set at Big Black River.
Annual precipitation varies across the state, from the lowest totals at Presque Isle to the highest in Acadia National Park.
Demographics
Population
The United States Census Bureau estimates that the population of Maine was 1,344,212 on July 1, 2019, a 1.19% increase since the 2010 United States census. At the 2020 census, 1,362,359 people lived in the state. The state's population density is 41.3 people per square mile, making it the least densely populated state east of the Mississippi River. As of 2010, Maine was also the most rural state in the Union, with only 38.7% of the state's population living within urban areas. As explained in detail under "Geography", there are large tracts of uninhabited land in some remote parts of the interior of the state, particularly in the North Maine Woods.
The mean population center of Maine is located in Kennebec County, just east of Augusta. The Greater Portland metropolitan area is the most densely populated with nearly 40% of Maine's population. This area spans three counties and includes many farms and wooded areas; the 2016 population of Portland proper was 66,937.
Maine has experienced a very slow rate of population growth since the 1990 census; its rate of growth (0.57%) since the 2010 census ranks 45th of the 50 states. The modest population growth in the state has been concentrated in the southern coastal counties, with more diverse populations slowly moving into these areas of the state. However, the northern, more rural areas of the state have experienced a slight decline in population in recent years.
According to the 2010 Census, Maine has the highest percentage of non-Hispanic whites of any state, at 94.4% of the total population. In 2011, 89.0% of all births in the state were to non-Hispanic white parents. Maine also has the second-highest residential senior population.
The table below shows the racial composition of Maine's population as of 2016.
According to the 2016 American Community Survey, 1.5% of Maine's population were of Hispanic or Latino origin (of any race): Mexican (0.4%), Puerto Rican (0.4%), Cuban (0.1%), and other Hispanic or Latino origin (0.6%). The five largest ancestry groups were: English (20.7%), Irish (17.3%), French (15.7%), German (8.1%), and American (7.8%).
Those citing American ancestry are overwhelmingly of English descent, but their families have been in the region for so long (often since the 17th century) that they choose to identify simply as Americans.
Maine has the highest percentage of French Americans of any state. Most of them are of Canadian origin, but in some cases have been living there since prior to the American Revolutionary War. There are particularly high concentrations in the northern part of Maine in Aroostook County, which is part of a cultural region known as Acadia that goes over the border into New Brunswick. Along with the Acadian population in the north, many French came from Quebec as immigrants between 1840 and 1930.
The upper Saint John River valley area was once part of the so-called Republic of Madawaska, before the frontier was decided in the Webster-Ashburton Treaty of 1842. Over a quarter of the population of Lewiston, Waterville, and Biddeford are Franco-American. Most of the residents of the Mid Coast and Down East sections are chiefly of British heritage. Smaller numbers of various other groups, including Irish, Italian and Polish, have settled throughout the state since the late 19th and early 20th century immigration waves.
Birth data
Note: Births in table do not sum to 100% because Hispanics are counted both by their ethnicity and by their race.
Since 2016, data for births of White Hispanic origin are not collected, but included in one Hispanic group; persons of Hispanic origin may be of any race.
Language
Maine does not have an official language, but the most widely spoken language in the state is English. The 2000 Census reported 92.25% of Maine residents aged five and older spoke only English at home. French-speakers are the state's chief linguistic minority; census figures show that Maine has the highest percentage of people speaking French at home of any state: 5.28% of Maine households are French-speaking, compared with 4.68% in Louisiana, which is the second highest state. Although rarely spoken, Spanish is the third-most-common language in Maine, after English and French.
Religion
According to the Association of Religion Data Archives (ARDA), the religious affiliations of Maine in 2010 were:
Protestant 37%
Evangelical Protestant 4%
Unclaimed 31%
Catholic Church 28%
Other religions 1.7%
Non-Christian religions include Hinduism, Islam, Buddhism and Baháʼí.
The Catholic Church was the largest religious institution with 202,106 members, the United Methodist Church had 28,329 members, and the United Church of Christ had 22,747 members.
In 2010, a study named Maine as the least religious state in the United States.
Economy
Total employment (2016): 511,936
Total employer establishments (2016): 41,178
The Bureau of Economic Analysis estimates that Maine's total gross state product for 2010 was $52 billion. Its per capita personal income for 2007 was $33,991, 34th in the nation. Maine's unemployment rate is 3.0%.
Maine's agricultural outputs include poultry, eggs, dairy products, cattle, wild blueberries, apples, maple syrup, and maple sugar. Aroostook County is known for its potato crops. Commercial fishing, once a mainstay of the state's economy, maintains a presence, particularly lobstering and groundfishing. While lobster is the main seafood focus for Maine, the harvests of both oysters and seaweed are on the rise. In 2015, 14% of the Northeast's total oyster supply came from Maine. In 2017, the production of Maine's seaweed industry was estimated at $20 million per year. The shrimp industry of Maine is on a government-mandated hold. With an ever-decreasing Northern shrimp population, Maine fishermen are no longer allowed to catch and sell shrimp. The hold began in 2014 and is expected to continue until 2021. Western Maine aquifers and springs are a major source of bottled water.
Maine's industrial outputs consist chiefly of paper, lumber and wood products, electronic equipment, leather products, food products, textiles, and bio-technology. Naval shipbuilding and construction remain key as well, with Bath Iron Works in Bath and Portsmouth Naval Shipyard in Kittery.
Brunswick Landing, formerly Naval Air Station Brunswick, is also in Maine. Formerly a large support base for the U.S. Navy, the BRAC campaign initiated the Naval Air Station's closing, despite a government-funded effort to upgrade its facilities. The former base has since been changed into a civilian business park, as well as a new satellite campus for Southern Maine Community College.
Maine is the number one U.S. producer of low-bush blueberries (Vaccinium angustifolium). Preliminary data from the USDA for 2012 also indicate Maine was the largest blueberry producer of the major blueberry producing states, with 91,100,000 lbs. This data includes both low (wild), and high-bush (cultivated) blueberries: Vaccinium corymbosum. The largest toothpick manufacturing plant in the United States used to be located in Strong, Maine. The Strong Wood Products plant produced 20 million toothpicks a day. It closed in May 2003.
Tourism and outdoor recreation play a major and increasingly important role in Maine's economy. The state is a popular destination for sport hunting (particularly deer, moose and bear), sport fishing, snowmobiling, skiing, boating, camping and hiking, among other activities. Concomitantly with the tourist and recreation-oriented economy, Maine has developed a burgeoning creative economy, most notably centered in the Greater Portland vicinity.
Historically, Maine ports played a key role in national transportation. Beginning around 1880, Portland's rail link and ice-free port made it Canada's principal winter port, until the aggressive development of Halifax, Nova Scotia, in the mid-20th century. In 2013, 12,039,600 short tons passed into and out of Portland by sea, which places it 45th of U.S. water ports. Portland International Jetport has been expanded, providing the state with increased air traffic from carriers such as JetBlue and Southwest Airlines.
Maine has very few large companies that maintain headquarters in the state, and that number has fallen due to consolidations and mergers, particularly in the pulp and paper industry. Some of the larger companies that do maintain headquarters in Maine include Covetrus in Portland, Fairchild Semiconductor in South Portland, IDEXX Laboratories in Westbrook, Hannaford Bros. Co. in Scarborough; TD Bank in Portland and L.L.Bean in Freeport. Maine is also the home of the Jackson Laboratory, the world's largest non-profit mammalian genetic research facility and the world's largest supplier of genetically purebred mice.
Taxation
Maine has an income tax structure containing two brackets, 6.5 and 7.95 percent of personal income. Before July 2013 Maine had four brackets: 2, 4.5, 7, and 8.5 percent. Maine's general sales tax rate is 5.5 percent. The state also levies charges of nine percent on lodging and prepared food and ten percent on short-term auto rentals. Commercial sellers of blueberries, a Maine staple, must keep records of their transactions and pay the state 1.5 cents per pound ($1.50 per 100 pounds) of the fruit sold each season. All real and tangible personal property located in the state of Maine is taxable unless specifically exempted by statute. The administration of property taxes is handled by the local assessor in incorporated cities and towns, while property taxes in the unorganized territories are handled by the State Tax Assessor.
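The rates quoted above lend themselves to a quick worked illustration. The sketch below is our own (constant names, functions, and the sample quantities are invented for the example; it is not an official tax calculator):

```python
# Rates as quoted in the text above (illustrative only).
BLUEBERRY_TAX_PER_LB = 0.015   # $0.015 per pound, i.e. $1.50 per 100 pounds
GENERAL_SALES_TAX = 0.055      # general sales tax, 5.5 percent
LODGING_PREPARED_FOOD = 0.09   # lodging and prepared food, nine percent
SHORT_TERM_AUTO_RENTAL = 0.10  # short-term auto rentals, ten percent

def blueberry_tax(pounds_sold: float) -> float:
    """Seasonal tax owed by a commercial blueberry seller."""
    return round(pounds_sold * BLUEBERRY_TAX_PER_LB, 2)

def sales_tax(amount: float, rate: float = GENERAL_SALES_TAX) -> float:
    """Sales tax on a purchase at the given rate."""
    return round(amount * rate, 2)

# A seller moving 100,000 lbs of blueberries owes $1,500.00.
print(blueberry_tax(100_000))                  # 1500.0
# A $200 hotel stay is taxed at the 9% lodging rate.
print(sales_tax(200, LODGING_PREPARED_FOOD))   # 18.0
```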
Shipbuilding
Maine has a long-standing tradition of being home to many shipbuilding companies. In the 18th and 19th centuries, Maine was home to many shipyards that produced wooden sailing ships. The main function of these ships was to transport either cargos or passengers overseas. One of these yards was located in Pennellville Historic District in what is now Brunswick, Maine. This yard, owned by the Pennell family, was typical of the many family-owned shipbuilding companies of the time period. Other such examples of shipbuilding families were the Skolfields and the Morses. During the 18th and 19th centuries, wooden shipbuilding of this sort made up a sizable portion of the economy.
Transportation
Airports
Maine receives passenger jet service at its two largest airports, the Portland International Jetport in Portland, and the Bangor International Airport in Bangor. Both are served daily by many major airlines to destinations such as New York, Atlanta, and Orlando. Essential Air Service also subsidizes service to a number of smaller airports in Maine, bringing small turboprop aircraft to regional airports such as the Augusta State Airport, Hancock County-Bar Harbor Airport, Knox County Regional Airport, and the Northern Maine Regional Airport at Presque Isle. These airports are served by regional providers such as Cape Air with Cessna 402s, and CommutAir with Embraer ERJ 145 aircraft.
Many smaller airports are scattered throughout Maine, serving only general aviation traffic. The Eastport Municipal Airport, for example, is a city-owned public-use airport with 1,200 general aviation aircraft operations each year from single-engine and ultralight aircraft.
Highways
Interstate 95 (I-95) travels through Maine, as well as its easterly branch I-295 and spurs I-195, I-395 and the unsigned I-495 (the Falmouth Spur). In addition, U.S. Route 1 (US 1) starts in Fort Kent and travels to Florida. The eastern section of US 2 runs from Houlton, near the New Brunswick, Canada border, to Rouses Point, New York, at US 11. US 2A connects Old Town and Orono, primarily serving the University of Maine campus. US 201 and US 202 also pass through the state. US 2, Maine State Route 6 (SR 6), and SR 9 are often used by truckers and other motorists from the Maritime Provinces en route to other destinations in the United States or as a shortcut to Central Canada.
Rail
Passenger
The Downeaster passenger train, operated by Amtrak, provides passenger service between Brunswick and Boston's North Station, with stops in Freeport, Portland, Old Orchard Beach, Saco, and Wells. The Downeaster makes five daily trips.
Freight
Freight service throughout the state is provided by a handful of regional and shortline carriers: Pan Am Railways (formerly known as Guilford Rail System), which operates the former Boston & Maine and Maine Central railroads; St. Lawrence and Atlantic Railroad; Maine Eastern Railroad; Central Maine and Quebec Railway; and New Brunswick Southern Railway.
Law and government
The Maine Constitution structures Maine's state government, composed of three co-equal branches—the executive, legislative, and judicial branches. The state of Maine also has three Constitutional Officers (the Secretary of State, the State Treasurer, and the State Attorney General) and one Statutory Officer (the State Auditor).
The legislative branch is the Maine Legislature, a bicameral body composed of the Maine House of Representatives, with 151 members, and the Maine Senate, with 35 members. The Legislature is charged with introducing and passing laws.
The executive branch is responsible for the execution of the laws created by the Legislature and is headed by the Governor of Maine (currently Janet Mills). The Governor is elected every four years; no individual may serve more than two consecutive terms in this office. The current attorney general of Maine is Aaron Frey. As with other state legislatures, the Maine Legislature can by a two-thirds majority vote from both the House and Senate override a gubernatorial veto. Maine is one of seven states that do not have a lieutenant governor.
The judicial branch is responsible for interpreting state laws. The highest court of the state is the Maine Supreme Judicial Court. The lower courts are the District Court, Superior Court and Probate Court. All judges except for probate judges serve full-time, are nominated by the Governor and confirmed by the Legislature for terms of seven years. Probate judges serve part-time and are elected by the voters of each county for four-year terms.
In a 2020 study, Maine was ranked as the 14th easiest state for citizens to vote in.
Counties
Maine is divided into political jurisdictions designated as counties. Since 1860 there have been 16 counties in the state.
Politics
State and local politics
In state general elections, Maine voters tend to accept independent and third-party candidates more frequently than most states. Maine has had two independent governors recently (James B. Longley, 1975–1979 and current U.S. Senator Angus King, 1995–2003). Maine state politicians, Democrats and Republicans alike, are noted for having more moderate views than many in the national wings of their respective parties.
Maine is an alcoholic beverage control state.
On May 6, 2009, Maine became the fifth state to legalize same-sex marriage; however, the law was repealed by voters on November 3, 2009. On November 6, 2012, Maine, along with Maryland and Washington, became the first states to legalize same-sex marriage at the ballot box.
Federal politics
In the 1930s, Maine was one of very few states which retained Republican sentiments. In the 1936 presidential election, Franklin D. Roosevelt received the electoral votes of every state other than Maine and Vermont; these were the only two states in the nation that never voted for Roosevelt in any of his presidential campaigns, though Maine was closely fought in 1940 and 1944. In the 1960s, Maine began to lean toward the Democrats, especially in presidential elections. In 1968, Hubert Humphrey became just the second Democrat in half a century to carry Maine, perhaps because of the presence of his running mate, Maine Senator Edmund Muskie, although the state voted Republican in every presidential election in the 1970s and 1980s.
Since 1969, two of Maine's four electoral votes have been awarded based on the winner of the statewide election; the other two go to the highest vote-getter in each of the state's two congressional districts. Every other state except Nebraska gives all its electoral votes to the candidate who wins the popular vote in the state at large, without regard to performance within districts. Maine split its electoral vote for the first time in 2016, with Donald Trump's strong showing in the more rural central and northern Maine allowing him to capture one of the state's four votes in the Electoral College.
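Maine's allocation rule can be stated compactly: two at-large electors to the statewide winner, plus one elector to the winner of each congressional district. The following sketch illustrates that rule (the candidate names and vote tallies are invented; this is an illustration of the method described above, not real election data):

```python
from collections import Counter

def allocate_maine_electors(statewide: dict, districts: list) -> Counter:
    """Allocate Maine's four electoral votes.

    statewide: candidate -> statewide vote total.
    districts: one candidate -> votes dict per congressional district.
    """
    electors = Counter()
    electors[max(statewide, key=statewide.get)] += 2  # two at-large electors
    for district in districts:                        # one elector per district
        electors[max(district, key=district.get)] += 1
    return electors

# A 2016-style split: candidate B wins statewide and the 1st district,
# while candidate A carries the more rural 2nd district.
result = allocate_maine_electors(
    {"A": 335, "B": 357},
    [{"A": 155, "B": 212}, {"A": 180, "B": 145}],
)
print(dict(result))  # {'B': 3, 'A': 1}
```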
Ross Perot achieved a great deal of success in Maine in the presidential elections of 1992 and 1996. In 1992, as an independent candidate, Perot came in second to Democrat Bill Clinton, despite the long-time presence of the Bush family summer home in Kennebunkport. In 1996, as the nominee of the Reform Party, Perot did better in Maine than in any other state.
Maine voted for Democrat Bill Clinton twice, Al Gore in 2000, John Kerry in 2004, and Barack Obama in 2008 and 2012. In 2016, Republican Donald Trump won one of Maine's electoral votes, with Democratic opponent Hillary Clinton winning the other three. Although Democrats have mostly carried the state in presidential elections in recent years, Republicans have largely maintained their control of the state's U.S. Senate seats, with Edmund Muskie, William Hathaway and George J. Mitchell being the only Maine Democrats serving in the U.S. Senate in the past fifty years.
In the 2010 midterm elections, Republicans made major gains in Maine. They captured the governor's office as well as majorities in both chambers of the state legislature for the first time since the early 1970s. However, in the 2012 elections Democrats managed to recapture both houses of Maine Legislature.
Maine's U.S. senators are Republican Susan Collins and Independent Angus King. The governor is Democrat Janet Mills. The state's two members of the United States House of Representatives are Democrats Chellie Pingree and Jared Golden.
Maine is the first state to have introduced ranked-choice voting in federal elections.
Municipalities
Organized municipalities
An organized municipality has a form of elected local government which administers and provides local services, keeps records, collects licensing fees, and can pass locally binding ordinances, among other responsibilities of self-government. The governmental format of most organized towns and plantations is the town meeting, while the format of most cities is the council-manager form. The organized municipalities of Maine consist of 23 cities, 431 towns, and 34 plantations. Collectively these 488 organized municipalities cover less than half of the state's territory. Maine also has three Reservations: Indian Island, Indian Township Reservation, and Pleasant Point Indian Reservation.
The largest municipality in Maine, by population, is the city of Portland (pop. 66,318).
The smallest city by population is Eastport (pop. 1,331).
The largest town by population is Brunswick (pop. 20,278).
The smallest town by population is Frye Island, a resort town which reported zero year-round population in the 2000 Census; one plantation, Glenwood Plantation, also reported a permanent population of zero.
In the 2000 census, the smallest town aside from Frye Island was Centerville with a population of 26, but since that census, Centerville voted to disincorporate and therefore is no longer a town. The next smallest town with a population listed in that census is Beddington (pop. 50 at the 2010 census).
The largest municipality by land area is the town of Allagash.
The smallest municipality by land area is Monhegan Island. The smallest municipality by area that is not an island is Randolph.
Unorganized territory
Unorganized territory (UT) has no local government. Administration, services, licensing, and ordinances are handled by the state government as well as by the respective county governments that have townships within their bounds. The unorganized territory of Maine consists of more than 400 townships (towns are incorporated, townships are unincorporated), plus many coastal islands that do not lie within any municipal bounds. The UT land area is slightly over half the entire area of the State of Maine. Year-round residents in the UT number approximately 9,000 (about 1.3% of the state's total population), with many more people staying there only seasonally. Only four of Maine's sixteen counties (Androscoggin, Cumberland, Waldo and York) are entirely incorporated, although a few others are nearly so, and most of the unincorporated area is in the vast and sparsely populated Great North Woods of Maine.
Most populous cities and towns
The most populous cities and towns as of the Census Bureau's 2017 estimates were (population in parentheses):
Portland (66,882)
Lewiston (36,221)
Bangor (31,903)
South Portland (25,483)
Auburn (23,033)
Biddeford (21,488)
Sanford (21,028)
Brunswick (20,278)
Saco (19,485)
Scarborough (18,919)
Westbrook (18,730)
Augusta (18,594)
Throughout Maine, many municipalities, although each a separate governmental entity, form portions of a much larger population base. There are many such population clusters throughout Maine; some examples from the municipalities in the above listing are:
Portland, South Portland, Cape Elizabeth, Westbrook, Scarborough, and Falmouth
Lewiston and Auburn
Bangor, Orono, Brewer, Old Town, and Hampden
Biddeford, Saco and Old Orchard Beach
Brunswick and Topsham
Waterville, Winslow, Fairfield, and Oakland
Presque Isle and Caribou
Education
There are thirty institutions of higher learning in Maine. These include the University of Maine, which is the oldest, largest, and only research university in the state. UMaine was founded in 1865 and is the state's only land grant and sea grant college. Located in the town of Orono, it is the flagship of the University of Maine System, which also has campuses in Augusta, Farmington, Fort Kent, Machias, and Presque Isle.
Bowdoin College is a liberal arts college founded in 1794 in Brunswick, making it the oldest institution of higher learning in the state. Colby College in Waterville was founded in 1813, making it the second-oldest college in Maine. Bates College in Lewiston was founded in 1855, making it the third-oldest institution in the state and the oldest coeducational college in New England. The three colleges collectively form the Colby-Bates-Bowdoin Consortium and are ranked among the best colleges in the United States, often placing in the top 10% of all liberal arts colleges.
Maine's per-student public expenditure for elementary and secondary schools was 21st in the nation in 2012, at $12,344.
The collegiate system of Maine also includes numerous baccalaureate colleges such as: the Maine Maritime Academy (MMA), Unity College, and Thomas College. There is only one medical school in the state, (University of New England's College of Osteopathic Medicine) and only one law school (The University of Maine School of Law).
Private schools in Maine are funded independently of the state and its municipalities. Private schools are less common than public schools. A large number of private elementary schools with under 20 students exist, but most private high schools in Maine can be described as "semi-private".
Culture
Agriculture
Maine was a center of agriculture before it achieved statehood. Prior to colonization, Wabanaki nations farmed large crops of corn and other produce in southern Maine. The state is a major producer of potatoes, wild blueberries, apples, maple syrup and sweet corn. Dairy products and chicken's eggs are other major industries.
Maine has many vegetable farms and other small, diversified farms. In the 1960s and 1970s, the book "Living the Good Life" by Helen Nearing and Scott Nearing caused many young people to move to Maine and engage in small-scale farming and homesteading. These back-to-the-land migrants increased the population of some counties.
Maine has a smaller number of commodity farms and confined animal feeding operations.
Maine is home to the Maine Organic Farmers and Gardeners Association and had 535 certified organic farms in 2019.
Food
Since the 1980s, the state has gained a reputation for its local food and restaurant meals. Portland was named Bon Appétit magazine's Restaurant City of the Year in 2018. The same year, HealthIQ.com named Maine the third most vegan state.
Sports teams
Professional
Maine Celtics, basketball, NBA G League
Portland Sea Dogs, minor league baseball, Double-A Northeast
Maine Mariners, ice hockey, ECHL
Non-professional
Portland Phoenix FC, soccer, Premier Development League
Maine Roller Derby, roller derby, Women's Flat Track Derby Association
NCAA
Maine Black Bears
State symbols
Adapted from the Maine facts site.
State berry: Wild blueberry
State bird: Black-capped chickadee
State cat: Maine Coon
State dessert: Blueberry pie made with wild Maine blueberries
State fish: Land-locked salmon
State flower: White pine cone and tassel
State fossil: Pertica quadrifaria
State gemstone: Tourmaline
State herb: Wintergreen
State insect: European honey bee
State mammal: Moose
State crustacean: Lobster
State soft drink: Moxie
State soil: Chesuncook soil series
State song: "State of Maine Song"
State treat: Whoopie pie
State tree: Eastern White Pine
State vessel: Arctic exploration schooner Bowdoin
State motto: Dirigo ("I lead")
People from Maine
A citizen of Maine is known as a "Mainer", though the term is often reserved for those whose roots in Maine go back at least three generations. The term "Downeaster" may be applied to residents of the northeast coast of the state. The term "Mainiac" is considered by some to be derogatory, but is embraced with pride by others, and is used for a variety of organizations and for events such as the YMCA Mainiac Sprint Triathlon & Duathlon.
See also
Index of Maine-related articles
Outline of Maine
References
Notes
Citations
External links
State government
Maine government
Maine Office of Tourism Search for tourism-related businesses
Visit Maine (agriculture) Maine fairs, festivals, etc.—Agricultural Dept.
U.S. government
Maine State Guide, from the Library of Congress
U.S. EIA Energy Profile for Maine—economic, environmental and energy data
U.S. Geological Survey Real-time, geographic, and other scientific resources of Maine
U.S. Dept. of Agriculture Maine State Facts—agricultural
U.S. Census Bureau Quick facts on Maine
Portland Magazine Editorial on Maine news, events, and people
Information
Maine Historical Society
Old USGS maps of Maine.
1860 Map of Maine by Mitchell.
1876 Panoramic Birdseye View of Portland by Warner at LOC.
States of the United States
New England states
Northeastern United States
States and territories established in 1820
States of the East Coast of the United States
1820 establishments in the United States
Contiguous United States
Montana

Montana is a state in the Mountain West subregion of the Western United States. It is bordered by Idaho to the west; North Dakota and South Dakota to the east; Wyoming to the south; and by the Canadian provinces of Alberta, British Columbia, and Saskatchewan to the north. It is the fourth-largest state by area, the seventh-least populous state, and the third-least densely populated state. Its state capital is Helena. The western half of Montana contains numerous mountain ranges, while the eastern half is characterized by western prairie terrain and badlands, with smaller mountain ranges found throughout the state. In all, 77 named ranges are part of the Rocky Mountains.
Montana has no official nickname but several unofficial ones, most notably "Big Sky Country", "The Treasure State", "Land of the Shining Mountains", and "The Last Best Place". The economy is primarily based on agriculture, including ranching and cereal grain farming. Other significant economic resources include oil, gas, coal, mining, and lumber. The health care, service, and government sectors are also significant to the state's economy. Montana's fastest-growing sector is tourism; nearly 13 million annual tourists visit Glacier National Park, Yellowstone National Park, Beartooth Highway, Flathead Lake, Big Sky Resort, and other attractions.
Etymology
The name Montana comes from the Spanish word montaña, which in turn comes from the Latin word montanea, meaning "mountain" or more broadly "mountainous country". Montaña del Norte was the name given by early Spanish explorers to the entire mountainous region of the west. The name Montana was added in 1863 to a bill by the United States House Committee on Territories (chaired at the time by James Ashley of Ohio) for the territory that would become Idaho Territory.
The name was changed by representatives Henry Wilson (Massachusetts) and Benjamin F. Harding (Oregon), who complained Montana had "no meaning". When Ashley presented a bill to establish a temporary government in 1864 for a new territory to be carved out of Idaho, he again chose Montana Territory. This time, Rep. Samuel Cox, also of Ohio, objected to the name. Cox complained the name was a misnomer given most of the territory was not mountainous and a Native American name would be more appropriate than a Spanish one. Other names such as Shoshone were suggested, but the Committee on Territories decided that they had discretion to choose the name, so the original name of Montana was adopted.
History
Various indigenous peoples lived in the territory of the present-day state of Montana for thousands of years. Historic tribes encountered by Europeans and settlers from the United States included the Crow in the south-central area, the Cheyenne in the southeast, the Blackfeet, Assiniboine, and Gros Ventres in the central and north-central area, and the Kootenai and Salish in the west. The smaller Pend d'Oreille and Kalispel tribes lived near Flathead Lake and the western mountains, respectively. A part of southeastern Montana was used as a corridor between the Crows and the related Hidatsas in North Dakota.
As part of the Missouri River watershed, all of the land in Montana east of the Continental Divide was part of the Louisiana Purchase in 1803. Subsequent to and particularly in the decades following the Lewis and Clark Expedition, European, Canadian and American traders operated a fur trade, trading with indigenous peoples, in both eastern and western portions of what would become Montana. Though the increased interaction between fur traders and indigenous peoples frequently proved to be a profitable partnership, conflicts broke out when indigenous interests were threatened, such as the conflict between American trappers and the Blackfeet. Indigenous peoples in the region were also decimated by diseases introduced by fur traders to which they had no immunity. The trading post Fort Raymond (1807–1811) was constructed in Crow Indian country in 1807. Until the Oregon Treaty of 1846, land west of the continental divide was disputed between the British and U.S. governments and was known as the Oregon Country. The first permanent settlement by Euro-Americans in what today is Montana was St. Mary's, established in 1841 near present-day Stevensville. In 1847, Fort Benton was built as the uppermost fur-trading post on the Missouri River. In the 1850s, settlers began moving into the Beaverhead and Big Hole valleys from the Oregon Trail and into the Clark's Fork valley.
The first gold discovered in Montana was at Gold Creek near present-day Garrison in 1852. Gold rushes to the region commenced in earnest starting in 1862. A series of major mineral discoveries in the western part of the state found gold, silver, copper, lead, and coal (and later oil) which attracted tens of thousands of miners to the area. The richest of all gold placer diggings was discovered at Alder Gulch, where the town of Virginia City was established. Other rich placer deposits were found at Last Chance Gulch, where the city of Helena now stands, Confederate Gulch, Silver Bow, Emigrant Gulch, and Cooke City. Gold output between 1862 and 1876 reached $144 million, after which silver became even more important. The largest mining operations were at Butte, with important silver deposits and expansive copper deposits.
Montana territory
Before the creation of Montana Territory (1864–1889), areas within present-day Montana were part of the Oregon Territory (1848–1859), Washington Territory (1853–1863), Idaho Territory (1863–1864), and Dakota Territory (1861–1864). Montana Territory became one of the territories of the United States on May 26, 1864. The first territorial capital was located at Bannack. Sidney Edgerton served as the first territorial governor. The capital moved to Virginia City in 1865 and to Helena in 1875. In 1870, the non-Indian population of the Montana Territory was 20,595. The Montana Historical Society, founded on February 2, 1865, in Virginia City, is the oldest such institution west of the Mississippi (excluding Louisiana). In 1869 and 1870 respectively, the Cook–Folsom–Peterson and the Washburn–Langford–Doane Expeditions were launched from Helena into the Upper Yellowstone region. The extraordinary discoveries and reports from these expeditions led to the creation of Yellowstone National Park in 1872.
Conflicts
As settlers began populating Montana from the 1850s through the 1870s, disputes with Native Americans ensued, primarily over land ownership and control. In 1855, Washington Territorial Governor Isaac Stevens negotiated the Hellgate treaty between the United States government and the Salish, Pend d'Oreille, and Kootenai people of western Montana, which established boundaries for the tribal nations. The treaty was ratified in 1859. While the treaty established what later became the Flathead Indian Reservation, trouble with interpreters and confusion over the terms of the treaty led Whites to believe the Bitterroot Valley was opened to settlement, but the tribal nations disputed those provisions. The Salish remained in the Bitterroot Valley until 1891.
The first U.S. Army post established in Montana was Camp Cooke in 1866, on the Missouri River, to protect steamboat traffic to Fort Benton. More than a dozen additional military outposts were established in the state. Pressure over land ownership and control increased due to discoveries of gold in various parts of Montana and surrounding states. Major battles occurred in Montana during Red Cloud's War, the Great Sioux War of 1876, and the Nez Perce War and in conflicts with Piegan Blackfeet. The most notable were the Marias Massacre (1870), Battle of the Little Bighorn (1876), Battle of the Big Hole (1877), and Battle of Bear Paw (1877). The last recorded conflict in Montana between the U.S. Army and Native Americans occurred in 1887 during the Battle of Crow Agency in the Big Horn country. Indian survivors who had signed treaties were generally required to move onto reservations.
Simultaneously with these conflicts, bison, a keystone species and the primary protein source that Native people had survived on for many centuries, were being destroyed. Experts estimate that around 13 million bison roamed Montana in 1870. In 1875, General Philip Sheridan pleaded to a joint session of Congress to authorize the slaughtering of bison herds to deprive the Indians of their source of food. By 1884, commercial hunting had brought bison to the verge of extinction; only about 325 bison remained in the entire United States.
Cattle ranching
Cattle ranching has been central to Montana's history and economy since Johnny Grant began wintering cattle in the Deer Lodge Valley in the 1850s and traded cattle fattened in fertile Montana valleys with emigrants on the Oregon Trail. Nelson Story brought the first Texas Longhorn cattle into the territory in 1866. Granville Stuart, Samuel Hauser, and Andrew J. Davis started a major open-range cattle operation in Fergus County in 1879. The Grant-Kohrs Ranch National Historic Site in Deer Lodge is maintained today as a link to the ranching style of the late 19th century. Operated by the National Park Service, it is a working ranch.
Railroads
Tracks of the Northern Pacific Railroad (NPR) reached Montana from the west in 1881 and from the east in 1882. However, the railroad played a major role in sparking tensions with Native American tribes in the 1870s. Jay Cooke, the NPR president, launched major surveys into the Yellowstone valley in 1871, 1872, and 1873, which were challenged forcefully by the Sioux under chief Sitting Bull. These clashes, in part, contributed to the Panic of 1873, a financial crisis that delayed the construction of the railroad into Montana. Surveys in 1874, 1875, and 1876 helped spark the Great Sioux War of 1876. The transcontinental NPR was completed on September 8, 1883, at Gold Creek.
In 1881, the Utah and Northern Railway, a branch line of the Union Pacific, completed a narrow-gauge line from northern Utah to Butte. A number of smaller spur lines operated in Montana from 1881 into the 20th century, including the Oregon Short Line, Montana Railroad, and Milwaukee Road.
Tracks of the Great Northern Railroad (GNR) reached eastern Montana in 1887, and when they reached the northern Rocky Mountains in 1890, the GNR became a significant promoter of tourism to the Glacier National Park region. The transcontinental GNR was completed on January 6, 1893, at Scenic, Washington, and is known as the Hi-Line, being the northernmost transcontinental rail line in the United States.
Statehood
Under Territorial Governor Thomas Meagher, Montanans held a constitutional convention in 1866 in a failed bid for statehood. A second constitutional convention held in Helena in 1884 produced a constitution ratified 3:1 by Montana citizens in November 1884. For political reasons, Congress did not approve Montana statehood until February 1889 and President Grover Cleveland signed an omnibus bill granting statehood to Montana, North Dakota, South Dakota, and Washington once the appropriate state constitutions were crafted. In July 1889, Montanans convened their third constitutional convention and produced a constitution accepted by the people and the federal government. On November 8, 1889, President Benjamin Harrison proclaimed Montana the union's 41st state. The first state governor was Joseph K. Toole. In the 1880s, Helena (the state capital) had more millionaires per capita than any other United States city.
Homesteading
The Homestead Act of 1862 provided free land to settlers who could claim and "prove up" federal land in the Midwest and western United States. Montana did not see a large influx of immigrants from this act because 160 acres were usually insufficient to support a family in the arid territory. The first homestead claim under the act in Montana was made by David Carpenter near Helena in 1868. The first claim by a woman was made near Warm Springs Creek by Gwenllian Evans, the daughter of Deer Lodge Montana pioneer, Morgan Evans. By 1880, farms were in the more verdant valleys of central and western Montana, but few were on the eastern plains.
The Desert Land Act of 1877 was passed to allow settlement of arid lands in the west; land was allotted to settlers for a fee of $.25 per acre and a promise to irrigate it. After three years, a fee of one dollar per acre would be paid and the settler would own the land. This act brought mostly cattle and sheep ranchers into Montana, many of whom grazed their herds on the Montana prairie for three years, did little to irrigate the land and then abandoned it without paying the final fees. Some farmers came with the arrival of the Great Northern and Northern Pacific Railroads throughout the 1880s and 1890s, though in relatively small numbers.
In the early 1900s, James J. Hill of the Great Northern began to promote settlement in the Montana prairie to fill his trains with settlers and goods. Other railroads followed suit. In 1902, the Reclamation Act was passed, allowing irrigation projects to be built in Montana's eastern river valleys. In 1909, Congress passed the Enlarged Homestead Act, which expanded the amount of free land available per family, and in 1912 reduced the time to "prove up" on a claim to three years. In 1916, the Stock-Raising Homestead Act allowed homesteads of 640 acres in areas unsuitable for irrigation. This combination of advertising and changes in the Homestead Act drew tens of thousands of homesteaders, lured by free land, with World War I bringing particularly high wheat prices. In addition, Montana was going through a temporary period of higher-than-average precipitation. Homesteaders arriving in this period were known as "Honyockers", or "scissorbills". Though the word "honyocker", possibly derived from the ethnic slur "hunyak", was applied in a derisive manner at homesteaders as being "greenhorns", "new at his business", or "unprepared", most of these new settlers had farming experience, though many did not.
However, farmers faced a number of problems. Massive debt was one. Also, most settlers were from wetter regions and were unprepared for the dry climate, lack of trees, and scarce water resources. In addition, the small homesteads allotted were unsuited to the environment. Weather and agricultural conditions are much harsher and drier west of the 100th meridian. Then the droughts of 1917–1921 proved devastating. Many people left, and half the banks in the state went bankrupt as a result of providing mortgages that could not be repaid. As a result, farm sizes increased while the number of farms decreased.
By 1910, homesteaders filed claims on over five million acres, and by 1923, over 93 million acres were farmed. In 1910, the Great Falls land office alone had more than a thousand homestead filings per month, and at the peak of 1917–1918 it had 14,000 new homesteads each year. Significant drops occurred following the drought in 1919.
Montana and World War I
As World War I broke out, Jeannette Rankin, the first woman in the United States to be a member of Congress, voted against the United States' declaration of war. Her actions were widely criticized in Montana, where support for the war and patriotism were strong. In 1917–18, due to a miscalculation of Montana's population, about 40,000 Montanans, 10% of the state's population, volunteered or were drafted into the armed forces. This represented a manpower contribution to the war 25% higher than that of any other state on a per capita basis. Around 1,500 Montanans died as a result of the war and 2,437 were wounded, also higher than any other state on a per capita basis. Montana's Remount station in Miles City provided 10,000 cavalry horses for the war, more than any other Army post in the country. The war created a boom for Montana mining, lumber, and farming interests, as demand for war materials and food increased.
In June 1917, the U.S. Congress passed the Espionage Act of 1917, which was extended by the Sedition Act of 1918. In February 1918, the Montana legislature had passed the Montana Sedition Act, which was a model for the federal version. In combination, these laws criminalized criticism of the U.S. government, military, or symbols through speech or other means. The Montana Act led to the arrest of more than 200 individuals and the conviction of 78, mostly of German or Austrian descent. More than 40 spent time in prison. In May 2006, then-Governor Brian Schweitzer posthumously issued full pardons for all those convicted of violating the Montana Sedition Act.
The Montanans who opposed U.S. entry into the war included immigrant groups of German and Irish heritage, as well as pacifist Anabaptist people such as the Hutterites and Mennonites, many of whom were also of Germanic heritage. In turn, pro-War groups formed, such as the Montana Council of Defense, created by Governor Samuel V. Stewart and local "loyalty committees".
War sentiment was complicated by labor issues. The Anaconda Copper Company, which was at its historic peak of copper production, was an extremely powerful force in Montana, but it also faced criticism and opposition from socialist newspapers and unions struggling to make gains for their members. In Butte, a multiethnic community with a significant European immigrant population, labor unions, particularly the newly formed Metal Mine Workers' Union, opposed the war on the grounds that it mostly profited large lumber and mining interests. In the wake of ramped-up mine production and the Speculator Mine disaster in June 1917, Industrial Workers of the World organizer Frank Little arrived in Butte to organize miners. He gave some speeches with inflammatory antiwar rhetoric. On August 1, 1917, he was dragged from his boarding house by masked vigilantes and hanged from a railroad trestle in what was widely considered a lynching. Little's murder and the strikes that followed resulted in the National Guard being sent to Butte to restore order. Overall, anti-German and antilabor sentiment increased and created a movement that led to the passage of the Montana Sedition Act the following February. In addition, the Council of Defense was made a state agency with the power to prosecute and punish individuals deemed in violation of the Act. The council also passed rules limiting public gatherings and prohibiting the speaking of German in public.
In the wake of the legislative action in 1918, emotions rose. U.S. Attorney Burton K. Wheeler and several district court judges who hesitated to prosecute or convict people brought up on charges were strongly criticized. Wheeler was brought before the Council of Defense, though he avoided formal proceedings, and a district court judge from Forsyth was impeached. Burnings of German-language books and several near-hangings occurred. The prohibition on speaking German remained in effect into the early 1920s. Complicating the wartime struggles, the 1918 influenza epidemic claimed the lives of more than 5,000 Montanans. The suppression of civil liberties that occurred led some historians to dub this period "Montana's Agony".
Depression era
An economic depression began in Montana after World War I and lasted through the Great Depression until the beginning of World War II, causing great hardship for farmers, ranchers, and miners. The wheat farms in eastern Montana made the state a major producer; the wheat had a relatively high protein content and thus commanded premium prices.
Montana and World War II
By the time the U.S. entered World War II on December 8, 1941, many Montanans had enlisted in the military to escape the poor national economy of the previous decade. Another 40,000-plus Montanans entered the armed forces in the first year following the declaration of war, and more than 57,000 joined up before the war ended. These numbers constituted about ten percent of the state's population, and Montana again contributed one of the highest numbers of soldiers per capita of any state. Many Native Americans were among those who served, including soldiers from the Crow Nation who became Code Talkers. At least 1,500 Montanans died in the war. Montana was also the training ground for the First Special Service Force or "Devil's Brigade", a joint U.S.-Canadian commando-style force that trained at Fort William Henry Harrison to gain experience in mountainous and winter conditions before deployment. Air bases were built in Great Falls, Lewistown, Cut Bank, and Glasgow, some of which were used as staging areas to prepare planes to be sent to Allied forces in the Soviet Union. During the war, about 30 Japanese Fu-Go balloon bombs were documented to have landed in Montana, though no casualties or major forest fires were attributed to them.
In 1940, Jeannette Rankin was again elected to Congress. In 1941, as she had in 1917, she voted against the United States' declaration of war after the Japanese attack on Pearl Harbor. Hers was the only vote against the war, and in the wake of public outcry over her vote, Rankin required police protection for a time. Other pacifists tended to be those from "peace churches" who generally opposed war. Many individuals claiming conscientious objector status from throughout the U.S. were sent to Montana during the war as smokejumpers and for other forest fire-fighting duties.
In 1942, the US Army established Camp Rimini near Helena for the purpose of training sled dogs in winter weather.
Other military
During World War II, the planned battleship USS Montana was named in honor of the state but was never completed, making Montana the only one of the first 48 states never to have a completed battleship named for it. Alaska and Hawaii have both had nuclear submarines named after them, leaving Montana as the only state without a modern naval vessel named in its honor. However, in August 2007, Senator Jon Tester asked that a submarine be christened USS Montana, and on September 3, 2015, Secretary of the Navy Ray Mabus announced that the Virginia-class attack submarine SSN-794 would become the second commissioned warship to bear the name.
Cold War Montana
In the post-World War II Cold War era, Montana hosted the U.S. Air Force Military Air Transport Service beginning in 1947 for airlift training in C-54 Skymasters; in 1953, Strategic Air Command air and missile forces were based at Malmstrom Air Force Base in Great Falls. The base also hosted the 29th Fighter Interceptor Squadron, Air Defense Command, from 1953 to 1968. In December 1959, Malmstrom AFB was selected as the home of the new Minuteman I intercontinental ballistic missile. The first operational missiles were in place and ready in early 1962. In late 1962, missiles assigned to the 341st Strategic Missile Wing played a major role in the Cuban Missile Crisis. When the Soviets removed their missiles from Cuba, President John F. Kennedy said the Soviets backed down because they knew he had an "ace in the hole", referring directly to the Minuteman missiles in Montana. Montana eventually became home to the largest ICBM field in the U.S.
Geography
Montana is one of the eight Mountain States, located in the north of the region known as the Western United States. It borders North Dakota and South Dakota to the east. Wyoming is to the south, Idaho is to the west and southwest, and the Canadian provinces of British Columbia, Alberta, and Saskatchewan are to the north, making it the only state to border three Canadian provinces.
Montana is slightly larger than Japan. It is the fourth-largest state in the United States after Alaska, Texas, and California, and the largest landlocked state.
Topography
The state's topography is roughly defined by the Continental Divide, which splits much of the state into distinct eastern and western regions. Most of Montana's hundred or more named mountain ranges are in the state's western half, most of which is geologically and geographically part of the northern Rocky Mountains. The Absaroka and Beartooth ranges in the state's south-central part are technically part of the Central Rocky Mountains. The Rocky Mountain Front is a significant feature in the state's north-central portion, and isolated island ranges that interrupt the prairie landscape are common in the central and eastern parts of the state. About 60 percent of the state is prairie, part of the northern Great Plains.
The Bitterroot Mountains—one of the longest continuous ranges in the Rocky Mountain chain from Alaska to Mexico—along with smaller ranges, including the Coeur d'Alene Mountains and the Cabinet Mountains, divide the state from Idaho. The southern third of the Bitterroot range blends into the Continental Divide. Other major mountain ranges west of the divide include the Cabinet Mountains, the Anaconda Range, the Missions, the Garnet Range, the Sapphire Mountains, and the Flint Creek Range.
The divide's northern section, where the mountains rapidly give way to prairie, is part of the Rocky Mountain Front. The front is most pronounced in the Lewis Range, located primarily in Glacier National Park. Due to the configuration of mountain ranges in Glacier National Park, the Northern Divide (which begins in Alaska's Seward Peninsula) crosses this region and turns east in Montana at Triple Divide Peak. It causes the Waterton, Belly, and Saint Mary rivers to flow north into Alberta, Canada. There they join the Saskatchewan River, which ultimately empties into Hudson Bay.
East of the divide, several roughly parallel ranges cover the state's southern part, including the Gravelly Range, Madison Range, Gallatin Range, Absaroka Mountains, and Beartooth Mountains. The Beartooth Plateau is the largest continuous high-elevation landmass in the continental United States. It contains the state's highest point, Granite Peak. North of these ranges are the Big Belt Mountains, Bridger Mountains, Tobacco Roots, and several island ranges, including the Crazy Mountains and Little Belt Mountains.
Between many mountain ranges are several rich river valleys. The Big Hole Valley, Bitterroot Valley, Gallatin Valley, Flathead Valley, and Paradise Valley have extensive agricultural resources and multiple opportunities for tourism and recreation.
East and north of this transition zone are the expansive and sparsely populated Northern Plains, with tableland prairies, smaller island mountain ranges, and badlands. The isolated island ranges east of the Divide include the Bear Paw Mountains, Bull Mountains, Castle Mountains, Crazy Mountains, Highwood Mountains, Judith Mountains, Little Belt Mountains, Little Rocky Mountains, the Pryor Mountains, Little Snowy Mountains, Big Snowy Mountains, Sweet Grass Hills, and—in the state's southeastern corner near Ekalaka—the Long Pines. Many of these isolated eastern ranges were created about 120 to 66 million years ago when magma welling up from the interior cracked and bowed the earth's surface here.
The area east of the divide in the state's north-central portion is known for the Missouri Breaks and other significant rock formations. Several buttes south of Great Falls are major landmarks: Cascade, Crown, Square, and Shaw Buttes. Known as laccoliths, they formed when igneous rock protruded through cracks in the sedimentary rock. The underlying surface consists of sandstone and shale. Surface soils in the area are highly diverse and greatly affected by the local geology, whether glaciated plain, intermountain basin, mountain foothills, or tableland. Foothill regions are often covered in weathered stone or broken slate, or consist of uncovered bare rock (usually igneous, quartzite, sandstone, or shale). The soil of intermountain basins usually consists of clay, gravel, sand, silt, and volcanic ash, much of it laid down by lakes which covered the region during the Oligocene, 33 to 23 million years ago. Tablelands are often topped with argillite gravel and weathered quartzite, occasionally underlain by shale. The glaciated plains are generally covered in clay, gravel, sand, and silt left by the proglacial Lake Great Falls or by moraines or gravel-covered former lake basins left by the Wisconsin glaciation 85,000 to 11,000 years ago. Farther east, areas such as Makoshika State Park near Glendive and Medicine Rocks State Park near Ekalaka contain some of the most scenic badlands regions in the state.
The Hell Creek Formation in Northeast Montana is a major source of dinosaur fossils. Paleontologist Jack Horner of the Museum of the Rockies in Bozeman brought this formation to the world's attention with several major finds.
Rivers, lakes and reservoirs
Montana has thousands of named rivers and creeks, many of which are known for "blue-ribbon" trout fishing. Montana's water resources provide for recreation, hydropower, crop and forage irrigation, mining, and water for human consumption.
Montana is one of few geographic areas in the world whose rivers form parts of three major watersheds (i.e. where two continental divides intersect). Its rivers feed the Pacific Ocean, the Gulf of Mexico, and Hudson Bay. The watersheds divide at Triple Divide Peak in Glacier National Park. If Hudson Bay is considered part of the Arctic Ocean, Triple Divide Peak is the only place on Earth with drainage to three different oceans.
Pacific Ocean drainage basin
All waters in Montana west of the divide flow into the Columbia River. The Clark Fork of the Columbia (not to be confused with the Clarks Fork of the Yellowstone River) rises near Butte and flows northwest to Missoula, where it is joined by the Blackfoot River and Bitterroot River. Farther downstream, it is joined by the Flathead River before entering Idaho near Lake Pend Oreille. The Pend Oreille River forms the outflow of Lake Pend Oreille and joins the Columbia River, which flows to the Pacific Ocean, making the Clark Fork/Pend Oreille (considered a single river system) the longest river in the Rocky Mountains. The Clark Fork discharges the greatest volume of water of any river exiting the state. The Kootenai River in northwest Montana is another major tributary of the Columbia.
Gulf of Mexico drainage basin
East of the divide, the Missouri River, which is formed by the confluence of the Jefferson, Madison, and Gallatin Rivers near Three Forks, flows due north through the west-central part of the state to Great Falls. From this point, it flows generally east through fairly flat agricultural land and the Missouri Breaks to Fort Peck Reservoir. The stretch of river between Fort Benton and the Fred Robinson Bridge at the western boundary of Fort Peck Reservoir was designated a National Wild and Scenic River in 1976. The Missouri enters North Dakota near Fort Union, having drained more than half the land area of Montana. Nearly one-third of the Missouri River in Montana lies behind 10 dams: Toston, Canyon Ferry, Hauser, Holter, Black Eagle, Rainbow, Cochrane, Ryan, Morony, and Fort Peck. Other major Montana tributaries of the Missouri include the Smith, Milk, Marias, Judith, and Musselshell Rivers. Montana also claims the disputed title of possessing the world's shortest river, the Roe River, just outside Great Falls. Through the Missouri, these rivers ultimately join the Mississippi River and flow into the Gulf of Mexico.
Hell Roaring Creek begins in southern Montana; when combined with the Red Rock, Beaverhead, Jefferson, Missouri, and Mississippi Rivers, it forms the longest river system in North America and the fourth-longest in the world.
The Yellowstone River rises on the Continental Divide near Younts Peak in Wyoming's Teton Wilderness. It flows north through Yellowstone National Park, enters Montana near Gardiner, and passes through the Paradise Valley to Livingston. It then flows northeasterly across the state through Billings, Miles City, Glendive, and Sidney. The Yellowstone joins the Missouri in North Dakota just east of Fort Union. It is the longest undammed, free-flowing river in the contiguous United States, and drains about a quarter of Montana. Major tributaries of the Yellowstone include the Boulder, Stillwater, Clarks Fork, Bighorn, Tongue, and Powder Rivers.
Hudson Bay drainage basin
The Northern Divide turns east in Montana at Triple Divide Peak, causing the Waterton, Belly, and Saint Mary Rivers to flow north into Alberta. There they join the Saskatchewan River, which ultimately empties into Hudson Bay.
Lakes and reservoirs
Montana has some 3,000 named lakes and reservoirs, including Flathead Lake, the largest natural freshwater lake in the western United States. Other major lakes include Whitefish Lake in the Flathead Valley and Lake McDonald and St. Mary Lake in Glacier National Park. The largest reservoir in the state is Fort Peck Reservoir on the Missouri River, which is contained by the second-largest earthen dam and largest hydraulically filled dam in the world. Other major reservoirs include Hungry Horse on the Flathead River; Lake Koocanusa on the Kootenai River; Lake Elwell on the Marias River; Clark Canyon on the Beaverhead River; Yellowtail on the Bighorn River; and Canyon Ferry, Hauser, Holter, Rainbow, and Black Eagle on the Missouri River.
Flora and fauna
Vegetation of the state includes lodgepole pine, ponderosa pine, Douglas fir, larch, spruce, aspen, birch, red cedar, hemlock, ash, alder, Rocky Mountain maple, and cottonwood trees. Forests cover about 25% of the state. Flowers native to Montana include asters, bitterroots, daisies, lupins, poppies, primroses, columbine, lilies, orchids, and dryads. Several species of sagebrush and cactus and many species of grasses are common. Many species of mushrooms and lichens are also found in the state.
Montana is home to diverse fauna, including 14 amphibian, 90 fish, 117 mammal, 20 reptile, and 427 bird species. Additionally, more than 10,000 invertebrate species are present, including 180 mollusks and 30 crustaceans. Montana has the largest grizzly bear population in the lower 48 states. Montana hosts five federally endangered species (black-footed ferret, whooping crane, least tern, pallid sturgeon, and white sturgeon) and seven threatened species, including the grizzly bear, Canadian lynx, and bull trout. Since reintroduction, the gray wolf population has stabilized at about 900 animals, and they have been delisted as endangered. The Montana Department of Fish, Wildlife and Parks manages fishing and hunting seasons for at least 17 species of game fish, including seven species of trout, walleye, and smallmouth bass, and at least 29 species of game birds and animals, including ring-necked pheasant, grey partridge, elk, pronghorn antelope, mule deer, white-tailed deer, gray wolf, and bighorn sheep.
Protected lands
Montana contains Glacier National Park, "The Crown of the Continent"; and parts of Yellowstone National Park, including three of the park's five entrances. Other federally recognized sites include the Little Bighorn National Monument, Bighorn Canyon National Recreation Area, and Big Hole National Battlefield. The Bison Range is managed by the Confederated Salish and Kootenai Tribes and the American Prairie is owned and operated by a non-profit organization.
Federal and state agencies administer approximately 35 percent of Montana's land. The U.S. Department of Agriculture Forest Service administers forest land in ten National Forests, which include 12 separate wilderness areas that are part of the National Wilderness Preservation System established by the Wilderness Act of 1964. The U.S. Department of the Interior Bureau of Land Management controls extensive federal land. The U.S. Department of the Interior Fish and Wildlife Service administers 1.1 million acres of National Wildlife Refuges and waterfowl production areas in Montana. The U.S. Department of the Interior Bureau of Reclamation administers additional land and water surface in the state. The Montana Department of Fish, Wildlife and Parks operates state parks and access points on the state's rivers and lakes. The Montana Department of Natural Resources and Conservation manages School Trust Land ceded by the federal government under the Land Ordinance of 1785 to the state in 1889 when Montana was granted statehood. These lands are managed by the state for the benefit of public schools and institutions in the state.
Areas managed by the National Park Service include:
Big Hole National Battlefield near Wisdom
Bighorn Canyon National Recreation Area near Fort Smith
Glacier National Park
Grant-Kohrs Ranch National Historic Site at Deer Lodge
Lewis and Clark National Historic Trail
Little Bighorn Battlefield National Monument near Crow Agency
Nez Perce National Historical Park
Yellowstone National Park
Climate
Montana is a large state with considerable variation in geography, topography, and elevation, and the climate is equally varied. The state spans from below the 45th parallel (the line equidistant between the equator and North Pole) to the 49th parallel, and elevations range from low river valleys to high mountain peaks. The western half is mountainous, interrupted by numerous large valleys. Eastern Montana comprises plains and badlands, broken by hills and isolated mountain ranges, and has a semiarid, continental climate (Köppen climate classification BSk). The Continental Divide has a considerable effect on the climate, as it restricts the flow of warmer air from the Pacific from moving east, and drier continental air from moving west. The area west of the divide has a modified northern Pacific Coast climate, with milder winters, cooler summers, less wind, and a longer growing season. Low clouds and fog often form in the valleys west of the divide in winter, but this is rarely seen in the east.
Average daytime temperatures vary widely between January and July, and the variation in geography leads to great variation in temperature. The highest summer temperature on record was observed at Glendive on July 20, 1893, and at Medicine Lake on July 5, 1937. Throughout the state, summer nights are generally cool and pleasant. Extreme hot weather is less common at higher elevations. Snowfall has been recorded in all months of the year in the more mountainous areas of central and western Montana, though it is rare in July and August.
The coldest temperature on record for Montana is also the coldest temperature for the contiguous United States; it was recorded on January 20, 1954, at a gold mining camp near Rogers Pass. Temperatures vary greatly on cold nights; Helena, to the southeast, recorded a far milder low on the same date. Winter cold spells are usually the result of cold continental air coming south from Canada. The front is often well defined, causing a large temperature drop in a 24-hour period. Conversely, air flow from the southwest results in "chinooks". These steady winds can suddenly warm parts of Montana, especially areas just to the east of the mountains, where elevated temperatures sometimes persist for 10 days or longer.
Loma is the site of the most extreme recorded temperature change in a 24-hour period in the United States. On January 15, 1972, a chinook wind blew in and the temperature rose dramatically.
Average annual precipitation varies greatly across the state. The mountain ranges block the moist Pacific air, holding moisture in the western valleys and creating rain shadows to the east. Heron, in the west, receives the most precipitation in the state. On the eastern (leeward) side of a mountain range, the valleys are much drier; Lonepine and Deer Lodge are among the driest. The mountains receive far more precipitation, with the Grinnell Glacier in Glacier National Park among the wettest locations. An area southwest of Belfry recorded the state's lowest average precipitation over a 16-year period. Most of the larger cities receive moderate snowfall each year, while mountain ranges can accumulate much deeper snowpacks during a winter. Heavy snowstorms may occur from September through May, though most snow falls from November to March.
The climate has become warmer in Montana and continues to do so. The glaciers in Glacier National Park have receded and are predicted to melt away completely in a few decades. Many Montana cities set heat records during July 2007, the hottest month ever recorded in Montana. Winters are warmer, too, and have fewer cold spells. Previously, these cold spells had killed off bark beetles, but these are now attacking the forests of western Montana. The warmer winters in the region have allowed various species to expand their ranges and proliferate. The combination of warmer weather, attack by beetles, and mismanagement has led to a substantial increase in the severity of forest fires in Montana. According to a study done for the U.S. Environmental Protection Agency by the Harvard School of Engineering and Applied Science, parts of Montana will experience a 200% increase in area burned by wildfires and an 80% increase in related air pollution.
The table below lists average temperatures for the warmest and coldest month for Montana's seven largest cities. The coldest month varies between December and January depending on location, although figures are similar throughout.
Antipodes
Montana is one of only two contiguous states (along with Colorado) that are antipodal to land. The Kerguelen Islands are antipodal to the Montana–Saskatchewan–Alberta border. No towns are precisely antipodal to Kerguelen, though Chester and Rudyard are close.
Cities and towns
Montana has 56 counties and a total of 364 "places" as defined by the United States Census Bureau; these comprise 129 incorporated places and 235 census-designated places. The incorporated places are made up of 52 cities, 75 towns, and two consolidated city-counties.
Montana has one city, Billings, with a population over 100,000; and three cities with populations over 50,000: Missoula, Great Falls, and Bozeman. The state also has five Micropolitan Statistical Areas, centered on Bozeman, Butte, Helena, Kalispell, and Havre.
Collectively all of these areas (excluding Havre) are known informally as the "big seven", as they are consistently the seven largest communities in the state (their rank order in terms of population is Billings, Missoula, Great Falls, Bozeman, Butte, Helena and Kalispell, according to the 2010 U.S. Census). Based on 2013 census numbers, they contain 35 percent of Montana's population, and the counties in which they are located are home to 62 percent of the state's population.
The geographic center of population of Montana is in sparsely populated Meagher County, in the town of White Sulphur Springs.
Demographics
The United States Census Bureau states that the population of Montana was 1,085,407 on April 1, 2020, a 9.7% increase since the 2010 United States census. The 2010 census put Montana's population at 989,415. During the first decade of the new century, growth was mainly concentrated in Montana's seven largest counties, with the highest percentage growth in Gallatin County, which had a 32% increase in its population from 2000 to 2010. The city with the largest percentage growth was Kalispell, at 40.1%, and the city with the largest increase in actual residents was Billings, with an increase in population of 14,323 from 2000 to 2010.
On January 3, 2012, the Census and Economic Information Center (CEIC) at the Montana Department of Commerce estimated Montana had hit the one million population mark sometime between November and December 2011.
According to the 2020 census, 88.9% of the population was White (87.8% non-Hispanic White), 6.7% American Indian and Alaska Native, 4.1% Hispanics and Latinos of any race, 0.9% Asian, 0.6% Black or African American, 0.1% Native Hawaiian and other Pacific Islander, and 2.8% from two or more races. The largest European ancestry groups in Montana as of 2010 were: German (27.0%), Irish (14.8%), English (12.6%), Norwegian (10.9%), French (4.7%), and Italian (3.4%).
Intrastate demographics
Montana has a larger Native American population, both numerically and as a percentage, than most U.S. states. Ranked 45th in overall population (by the 2010 Census), it is 19th in Native American population; Native people are 6.5% of the state's population, the sixth-highest percentage of all fifty states. Of Montana's 56 counties, Native Americans constitute a majority in three: Big Horn, Glacier, and Roosevelt. Other counties with large Native American populations include Blaine, Cascade, Hill, Missoula, and Yellowstone Counties. The state's Native American population grew by 27.9% between 1980 and 1990 (at a time when Montana's entire population rose 1.6%), and by 18.5 percent between 2000 and 2010.
As of 2009, almost two-thirds of Native Americans in the state live in urban areas. Of Montana's 20 largest cities, Polson (15.7%), Havre (13.0%), Great Falls (5.0%), Billings (4.4%), and Anaconda (3.1%) had the greatest percentages of Native American residents in 2010. Billings (4,619), Great Falls (2,942), Missoula (1,838), Havre (1,210), and Polson (706) have the most Native Americans living there. The state's seven reservations include more than 12 distinct Native American ethnolinguistic groups.
While the largest European-American population in Montana overall is German (which may also include Austrian and Swiss, among other groups), pockets of significant Scandinavian ancestry are prevalent in some of the farming-dominated northern and eastern prairie regions, parallel to nearby regions of North Dakota and Minnesota. Farmers of Irish, Scots, and English roots also settled in Montana. The historically mining-oriented communities of western Montana such as Butte have a wider range of European-American ethnicity; Finns, Eastern Europeans and especially Irish settlers left an indelible mark on the area, as well as people originally from British mining regions such as Cornwall, Devon, and Wales. The nearby city of Helena, also founded as a mining camp, had a similar mix in addition to a small Chinatown. Many of Montana's historic logging communities originally attracted people of Scottish, Scandinavian, Slavic, English, and Scots-Irish descent.
The Hutterites, an Anabaptist sect originally from Switzerland, settled here, and today Montana is second only to South Dakota in U.S. Hutterite population, with several colonies spread across the state. Beginning in the mid-1990s, the state also had an influx of Amish, who moved to Montana from the increasingly urbanized areas of Ohio and Pennsylvania.
Montana's Hispanic population is concentrated in the Billings area in south-central Montana, where many of Montana's Mexican-Americans have been in the state for generations. Great Falls has the highest percentage of African-Americans in its population, although Billings has more African-American residents than Great Falls.
The Chinese in Montana, while a low percentage today, have been an important presence. About 2,000–3,000 Chinese miners were in the mining areas of Montana by 1870, and 2,500 in 1890. However, public opinion grew increasingly negative toward them in the 1890s, and nearly half of the state's Asian population left the state by 1900. Today, the Missoula area has a large Hmong population and the nearly 3,000 Montanans who claim Filipino ancestry are the largest Asian-American group in the state.
In the 2015 United States census estimates, Montana had the second-highest percentage of U.S. military veterans of any state. Only Alaska was higher, with roughly 14 percent of its over-18 population being veterans, compared with roughly 12 percent in Montana.
Native Americans
About 66,000 people of Native American heritage live in Montana. Stemming from multiple treaties and federal legislation, including the Indian Appropriations Act (1851), the Dawes Act (1887), and the Indian Reorganization Act (1934), seven Indian reservations, encompassing 11 federally recognized tribal nations, were created in Montana. A 12th nation, the Little Shell Chippewa, is a "landless" people headquartered in Great Falls; it is recognized by the state of Montana, but not by the U.S. government. The Blackfeet nation is headquartered on the Blackfeet Indian Reservation (1851) in Browning, Crow on the Crow Indian Reservation (1868) in Crow Agency, Confederated Salish and Kootenai and Pend d'Oreille on the Flathead Indian Reservation (1855) in Pablo, Northern Cheyenne on the Northern Cheyenne Indian Reservation (1884) at Lame Deer, Assiniboine and Gros Ventre on the Fort Belknap Indian Reservation (1888) in Fort Belknap Agency, Assiniboine and Sioux on the Fort Peck Indian Reservation (1888) at Poplar, and Chippewa-Cree on the Rocky Boy's Indian Reservation (1916) near Box Elder. Approximately 63% of all Native people live off the reservations, concentrated in the larger Montana cities, with the largest concentration of urban Indians in Great Falls. The state also has a small Métis population, and 1990 census data indicated that people from as many as 275 different tribes lived in Montana.
Montana's Constitution specifically reads, "the state recognizes the distinct and unique cultural heritage of the American Indians and is committed in its educational goals to the preservation of their cultural integrity." It is the only state in the U.S. with such a constitutional mandate. The Indian Education for All Act was passed in 1999 to provide funding for this mandate and ensure implementation. It mandates that all schools teach American Indian history, culture, and heritage from preschool through college. For kindergarten through 12th-grade students, an "Indian Education for All" curriculum from the Montana Office of Public Instruction is available free to all schools. The state was sued in 2004 because of lack of funding, and the state has increased its support of the program. South Dakota passed similar legislation in 2007, and Wisconsin was working to strengthen its own program based on this model—and the current practices of Montana's schools. Each Indian reservation in the state has a fully accredited tribal college. The University of Montana "was the first to establish dual admission agreements with all of the tribal colleges and as such it was the first institution in the nation to actively facilitate student transfer from the tribal colleges."
Birth data
Note: Births in the table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number.
Since 2016, data for births of White Hispanic origin are not collected separately but are included in a single Hispanic group; persons of Hispanic origin may be of any race.
Languages
English is the official language in the state of Montana, as it is in many U.S. states. According to the 2000 Census, 94.8% of the population aged five and older speak English at home. Spanish is the language next most commonly spoken at home, with about 13,040 Spanish-language speakers in the state (1.4% of the population) in 2011. Also, 15,438 (1.7% of the state population) were speakers of Indo-European languages other than English or Spanish, 10,154 (1.1%) were speakers of a Native American language, and 4,052 (0.4%) were speakers of an Asian or Pacific Islander language. Other languages spoken in Montana (as of 2013) include Assiniboine (about 150 speakers in Montana and Canada), Blackfoot (about 100 speakers), Cheyenne (about 1,700 speakers), Plains Cree (about 100 speakers), Crow (about 3,000 speakers), Dakota (about 18,800 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota), German Hutterite (about 5,600 speakers), Gros Ventre (about 10 speakers), Kalispel-Pend d'Oreille (about 64 speakers), Kutenai (about six speakers), and Lakota (about 6,000 speakers in Minnesota, Montana, Nebraska, North Dakota, South Dakota). The United States Department of Education estimated in 2009 that 5,274 students in Montana spoke a language at home other than English. These included a Native American language (64%), German (4%), Spanish (3%), Russian (1%), and Chinese (less than 0.5%).
Religion
According to the Pew Forum, the religious affiliations of the people of Montana are: Protestant 47%, Catholic 23%, LDS (Mormon) 5%, Jehovah's Witness 2%, Buddhist 1%, Jewish 0.5%, Muslim 0.5%, Hindu 0.5% and nonreligious at 20%.
The largest denominations in Montana as of 2010 were the Catholic Church with 127,612 adherents, the Church of Jesus Christ of Latter-day Saints with 46,484 adherents, Evangelical Lutheran Church in America with 38,665 adherents, and nondenominational Evangelical Protestant with 27,370 adherents.
Economy
The U.S. Bureau of Economic Analysis estimated Montana's state product at $51.91 billion (47th in the nation) and per capita personal income at $41,280 (37th in the nation).
Total employment: 371,239
Total employer establishments: 38,720
Montana is a relative hub of beer microbrewing, ranking third in the nation in number of craft breweries per capita in 2011. Significant industries exist for lumber and mineral extraction; the state's resources include gold, coal, silver, talc, and vermiculite. Ecotaxes on resource extraction are numerous. A 1974 state severance tax on coal (which varied from 20 to 30%) was upheld by the Supreme Court of the United States in Commonwealth Edison Co. v. Montana, 453 U.S. 609 (1981).
Tourism is also important to the economy, with more than ten million visitors a year to Glacier National Park, Flathead Lake, the Missouri River headwaters, the site of the Battle of Little Bighorn, and three of the five entrances to Yellowstone National Park.
Montana's personal income tax contains seven brackets, with rates ranging from 1.0 to 6.9 percent. Montana has no sales tax, and household goods are exempt from property taxes. However, property taxes are assessed on livestock, farm machinery, heavy equipment, automobiles, trucks, and business equipment. The amount of property tax owed is not determined solely by the property's value. The property's value is multiplied by a tax rate, set by the Montana Legislature, to determine its taxable value. The taxable value is then multiplied by the mill levy established by the various taxing jurisdictions (city and county government, school districts, and others).
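The two-step calculation described above (market value times tax rate gives taxable value; taxable value times the mill levy gives the tax owed) can be sketched as follows. A mill is one-thousandth of a dollar per dollar of taxable value; the class tax rate and mill figures below are invented placeholders, not actual Montana values.

```python
def property_tax(market_value: float, tax_rate: float, mills: float) -> float:
    """Illustrative Montana-style property tax arithmetic.

    market_value: assessed market value of the property
    tax_rate:     class tax rate set by the legislature (e.g. 0.0135 = 1.35%)
    mills:        total mill levy from all taxing jurisdictions
    """
    taxable_value = market_value * tax_rate       # step 1: taxable value
    return taxable_value * mills / 1000.0         # step 2: apply mill levy

# Placeholder example: a $300,000 property, 1.35% class rate, 650 total mills
tax = property_tax(300_000, 0.0135, 650)
print(round(tax, 2))  # 2632.5
```

With these placeholder figures, the taxable value is $4,050 and the tax owed is $2,632.50; the actual rates and levies vary by property class and jurisdiction.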
In the 1980s, the absence of a sales tax became economically harmful to communities dependent on the state's tourism industry, as revenue from residents' income and property taxes fell far short of covering the impact of non-resident travel, especially road repair. In 1985, the Montana Legislature passed a law allowing towns with fewer than 5,500 residents and unincorporated communities with fewer than 2,500 to levy a resort tax if more than half the community's income came from tourism. The resort tax is a sales tax that applies to hotels, motels, and other lodging and camping facilities; restaurants, fast-food stores, and other food service establishments; taverns, bars, night clubs, lounges, and other public establishments that serve alcohol; and destination ski resorts or other destination recreational facilities.
It also applies to "luxuries", defined by law as any item normally sold to the public or to transient visitors or tourists, excluding unprepared or unserved food, medicine, medical supplies and services, appliances, hardware supplies and tools, and other necessities of life. Approximately 12.2 million non-residents visited Montana in 2018, when the resident population was estimated at 1.06 million. This disproportion between the residents who pay taxes and the non-residents who use state-funded services and infrastructure makes Montana's resort tax crucial for maintaining heavily used roads and highways and for protecting and preserving state parks.
The state's unemployment rate is 3.5%.
Education
Colleges and universities
The Montana University System consists of:
Dawson Community College
Flathead Valley Community College
Miles Community College
Montana State University (Bozeman)
Gallatin College Montana State University (Bozeman)
Montana State University Billings (Billings)
City College at Montana State University Billings (Billings)
Montana State University-Northern (Havre)
Great Falls College Montana State University (Great Falls)
University of Montana (Missoula)
Missoula College University of Montana (Missoula)
Montana Tech of the University of Montana (Butte)
Highlands College of Montana Tech (Butte)
University of Montana Western (Dillon)
Helena College University of Montana (Helena)
Bitterroot College University of Montana (Hamilton)
Tribal colleges in Montana include:
Aaniiih Nakoda College (Harlem)
Blackfeet Community College (Browning)
Chief Dull Knife College (Lame Deer)
Fort Peck Community College (Poplar)
Little Big Horn College (Crow Agency)
Salish Kootenai College (Pablo)
Stone Child College (Box Elder)
Four private colleges are in Montana:
Carroll College
Rocky Mountain College
University of Providence
Apollos University
Schools
The Montana Territory was formed on April 26, 1864, when the U.S. Congress passed the Organic Act. Schools began forming in the area before it was officially a territory, as families settled there. The first schools were subscription schools that typically met in the teacher's home. The first formal school on record was at Fort Owen in the Bitterroot Valley in 1862; its students were Indian children and the children of Fort Owen employees. The first school term started in early winter and lasted only until February 28, with classes taught by a Mr. Robinson. Another early subscription school was started by Thomas Dimsdale in Virginia City in 1863; students there were charged $1.75 per week.

The Montana Territorial Legislative Assembly had its inaugural meeting in 1864. The first legislature authorized counties to levy taxes for schools, which laid the foundation for public schooling. Madison County was the first to take advantage of the newly authorized taxes, forming the first public school in Virginia City in 1866. The first school year was scheduled to begin in January 1866, but severe weather postponed its opening until March; the school year then ran through the summer and did not end until August 17.

One of the first teachers at the school was Sarah Raymond, a 25-year-old woman who had traveled to Virginia City by wagon train in 1865. To become certified, Raymond took a test in her home and paid a $6 fee in gold dust for a teaching certificate. With the help of an assistant teacher, Mrs. Farley, Raymond was responsible for teaching 50 to 60 students each day out of the 81 enrolled at the school. Raymond was paid $125 per month, and Mrs. Farley $75 per month. No textbooks were used in the school; in their place was an assortment of books brought by various emigrants. Raymond quit teaching the following year, but she later became the Madison County superintendent of schools.
Culture
Many well-known artists, photographers and authors have documented the land, culture and people of Montana in the last 130 years. Painter and sculptor Charles Marion Russell, known as "the cowboy artist", created more than 2,000 paintings of cowboys, Native Americans, and landscapes set in the Western United States and in Alberta, Canada. The C. M. Russell Museum Complex in Great Falls, Montana, houses more than 2,000 Russell artworks, personal objects, and artifacts.
Pioneering feminist author, film-maker, and media personality Mary MacLane attained international fame in 1902 with her memoir of three months in her life in Butte, The Story of Mary MacLane. She referred to Butte throughout the rest of her career and remains a controversial figure there for her mixture of criticism and love for Butte and its people.
Evelyn Cameron, a naturalist and photographer from Terry documented early 20th-century life on the Montana prairie, taking startlingly clear pictures of everything around her: cowboys, sheepherders, weddings, river crossings, freight wagons, people working, badlands, eagles, coyotes and wolves.
Many notable Montana authors have documented or been inspired by life in Montana in both fiction and non-fiction works. Pulitzer Prize winner Wallace Earle Stegner from Great Falls was often called "The Dean of Western Writers". James Willard Schultz ("Apikuni") from Browning is most noted for his prolific stories about Blackfeet life and his contributions to the naming of prominent features in Glacier National Park.
Major cultural events
Montana hosts numerous arts and cultural festivals and events every year. Major events include:
Bozeman was once known as the "Sweet Pea capital of the nation" referencing the prolific edible pea crop. To promote the area and celebrate its prosperity, local business owners began a "Sweet Pea Carnival" that included a parade and queen contest. The annual event lasted from 1906 to 1916. Promoters used the inedible but fragrant and colorful sweet pea flower as an emblem of the celebration. In 1977 the "Sweet Pea" concept was revived as an arts festival rather than a harvest celebration, growing into a three-day event that is one of the largest festivals in Montana.
Montana Shakespeare in the Parks has been performing free, live theatrical productions of Shakespeare and other classics throughout Montana and the Northwest region since 1973. The organization is an outreach endeavor that is part of the College of Arts & Architecture at Montana State University, Bozeman. The Montana Shakespeare Company is based in Helena.
Since 1909, the Crow Fair and Rodeo has been held every August in Crow Agency, near Hardin; it is the largest Northern Native American gathering, attracting nearly 45,000 spectators and participants. Since 1952, North American Indian Days has been held every July in Browning.
Lame Deer hosts the annual Northern Cheyenne Powwow.
Sports
Professional sports
There are no major league sports franchises in Montana due to the state's relatively small and dispersed population, but a number of minor league teams play in the state. Baseball is the minor-league sport with the longest heritage in the state and Montana is home to three independent teams, all members of the Pioneer League: the Billings Mustangs, Great Falls Voyagers, and Missoula Osprey.
College sports
All of Montana's four-year colleges and universities field intercollegiate sports teams. The two largest schools, the University of Montana and Montana State University, are members of the Big Sky Conference and have enjoyed a strong athletic rivalry since the early twentieth century. Six of Montana's smaller four-year schools are members of the Frontier Conference. One is a member of the Great Northwest Athletic Conference.
Other sports
A variety of sports are offered at Montana high schools. Montana allows the smallest—"Class C"—high schools to utilize six-man football teams, dramatized in the independent 2002 film The Slaughter Rule.
There are junior ice hockey teams in Montana, three of which are affiliated with the North American 3 Hockey League: the Bozeman Icedogs, Great Falls Americans, and Helena Bighorns.
Olympic competitors
Ski jumping champion and United States Skiing Hall of Fame inductee Casper Oimoen was captain of the U.S. Olympic team at the 1936 Winter Olympics while he was a resident of Anaconda. He placed thirteenth that year, and had previously finished fifth at the 1932 Winter Olympics.
Montana has produced two U.S. champions and Olympic competitors in men's figure skating, both from Great Falls. John Misha Petkevich, who lived and trained in Montana before entering college, competed in the 1968 and 1972 Winter Olympics; Scott Davis competed at the 1994 Winter Olympics.
Missoulian Tommy Moe won Olympic gold and silver medals at the 1994 Winter Olympics in downhill skiing and super G, becoming the first American skier to win two medals at any Winter Olympics.
Eric Bergoust, also of Missoula, won an Olympic gold medal in freestyle aerial skiing at the 1998 Winter Olympics; he also competed in the 1994, 2002, and 2006 Olympics and won 13 World Cup titles.
Sporting achievements
Montanans have been a part of several major sporting achievements:
In 1889, Spokane became the first and only Montana horse to win the Kentucky Derby. For this accomplishment, the horse was admitted to the Montana Cowboy Hall of Fame in 2008.
In 1904, a basketball team of young Native American women from Fort Shaw, undefeated in their previous season, traveled to the Louisiana Purchase Exposition in St. Louis, where they defeated all challenging teams and were declared world champions.
In 1923, the controversial Jack Dempsey vs. Tommy Gibbons fight for the heavyweight boxing championship, won by Dempsey, took place in Shelby.
Outdoor recreation
Montana provides year-round outdoor recreation opportunities for residents and visitors. Hiking, fishing, hunting, watercraft recreation, camping, golf, cycling, horseback riding, and skiing are popular activities.
Fishing and hunting
Montana has been a destination for its world-class trout fisheries since the 1930s. Fly fishing for several species of native and introduced trout in rivers and lakes is popular for both residents and tourists throughout the state. Montana is the home of the Federation of Fly Fishers and hosts many of the organization's annual conclaves. The state has robust recreational lake trout and kokanee salmon fisheries in the west, walleye can be found in many parts of the state, while northern pike, smallmouth and largemouth bass fisheries as well as catfish and paddlefish can be found in the waters of eastern Montana. Robert Redford's 1992 film of Norman Maclean's novel, A River Runs Through It, was filmed in Montana and brought national attention to fly fishing and the state. Fishing makes up a sizeable component of Montana's total tourism economic output: in 2017, nonresidents generated $4.7 billion in economic output, of which $1.3 billion was generated by visitor groups participating in guided fishing experiences.
There are fall bow and general hunting seasons for elk, pronghorn antelope, whitetail deer, and mule deer. A random draw grants a limited number of permits for moose, mountain goats, and bighorn sheep. There is a spring hunting season for black bear, and limited hunting of bison that leave Yellowstone National Park has been allowed. Current law allows both hunters and trappers specified numbers ("limits") of wolves and mountain lions. Trapping of assorted fur-bearing animals is allowed in certain seasons, and many opportunities exist for migratory waterfowl and upland bird hunting. The Rocky Mountain Elk Foundation, which protects wildlife habitat and promotes hunting heritage, was founded in Montana.
Winter sports
Both downhill skiing and cross-country skiing are popular in Montana, which has 15 developed downhill ski areas open to the public, including:
Bear Paw Ski Bowl near Havre
Big Sky Resort in Big Sky
Blacktail Mountain Ski Area near Lakeside
Bridger Bowl Ski Area near Bozeman
Discovery Ski Area near Philipsburg
Great Divide Ski Area near Helena
Lookout Pass Ski and Recreation Area off Interstate 90 at the Montana-Idaho border
Lost Trail Powder Mountain near Darby
Maverick Mountain Ski Area near Dillon
Montana Snowbowl near Missoula
Red Lodge Mountain Resort near Red Lodge
Showdown Ski Area near White Sulphur Springs
Teton Pass Ski Area near Choteau
Turner Mountain Ski Resort near Libby
Whitefish Mountain Resort near Whitefish
Big Sky Resort and Whitefish Mountain Resort are destination resorts, while the remaining areas do not have overnight lodging at the ski area, though several host restaurants and other amenities.
Montana also has millions of acres open to cross-country skiing on nine of its national forests and in Glacier National Park. In addition to cross-country trails at most of the downhill ski areas, there are also 13 private cross-country skiing resorts. Yellowstone National Park also allows cross-country skiing.
Snowmobiling is popular in Montana, which boasts over 4,000 miles of trails and frozen lakes available in winter. There are 24 areas where snowmobile trails are maintained, most also offering ungroomed trails. West Yellowstone offers a large selection of trails and is the primary starting point for snowmobile trips into Yellowstone National Park, where "oversnow" vehicle use is strictly limited, usually to guided tours, and regulations are in considerable flux.
Snow coach tours are offered at Big Sky, Whitefish, West Yellowstone and into Yellowstone National Park. Equestrian skijoring has a niche in Montana, which hosts the World Skijoring Championships in Whitefish as part of the annual Whitefish Winter Carnival.
Health
Montana does not have a Level I trauma center, but it does have Level II trauma centers in Missoula, Billings, and Great Falls. In 2013, AARP The Magazine named the Billings Clinic one of the safest hospitals in the United States.
Montana is ranked as the least obese state in the U.S., at 19.6%, according to the 2014 Gallup Poll.
Montana has the 4th highest suicide rate of any state in the US as of 2021.
Media
As of 2010, Missoula is the 166th largest media market in the United States as ranked by Nielsen Media Research, while Billings is 170th, Great Falls is 190th, the Butte-Bozeman area 191st, and Helena is 206th. There are 25 television stations in Montana, representing each major U.S. network. As of August 2013, there are 527 FCC-licensed FM radio stations broadcasting in Montana, along with 114 AM stations.
During the age of the Copper Kings, each Montana copper company had its own newspaper. This changed in 1959 when Lee Enterprises bought several Montana newspapers. Montana's largest circulating daily city newspapers are the Billings Gazette (circulation 39,405), Great Falls Tribune (26,733), and Missoulian (25,439).
Transportation
Railroads have been an important method of transportation in Montana since the 1880s. Historically, the state was traversed by the main lines of three east–west transcontinental routes: the Milwaukee Road, the Great Northern, and the Northern Pacific. Today, the BNSF Railway is the state's largest railroad, its main transcontinental route incorporating the former Great Northern main line across the state. Montana RailLink, a privately held Class II railroad, operates former Northern Pacific trackage in western Montana.
In addition, Amtrak's Empire Builder train runs through the north of the state, stopping in Libby, Whitefish, West Glacier, Essex, East Glacier Park, Browning, Cut Bank, Shelby, Havre, Malta, Glasgow, and Wolf Point.
Bozeman Yellowstone International Airport is the busiest airport in the state of Montana, surpassing Billings Logan International Airport in the spring of 2013. Montana's other major airports include Missoula International Airport, Great Falls International Airport, Glacier Park International Airport, Helena Regional Airport, Bert Mooney Airport and Yellowstone Airport. Eight smaller communities have airports designated for commercial service under the Essential Air Service program.
Historically, U.S. Route 10 was the primary east–west highway route across Montana, connecting the major cities in the southern half of the state. Still the state's most important east–west travel corridor, the route is today served by Interstate 90 and Interstate 94, which roughly follow the same route as the Northern Pacific. U.S. Routes 2 and 12 and Montana Highway 200 also traverse the entire state from east to west.
Montana's only north–south Interstate Highway is Interstate 15. Other major north–south highways include U.S. Routes 87, 89, 93 and 191.
Montana and South Dakota are the only states to share a land border that is not traversed by a paved road. Highway 212, the primary paved route between the two, passes through the northeast corner of Wyoming between Montana and South Dakota.
Law and government
Constitution
Montana is governed by a constitution. The first constitution was drafted by a constitutional convention in 1889, in preparation for statehood. Ninety percent of its language came from an 1884 constitution which was never acted upon by Congress for national political reasons. The 1889 constitution mimicked the structure of the United States Constitution, as well as outlining almost the same civil and political rights for citizens. However, the 1889 Montana constitution significantly restricted the power of state government: the legislature was much more powerful than the executive branch, and the jurisdiction of the District Courts was very specifically described. Montana voters amended the 1889 constitution 37 times between 1889 and 1972. In 1914, Montana granted women the vote. In 1916, Montana became the first state to elect a woman, Progressive Republican Jeannette Rankin, to Congress.
In 1971, Montana voters approved the call for a state constitutional convention. A new constitution was drafted, which made the legislative and executive branches much more equal in power and which was much less prescriptive in outlining powers, duties, and jurisdictions. The draft included an expanded, more progressive list of civil and political rights, extended these rights to children for the first time, transferred administration of property taxes to the counties from the state, implemented new water rights, eliminated sovereign immunity, and gave the legislature greater power to spend tax revenues. The constitution was narrowly approved, 116,415 to 113,883, and declared ratified on June 20, 1972. Three issues that the constitutional convention was unable to resolve were submitted to voters simultaneously with the proposed constitution. Voters approved the legalization of gambling, a bicameral legislature, and retention of the death penalty.
The 1972 constitution has been amended 31 times as of 2015. Major amendments include establishment of a reclamation trust (funded by taxes on natural resource extraction) to restore mined land (1974); restoration of sovereign immunity, when such immunity has been approved by a two-thirds vote in each house (1974); establishment of a 90-day biennial (rather than annual) legislative session (1974); establishment of a coal tax trust fund, funded by a tax on coal extraction (1976); conversion of the mandatory decennial review of county government into a voluntary one, to be approved or disallowed by residents in each county (1978); conversion of the provision of public assistance from a mandatory civil right to a non-fundamental legislative prerogative (1988); a new constitutional right to hunt and fish (2004); a prohibition on gay marriage (2004); and a prohibition on new taxes on the sale or transfer of real property (2010). In 1992, voters approved a constitutional amendment implementing term limits for certain statewide elected executive branch offices (governor, lieutenant governor, secretary of state, state auditor, attorney general, superintendent of public instruction) and for members of the Montana Legislature. Extensive new constitutional rights for victims of crime were approved in 2016.
The 1972 constitution requires that voters determine every 20 years whether to hold a new constitutional convention. Voters turned down a new convention in 1990 (84 percent no) and again in 2010 (58.6 percent no).
Executive
Montana has three branches of state government: legislative, executive, and judicial. The executive branch is headed by an elected governor. The governor is Greg Gianforte, a Republican elected in 2020. There are also nine other statewide elected offices in the executive branch: Lieutenant Governor, Attorney General, Secretary of State, State Auditor (who also serves as Commissioner of Securities and Insurance), and Superintendent of Public Instruction. There are five public service commissioners, who are elected on a regional basis. (The Public Service Commission's jurisdiction is statewide.)
There are 18 departments and offices which make up the executive branch: Administration; Agriculture; Auditor (securities and insurance); Commerce; Corrections; Environmental Quality; Fish, Wildlife & Parks; Justice; Labor and Industry; Livestock; Military Affairs; Natural Resources and Conservation; Public Health and Human Services; Revenue; State; and Transportation. Elementary and secondary education are overseen by the Office of Public Instruction (led by the elected superintendent of public instruction), in cooperation with the governor-appointed Board of Public Education. Higher education is overseen by a governor-appointed Board of Regents, which in turn appoints a commissioner of higher education. The Office of the Commissioner of Higher Education acts in an executive capacity on behalf of the regents and oversees the state-run Montana University System.
Independent state agencies not within a department or office include the Montana Arts Council, Montana Board of Crime Control, Montana Historical Society, Montana Public Employees Retirement Administration, Commissioner of Political Practices, the Montana Lottery, Office of the State Public Defender, Public Service Commission, the Montana School for the Deaf and Blind, the Montana State Fund (which operates the state's unemployment insurance, worker compensation, and self-insurance operations), the Montana State Library, and the Montana Teachers Retirement System.
Montana is an alcoholic beverage control state. It is an equitable distribution and no-fault divorce state. It is one of five states to have no sales tax.
Legislative
The Montana Legislature is bicameral and consists of the 50-member Montana Senate and the 100-member Montana House of Representatives. The legislature meets in the Montana State Capitol in Helena in odd-numbered years for 90 days, beginning the first weekday of the year. The deadline for a legislator to introduce a general bill is the 40th legislative day. The deadline for a legislator to introduce an appropriations, revenue, or referenda bill is the 62nd legislative day. Senators serve four-year terms, while Representatives serve two-year terms. All members are limited to serving no more than eight years in a single 16-year period.
Judicial
The Courts of Montana are established by the Constitution of Montana. The constitution requires the establishment of a Montana Supreme Court and Montana District Courts, and permits the legislature to establish Justice Courts, City Courts, Municipal Courts, and other inferior courts such as the legislature sees fit to establish.
The Montana Supreme Court is the court of last resort in the Montana court system. The constitution of 1889 provided for the election of no fewer than three Supreme Court justices, and one chief justice. Each court member served a six-year term. The legislature increased the number of justices to five in 1919. The 1972 constitution lengthened the term of office to eight years and established the minimum number of justices at five. It allowed the legislature to increase the number of justices by two, which the legislature did in 1979. The Montana Supreme Court has the authority to declare acts of the legislature and executive unconstitutional under either the Montana or U.S. constitutions. Its decisions may be appealed directly to the U.S. Supreme Court. The clerk of the Supreme Court is also an elected position and serves a six-year term. Neither the justices nor the clerk is term-limited.
Montana District Courts are the courts of general jurisdiction in Montana. There are no intermediate appellate courts. District Courts have jurisdiction primarily over most civil cases, cases involving a monetary claim against the state, felony criminal cases, probate, and cases at law and in equity. When so authorized by the legislature, actions of executive branch agencies may be appealed directly to a District Court. The District Courts also have de novo appellate jurisdiction from inferior courts (city courts, justice courts, and municipal courts), and oversee naturalization proceedings. District Court judges are elected and serve six-year terms. They are not term-limited. There are 22 judicial districts in Montana, served by 56 District Courts and 46 District Court judges. The District Courts suffer from excessive workload, and the legislature has struggled to find a solution to the problem.
Montana Youth Courts were established by the Montana Youth Court Act of 1974. They are overseen by District Court judges. They consist of a chief probation officer, one or more juvenile probation officers, and support staff. Youth Courts have jurisdiction over misdemeanor and felony acts committed by those charged as a juvenile under the law. There is a Youth Court in every judicial district, and decisions of the Youth Court are appealable directly to the Montana Supreme Court.
The Montana Workers' Compensation Court was established by the Montana Workers' Compensation Act in 1975. There is a single Workers' Compensation Court. It has a single judge, appointed by the governor. The Workers' Compensation Court has statewide jurisdiction and holds trials in Billings, Great Falls, Helena, Kalispell, and Missoula. The court hears cases arising under the Montana Workers' Compensation Act and is the court of original jurisdiction for reviews of orders and regulations issued by the Montana Department of Labor and Industry. Decisions of the court are appealable directly to the Montana Supreme Court.
The Montana Water Court was established by the Montana Water Court Act of 1979. The Water Court consists of a chief water judge and four district water judges (Lower Missouri River Basin, Upper Missouri River Basin, Yellowstone River Basin, and Clark Fork River Basin). The court employs 12 permanent special masters. The Montana Judicial Nomination Commission develops short lists of nominees for all five Water Judges, who are then appointed by the Chief justice of the Montana Supreme Court (subject to confirmation by the Montana Senate). The Water Court adjudicates water rights claims under the Montana Water Use Act of 1973 and has statewide jurisdiction. District Courts have the authority to enforce decisions of the Water Court, but only the Montana Supreme Court has the authority to review decisions of the Water Court.
From 1889 to 1909, elections for judicial office in Montana were partisan. Beginning in 1909, these elections became nonpartisan. The Montana Supreme Court struck down the nonpartisan law in 1911 on technical grounds, but a new law was enacted in 1935 which barred political parties from endorsing, making contributions to, or making expenditures on behalf of or against judicial candidates. In 2012, the U.S. Supreme Court struck down Montana's judicial nonpartisan election law. Although candidates must remain nonpartisan, spending by partisan entities is now permitted. Spending on state supreme court races increased sharply to $1.6 million in 2014, and to more than $1.6 million in 2016 (both new records).
Federal offices and courts
The U.S. Constitution provides each state with two senators. Montana's two U.S. senators are Jon Tester (Democrat), who was reelected in 2018, and Steve Daines (Republican), first elected in 2014 and later reelected in 2020. The U.S. Constitution provides each state with a single representative, with additional representatives apportioned based on population. From statehood in 1889 until 1913, Montana was represented in the United States House of Representatives by a single representative, elected at-large. Montana received a second representative in 1913, following the 1910 census and reapportionment. Both members, however, were still elected at-large. Beginning in 1919, Montana moved to district, rather than at-large, elections for its two House members. This created Montana's 1st congressional district in the west and Montana's 2nd congressional district in the east. In the reapportionment following the 1990 census, Montana lost one of its House seats. The remaining seat was again elected at-large. Matt Rosendale is the current officeholder.
In the reapportionment following the 2020 census, Montana regained a House seat, increasing the state's number of representatives in the House to two after a thirty-year break, starting from 2023.
Montana's Senate district is the fourth largest by area, behind Alaska, Texas, and California. The most notorious of Montana's early senators was William A. Clark, a "Copper King" and one of the 50 richest Americans ever. He is well known for having bribed his way into the U.S. Senate. Among Montana's most historically prominent senators are Thomas J. Walsh (serving from 1913 to 1933), who was President-elect Franklin D. Roosevelt's choice for attorney general but died before taking office; Burton K. Wheeler (serving from 1923 to 1947), an oft-mentioned presidential candidate and strong supporter of isolationism; Mike Mansfield, the longest-serving Senate majority leader in U.S. history; Max Baucus (served 1978 to 2014), longest-serving U.S. senator in Montana history, and the senator who shepherded the Patient Protection and Affordable Care Act through the Senate in 2010; and Lee Metcalf (served 1961 to 1978), a pioneer of the environmental movement.
Montana's House district is the largest congressional district in the United States by population, with just over 1,023,000 constituents. It is the second-largest House district by area, after Alaska's at-large congressional district. Of Montana's House delegates, Jeannette Rankin was the first woman to hold national office in the United States when she was elected to the U.S. House of Representatives in 1916. Also notable is Representative (later Senator) Thomas H. Carter, the first Catholic to serve as chairman of the Republican National Committee (from 1892 to 1896).
Federal courts in Montana include the United States District Court for the District of Montana and the United States Bankruptcy Court for the District of Montana. Three former Montana politicians have been named judges on the U.S. District Court: Charles Nelson Pray (who served in the U.S. House of Representatives from 1907 to 1913), James F. Battin (who served in the U.S. House of Representatives from 1961 to 1969), and Paul G. Hatfield (who served as an appointed U.S. Senator in 1978). Brian Morris, who served as an associate justice of the Montana Supreme Court from 2005 to 2013, currently serves as a judge on the court.
Politics
Elections in the state have been historically competitive, particularly for state-level offices. The Democratic Party's strength in the state derives from support among unionized miners and railroad workers, while farmers generally vote Republican.
Montana has a history of voters splitting their tickets and filling elected offices with individuals from both parties. Through the mid-20th century, the state had a tradition of "sending the liberals to Washington and the conservatives to Helena". Between 1988 and 2006, the pattern flipped, with voters more likely to elect conservatives to federal offices. There have also been long-term shifts in party control. From 1968 through 1988, the state was dominated by the Democratic Party, with Democratic governors for a 20-year period, and a Democratic majority of both the national congressional delegation and during many sessions of the state legislature. This pattern shifted, beginning with the 1988 election when Montana elected a Republican governor for the first time since 1964 and sent a Republican to the U.S. Senate for the first time since 1948. This shift continued with the reapportionment of the state's legislative districts that took effect in 1994, when the Republican Party took control of both chambers of the state legislature, consolidating a Republican party dominance that lasted until the 2004 reapportionment produced more swing districts and a brief period of Democratic legislative majorities in the mid-2000s.
Montana has voted for the Republican nominee in all but two presidential elections since 1952. The state last supported a Democrat for president in 1992, when Bill Clinton won a plurality victory. However, since 1889 the state has voted for Democratic governors 60 percent of the time, and Republican governors 40 percent of the time. In the 2008 presidential election, Montana was considered a swing state and was ultimately won by Republican John McCain by a narrow margin of two percent.
At the state level, the pattern of split-ticket voting and divided government holds. Democrats hold one of the state's two U.S. Senate seats with Jon Tester. The lone congressional district has been Republican since 1996, and its Class 2 Senate seat has been held by Republican Steve Daines since 2014. The two chambers of the state's legislature had split party control from 2004 to 2010, when that year's mid-term elections decisively returned both branches to Republican control. The Montana Senate is, as of 2021, controlled by Republicans 31 to 19, and the House of Representatives is currently 67 to 33. Historically, Republicans are strongest in the east, while Democrats are strongest in the west.
Montana has only one representative in the U.S. House, having lost its second district in the 1990 census reapportionment. However, it will regain its second district due to reapportionment following the 2020 census. Montana's at-large congressional district holds the largest population of any district in the country, which means its one member in the House of Representatives represents more people than any other member of the U.S. House (see List of U.S. states by population). Montana's population grew at about the national average during the 2000s, but it failed to regain its second seat in 2010.
In a 2020 study, Montana was ranked as the 21st easiest state for citizens to vote in.
See also
Index of Montana-related articles
Outline of Montana
Timeline of Montana history
Notes
References
Bibliography
Further reading
External links
Census of Montana
General Information About Montana
List of Searchable Databases Produced by Montana State Agencies
Montana Energy Data & Statistics—From the U.S. Department of Energy
Montana Historical Society
Montana Official Travel Information Site
Montana Official Website
Montana State Facts From the U.S. Department of Agriculture
USGS Real-time, Geographic, and Other Scientific Resources of Montana
1889 establishments in the United States
States and territories established in 1889
States of the United States
Western United States
Contiguous United States
Machine translation

Machine translation, sometimes referred to by the abbreviation MT (not to be confused with computer-aided translation, machine-aided human translation or interactive translation), is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another.
On a basic level, MT performs mechanical substitution of words in one language for words in another, but that alone rarely produces a good translation because recognition of whole phrases and their closest counterparts in the target language is needed. Not all words in one language have equivalent words in another language, and many words have more than one meaning.
Solving this problem with corpus statistical and neural techniques is a rapidly-growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies.
Current machine translation software often allows for customization by domain or profession (such as weather reports), improving output by limiting the scope of allowable substitutions. This technique is particularly effective in domains where formal or formulaic language is used. It follows that machine translation of government and legal documents more readily produces usable output than conversation or less standardised text.
Improved output quality can also be achieved by human intervention: for example, some systems are able to translate more accurately if the user has unambiguously identified which words in the text are proper names. With the assistance of these techniques, MT has proven useful as a tool to assist human translators and, in a very limited number of cases, can even produce output that can be used as is (e.g., weather reports).
The progress and potential of machine translation have been much debated through its history. Since the 1950s, a number of scholars, first and most notably Yehoshua Bar-Hillel, have questioned the possibility of achieving fully automatic machine translation of high quality.
History
Origins
The origins of machine translation can be traced back to the work of Al-Kindi, a ninth-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation. The idea of machine translation later appeared in the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol.
The idea of using digital computers for translation of natural languages was proposed as early as 1946 by England's A. D. Booth and, at around the same time, by Warren Weaver of the Rockefeller Foundation. "The memorandum written by Warren Weaver in 1949 is perhaps the single most influential publication in the earliest days of machine translation." Others followed. A demonstration was made in 1954 on the APEXC machine at Birkbeck College (University of London) of a rudimentary translation of English into French. Several papers on the topic were published at the time, and even articles in popular journals (for example an article by Cleave and Zacharov in the September 1955 issue of Wireless World). A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer.
1950s
The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951). A Georgetown University MT research team, led by Professor Michael Zarechnak, followed (1951) with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs popped up in Japan and Russia (1955), and the first MT conference was held in London (1956).
David G. Hays "wrote about computer-assisted language processing as early as 1957" and "was project leader on computational linguistics at Rand from 1955 to 1968."
1960–1975
Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. (1962) and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT (1964). Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research had failed to fulfill expectations, funding was greatly reduced. According to a 1972 report by the Director of Defense Research and Engineering (DDR&E), the feasibility of large-scale MT was reestablished by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict.
The French Textile Institute also used MT to translate abstracts from and into French, English, German and Spanish (1970); Brigham Young University started a project to translate Mormon texts by automated translation (1971).
1975 and beyond
SYSTRAN, which "pioneered the field under contracts from the U.S. government" in the 1960s, was used by Xerox to translate technical manuals (1978). Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation. SYSTRAN was first deployed online in 1988 by Minitel, the online service of the French Postal Service. Various computer-based translation companies were also launched, including Trados (1984), which was the first to develop and market translation memory technology (1989), though this is not the same as MT. The first commercial MT system for Russian / English / German-Ukrainian was developed at Kharkov State University (1991).
By 1998, "for as little as $29.95" one could "buy a program for translating in one direction between English and a major European language of your choice" to run on a PC.
MT on the web started with SYSTRAN offering free translation of small texts (1996) and then providing this via AltaVista Babelfish, which racked up 500,000 requests a day (1997). The second free translation service on the web was Lernout & Hauspie's GlobaLink. Atlantic Magazine wrote in 1998 that "Systran's Babelfish and GlobaLink's Comprende" handled "Don't bank on it" with a "competent performance."
Franz Josef Och (the future head of Translation Development at Google) won DARPA's speed MT competition (2003). More innovations during this time included MOSES, the open-source statistical MT engine (2007), a text/SMS translation service for mobiles in Japan (2008), and a mobile phone with built-in speech-to-speech translation functionality for English, Japanese and Chinese (2009). In 2012, Google announced that Google Translate translates roughly enough text to fill 1 million books in one day.
Translation process
The human translation process may be described as:
Decoding the meaning of the source text; and
Re-encoding this meaning in the target language.
Behind this ostensibly simple procedure lies a complex cognitive operation. To decode the meaning of the source text in its entirety, the translator must interpret and analyse all the features of the text, a process that requires in-depth knowledge of the grammar, semantics, syntax, idioms, etc., of the source language, as well as the culture of its speakers. The translator needs the same in-depth knowledge to re-encode the meaning in the target language.
Therein lies the challenge in machine translation: how to program a computer that will "understand" a text as a person does, and that will "create" a new text in the target language that sounds as if it had been written by a person. Unless aided by a 'knowledge base', MT provides only a general, though imperfect, approximation of the original text, getting the "gist" of it (a process called "gisting"). This is sufficient for many purposes, including making best use of the finite and expensive time of a human translator, reserved for those cases in which total accuracy is indispensable.
Approaches
Machine translation can use a method based on linguistic rules, which means that words will be translated in a linguistic way: the most suitable words of the target language, linguistically speaking, will replace the ones in the source language.
It is often argued that the success of machine translation requires the problem of natural language understanding to be solved first.
Generally, rule-based methods parse a text, usually creating an intermediary, symbolic representation, from which the text in the target language is generated. According to the nature of the intermediary representation, an approach is described as interlingual machine translation or transfer-based machine translation. These methods require extensive lexicons with morphological, syntactic, and semantic information, and large sets of rules.
Given enough data, machine translation programs often work well enough for a native speaker of one language to get the approximate meaning of what is written by the other native speaker. The difficulty is getting enough data of the right kind to support the particular method. For example, the large multilingual corpus of data needed for statistical methods to work is not necessary for the grammar-based methods. But then, the grammar methods need a skilled linguist to carefully design the grammar that they use.
To translate between closely related languages, the technique referred to as rule-based machine translation may be used.
Rule-based
The rule-based machine translation paradigm includes transfer-based machine translation, interlingual machine translation and dictionary-based machine translation paradigms. This type of translation is used mostly in the creation of dictionaries and grammar programs. Unlike other methods, RBMT involves more information about the linguistics of the source and target languages, using the morphological and syntactic rules and semantic analysis of both languages. The basic approach involves linking the structure of the input sentence with the structure of the output sentence using a parser and an analyzer for the source language, a generator for the target language, and a transfer lexicon for the actual translation. RBMT's biggest downfall is that everything must be made explicit: orthographical variation and erroneous input must be made part of the source language analyser in order to cope with it, and lexical selection rules must be written for all instances of ambiguity. Adapting to new domains in itself is not that hard, as the core grammar is the same across domains, and the domain-specific adjustment is limited to lexical selection adjustment.
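The analyser, transfer lexicon, and generator mentioned above can be sketched in a few lines of Python. The tiny English-to-Spanish lexicon, the one-adjective "parser", and the single reordering rule below are invented for illustration; real RBMT systems use full morphological and syntactic analysis, not word lists.

```python
# Toy rule-based (transfer) MT sketch: analyse the source, apply a
# bilingual transfer lexicon, then generate target word order. The
# lexicon and the adjective rule are invented examples, not real data.

TRANSFER_LEXICON = {
    "the": "el", "black": "negro", "cat": "gato", "sleeps": "duerme",
}

def analyse(sentence):
    """Crude 'parser': tokenize and tag adjectives from a closed list."""
    adjectives = {"black"}
    return [(w, "ADJ" if w in adjectives else "WORD")
            for w in sentence.lower().split()]

def transfer(tagged):
    """Look each word up in the bilingual transfer lexicon."""
    return [(TRANSFER_LEXICON.get(w, w), tag) for w, tag in tagged]

def generate(tagged):
    """Target-language syntax rule: adjectives follow the noun in Spanish."""
    out = []
    pending = None
    for word, tag in tagged:
        if tag == "ADJ":
            out.append(None)           # hold the adjective back
            pending = word
        elif out and out[-1] is None:
            out[-1] = word             # the noun takes the held slot...
            out.append(pending)        # ...and the adjective follows it
        else:
            out.append(word)
    return " ".join(out)

print(generate(transfer(analyse("The black cat sleeps"))))
```

The sketch also shows RBMT's stated downfall: every lexical entry and every reordering rule had to be written explicitly by hand.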
Transfer-based machine translation
Transfer-based machine translation is similar to interlingual machine translation in that it creates a translation from an intermediate representation that simulates the meaning of the original sentence. Unlike interlingual MT, it depends partially on the language pair involved in the translation.
Interlingual
Interlingual machine translation is one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, is transformed into an interlingual language, i.e. a "language neutral" representation that is independent of any language. The target language is then generated out of the interlingua. One of the major advantages of this system is that the interlingua becomes more valuable as the number of target languages it can be turned into increases. However, the only interlingual machine translation system that has been made operational at the commercial level is the KANT system (Nyberg and Mitamura, 1992), which is designed to translate Caterpillar Technical English (CTE) into other languages.
Dictionary-based
Machine translation can use a method based on dictionary entries, which means that the words will be translated as they are by a dictionary.
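In its purest form, this is a word-for-word lookup, which the following sketch illustrates; the English-to-French entries are invented glosses, not a real lexicon.

```python
# Dictionary-based MT sketch: each word is replaced by its dictionary
# entry, with no reordering or disambiguation. Entries are illustrative.

DICTIONARY = {"i": "je", "see": "vois", "a": "un", "dog": "chien"}

def word_for_word(sentence):
    # Unknown words pass through untranslated, a common fallback.
    return " ".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

print(word_for_word("I see a dog"))   # word order is copied from the source
```

Even a correct-looking output here is luck: any idiom, inflection, or reordering breaks the approach, which is why it survives mainly inside dictionary and grammar tools rather than as a full translation method.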
Statistical
Statistical machine translation tries to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament, and EUROPARL, the record of the European Parliament. Where such corpora are available, good results can be achieved translating similar texts, but such corpora are still rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. Google used SYSTRAN for several years, but switched to a statistical translation method in October 2007. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train their system; translation accuracy improved. Google Translate and similar statistical translation programs work by detecting patterns in hundreds of millions of documents that have previously been translated by humans and making intelligent guesses based on the findings. Generally, the more human-translated documents available in a given language, the more likely it is that the translation will be of good quality. Newer approaches to statistical machine translation, such as METIS II and PRESEMT, use minimal corpus size and instead focus on derivation of syntactic structure through pattern recognition. With further development, this may allow statistical machine translation to operate off of a monolingual text corpus. SMT's biggest downfalls include its dependence on huge amounts of parallel texts, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors.
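Classical SMT is usually framed as a noisy-channel model: choose the target sentence e maximizing P(f|e)·P(e), where the translation model scores adequacy and the language model scores fluency. The sketch below uses hand-set toy probabilities in place of anything estimated from a corpus.

```python
# Noisy-channel scoring sketch for statistical MT: choose the candidate
# e maximizing P(f|e) * P(e). All probabilities are invented toy
# numbers, not estimates from a real bilingual corpus.

# Translation model P(f | e): adequacy of a candidate given the source.
translation_model = {
    ("la maison", "the house"): 0.7,
    ("la maison", "house the"): 0.7,   # a bag-of-words model can't tell these apart
}

# Language model P(e): fluency of the candidate in the target language.
language_model = {"the house": 0.3, "house the": 0.001}

def best_translation(source, candidates):
    def score(e):
        return (translation_model.get((source, e), 1e-9)
                * language_model.get(e, 1e-9))
    return max(candidates, key=score)

print(best_translation("la maison", ["the house", "house the"]))
```

The two candidates tie on the translation model; it is the monolingual language model that selects the fluent word order, which is exactly why "more human-translated documents" improves quality.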
Example-based
Example-based machine translation (EBMT) approach was proposed by Makoto Nagao in 1984. Example-based machine translation is based on the idea of analogy. In this approach, the corpus that is used is one that contains texts that have already been translated. Given a sentence that is to be translated, sentences from this corpus are selected that contain similar sub-sentential components. The similar sentences are then used to translate the sub-sentential components of the original sentence into the target language, and these phrases are put together to form a complete translation.
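The analogy idea can be sketched directly: find a stored example that matches the input once one fragment is swapped, and make the same swap on the target side. The example pair and fragment glosses below (in the style of Nagao's umbrella example) are invented for illustration.

```python
# Example-based MT sketch: translate by analogy with a stored sentence
# pair, swapping only the fragment that differs. The corpus and the
# English-to-Japanese-romanization glosses are invented examples.

EXAMPLES = {
    "how much is that red umbrella": "ano akai kasa wa ikura desu ka",
}
FRAGMENTS = {"red umbrella": "akai kasa", "blue umbrella": "aoi kasa"}

def translate_by_analogy(sentence):
    for src, tgt in EXAMPLES.items():
        for frag, frag_tgt in FRAGMENTS.items():
            if frag not in src:
                continue
            for new_frag, new_tgt in FRAGMENTS.items():
                # Does swapping this fragment turn the example into the input?
                if src.replace(frag, new_frag) == sentence:
                    return tgt.replace(frag_tgt, new_tgt)
    return None

# A sentence never seen before, translated by analogy with the example.
print(translate_by_analogy("how much is that blue umbrella"))
```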
Hybrid MT
Hybrid machine translation (HMT) leverages the strengths of statistical and rule-based translation methodologies. Several MT organizations claim a hybrid approach that uses both rules and statistics. The approaches differ in a number of ways:
Rules post-processed by statistics: Translations are performed using a rules based engine. Statistics are then used in an attempt to adjust/correct the output from the rules engine.
Statistics guided by rules: Rules are used to pre-process data in an attempt to better guide the statistical engine. Rules are also used to post-process the statistical output to perform functions such as normalization. This approach has a lot more power, flexibility and control when translating. It also provides extensive control over the way in which the content is processed during both pre-translation (e.g. markup of content and non-translatable terms) and post-translation (e.g. post translation corrections and adjustments).
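The first variant above ("rules post-processed by statistics") can be sketched as a two-stage pipeline: a rule engine emits candidates, and a statistical fluency score picks among them. The rules and bigram counts are invented toy data.

```python
# Hybrid MT sketch, "rules post-processed by statistics": a rule engine
# produces candidate outputs, then an n-gram-style score picks the most
# fluent one. The lexicon and bigram counts are invented examples.

def rule_engine(source):
    # A real rules engine would parse and transfer; this one just emits
    # the two orderings its (toy) rules cannot decide between.
    words = {"perro": "dog", "grande": "big"}
    a, b = (words[w] for w in source.split())
    return [f"{a} {b}", f"{b} {a}"]

BIGRAM_COUNTS = {("big", "dog"): 42, ("dog", "big"): 3}   # toy statistics

def fluency(candidate):
    toks = candidate.split()
    return sum(BIGRAM_COUNTS.get(pair, 0) for pair in zip(toks, toks[1:]))

def hybrid_translate(source):
    # Statistics adjudicate among the rule engine's outputs.
    return max(rule_engine(source), key=fluency)

print(hybrid_translate("perro grande"))
```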
More recently, with the advent of neural MT, a new version of hybrid machine translation is emerging that combines the benefits of rules, statistical and neural machine translation. The approach benefits from pre- and post-processing in a rule-guided workflow as well as from NMT and SMT. The downside is the inherent complexity, which makes the approach suitable only for specific use cases.
Neural MT
A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years, and Google has announced its translation services are now using this technology in preference over its previous statistical methods. A Microsoft team claimed to have reached human parity on WMT-2017 ("EMNLP 2017 Second Conference On Machine Translation") in 2018, marking a historical milestone.
However, many researchers have criticized this claim, rerunning and discussing their experiments; the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test suites, i.e., it lacks statistical significance power. There is still a long journey before NMT reaches real human parity performance.
To address the idiomatic phrase translation, multi-word expressions, and low-frequency words (also called OOV, or out-of-vocabulary word translation), language-focused linguistic features have been explored in state-of-the-art neural machine translation (NMT) models. For instance, the Chinese character decompositions into radicals and strokes have proven to be helpful for translating multi-word expressions in NMT.
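NMT systems generate the target sentence one token at a time, each step conditioned on the source and the tokens produced so far. The greedy decoding loop below illustrates that process with the trained network replaced by a hand-filled table of conditional probabilities; every number is invented.

```python
# Greedy decoding loop as used in neural MT, with the network replaced
# by a toy table of P(next token | source, previous token). A real NMT
# model computes this distribution with an encoder-decoder network.

COND_PROBS = {
    # (source, previous target token) -> {candidate next token: probability}
    ("le chat", "<s>"): {"the": 0.9, "a": 0.1},
    ("le chat", "the"): {"cat": 0.8, "dog": 0.2},
    ("le chat", "cat"): {"</s>": 0.95, "cat": 0.05},
}

def greedy_decode(source, max_len=10):
    prev, output = "<s>", []
    for _ in range(max_len):
        dist = COND_PROBS.get((source, prev), {"</s>": 1.0})
        prev = max(dist, key=dist.get)   # pick the most probable next token
        if prev == "</s>":               # end-of-sentence token stops decoding
            break
        output.append(prev)
    return " ".join(output)

print(greedy_decode("le chat"))
```

Greedy search is the simplest decoder; production systems typically use beam search over the same conditional distributions.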
Major issues
Disambiguation
Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel. He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word. Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches.
Shallow approaches assume no knowledge of the text. They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful.
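A shallow approach can be sketched as counting overlap between the words surrounding the ambiguous word and hand-made indicator sets for each sense (a simplified Lesk-style heuristic). The indicator lists and French glosses below are invented for illustration.

```python
# Shallow word-sense disambiguation sketch: choose the translation of
# an ambiguous word by counting context-word overlap with per-sense
# indicator sets. The sets and glosses are invented examples.

SENSES = {
    "bank": {
        "banque": {"money", "account", "loan", "deposit"},  # financial sense
        "rive":   {"river", "water", "fishing", "shore"},   # riverside sense
    }
}

def disambiguate(word, sentence):
    context = set(sentence.lower().split()) - {word}
    # Pick the sense whose indicator set shares the most words with the context.
    best_sense, _ = max(SENSES[word].items(),
                        key=lambda kv: len(kv[1] & context))
    return best_sense

print(disambiguate("bank", "he opened an account at the bank"))
print(disambiguate("bank", "we went fishing on the river bank"))
```

No knowledge of the text's meaning is used: the method looks only at surrounding words, which is exactly why such approaches are called "shallow".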
Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved:
The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human.
Non-standard speech
One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistical based MT takes input from various sources in standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices.
Named entities
In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. It also refers to expressions of time, space and quantity such as 1 July 2011, $500.
In the sentence "Smith is the president of Fabrionix" both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. Vice President.
Such usages are treated as rigid designators for the purposes of analysis in statistical machine translation.
Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability. They may be omitted from the output translation, which would also have implications for the text's readability and message.
Transliteration involves finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation. For "Southern California" the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treat them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process.
Use of a "do-not-translate" list, which has the same end goal – transliteration as opposed to translation. still relies on correct identification of named entities.
A third approach is a class-based model. Named entities are replaced with a token to represent their "class"; "Ted" and "Erica" would both be replaced with "person" class token. Then the statistical distribution and use of person names, in general, can be analyzed instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the examples that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language due to the different number of occurrences for each name in the training data. A frustrating outcome of the same study by Stanford (and other attempts to improve named recognition translation) is that many times, a decrease in the BLEU scores for translation will result from the inclusion of methods for named entity translation.
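The substitution step of such a class-based model can be sketched in a few lines. The sketch below is illustrative only, not any particular system's implementation; the name list and the "<person>" token are invented for the example.

```python
# Illustrative sketch of class-based named-entity substitution.
# The name list and the "<person>" token are hypothetical.
KNOWN_PERSONS = {"Ted", "Erica", "Ankit", "David"}

def to_class_tokens(tokens):
    """Replace known person names with a shared class token so the
    translation model sees the same sequence regardless of the name."""
    replaced = []   # tokens with names masked
    entities = []   # original names, to restore after translation
    for tok in tokens:
        if tok in KNOWN_PERSONS:
            replaced.append("<person>")
            entities.append(tok)
        else:
            replaced.append(tok)
    return replaced, entities

def restore_entities(tokens, entities):
    """Put the original names back in place of the class tokens."""
    entities = iter(entities)
    return [next(entities) if tok == "<person>" else tok for tok in tokens]

masked, names = to_class_tokens("Ankit is going for a walk".split())
assert masked == ["<person>", "is", "going", "for", "a", "walk"]
assert restore_entities(masked, names) == ["Ankit", "is", "going", "for", "a", "walk"]
```

With this masking, "David is going for a walk" and "Ankit is going for a walk" receive the same model score, which is the point of the class-based approach.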
Somewhat related are the phrases "drinking tea with milk" vs. "drinking tea with Molly."
Translation from multiparallel sources
Some work has been done in the utilization of multiparallel corpora, that is, bodies of text that have been translated into three or more languages. Using these methods, a text that has been translated into two or more languages may be utilized in combination to provide a more accurate translation into a third language than if just one of those source languages were used alone.
Ontologies in MT
An ontology is a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them. If the stored information is of linguistic nature, one can speak of a lexicon.
In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, systems can be enabled to resolve many (especially lexical) ambiguities on their own.
In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons:
I saw a man/star/molecule with a microscope/telescope/binoculars.
A machine translation system initially would not be able to differentiate between the meanings because syntax does not change. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced.
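A toy version of this knowledge-based disambiguation can be sketched as a lookup. Everything below (the concepts, the "size"/"views" features, and the attachment heuristic) is invented for illustration; a real ontology encodes far richer relations.

```python
# Toy ontology for the classic "I saw a man/star/molecule with a
# microscope/telescope/binoculars" example. All entries are invented.
ONTOLOGY = {
    "man":        {"classes": {"physical-object", "human"}, "size": "medium"},
    "star":       {"classes": {"physical-object", "celestial-body"}, "size": "huge"},
    "molecule":   {"classes": {"physical-object"}, "size": "tiny"},
    "microscope": {"classes": {"instrument"}, "views": "tiny"},
    "telescope":  {"classes": {"instrument"}, "views": "huge"},
    "binoculars": {"classes": {"instrument"}, "views": "medium"},
}

def attach_pp(obj, instrument):
    """Decide whether 'with <instrument>' modifies the verb (seeing was
    done with it) or the noun (the object carries it). Heuristic: if the
    instrument's viewing scale matches the object's size, treat the
    prepositional phrase as instrumental."""
    inst = ONTOLOGY[instrument]
    if "instrument" in inst["classes"] and inst.get("views") == ONTOLOGY[obj]["size"]:
        return "verb-attachment"    # the instrument was used for seeing
    return "noun-attachment"        # the object has the instrument

assert attach_pp("star", "telescope") == "verb-attachment"
assert attach_pp("man", "telescope") == "noun-attachment"
```

The syntax of all the example sentences is identical; only the world knowledge in the lookup separates the readings.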
Other areas of usage for ontologies within NLP include information retrieval, information extraction and text summarization.
Building ontologies
The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology for NLP purposes can be compiled:
A large-scale ontology is necessary to help parsing in the active modules of the machine translation system.
In the PANGLOSS example, about 50,000 nodes were intended to be subsumed under the smaller, manually-built upper (abstract) region of the ontology. Because of its size, it had to be created automatically.
The goal was to merge the two resources LDOCE online and WordNet to combine the benefits of both: concise definitions from Longman, and semantic relations allowing for semi-automatic taxonomization to the ontology from WordNet.
A definition match algorithm was created to automatically merge the correct meanings of ambiguous words between the two online resources, based on the words that the definitions of those meanings have in common in LDOCE and WordNet. Using a similarity matrix, the algorithm delivered matches between meanings including a confidence factor. This algorithm alone, however, did not match all meanings correctly on its own.
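The core of such a definition match can be sketched as gloss-overlap scoring. The glosses below are invented stand-ins; the actual PANGLOSS algorithm and the LDOCE/WordNet resources are far richer, and the Jaccard score here is only one simple choice of similarity.

```python
# Sketch of a definition-match step in the spirit of the PANGLOSS merge:
# score candidate sense pairs by word overlap between their glosses.
# Glosses are invented for illustration.
def overlap_score(gloss_a, gloss_b):
    """Jaccard similarity of the word sets of two definitions."""
    a, b = set(gloss_a.lower().split()), set(gloss_b.lower().split())
    return len(a & b) / len(a | b)

ldoce_senses = {
    "seal#1": "a large sea animal that eats fish",
    "seal#2": "a mark put on a document to show it is official",
}
wordnet_senses = {
    "seal%animal": "aquatic sea animal with flippers that eats fish",
    "seal%stamp": "a device incised to make an official mark on a document",
}

# Build a similarity matrix and keep, for each LDOCE sense, the best
# WordNet sense together with its score as a rough confidence factor.
matches = {}
for lid, lgloss in ldoce_senses.items():
    best = max(wordnet_senses.items(), key=lambda kv: overlap_score(lgloss, kv[1]))
    matches[lid] = (best[0], round(overlap_score(lgloss, best[1]), 2))

assert matches["seal#1"][0] == "seal%animal"
assert matches["seal#2"][0] == "seal%stamp"
```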
A second hierarchy match algorithm was therefore created which uses the taxonomic hierarchies found in WordNet (deep hierarchies) and partially in LDOCE (flat hierarchies). This works by first matching unambiguous meanings, then limiting the search space to only the respective ancestors and descendants of those matched meanings. Thus, the algorithm matched locally unambiguous meanings (for instance, while the word seal as such is ambiguous, there is only one meaning of seal in the animal subhierarchy).
Both algorithms complemented each other and helped construct a large-scale ontology for the machine translation system. The WordNet hierarchies, coupled with the matching definitions of LDOCE, were subordinated to the ontology's upper region. As a result, the PANGLOSS MT system was able to make use of this knowledge base, mainly in its generation element.
Applications
While no system provides the holy grail of fully automatic high-quality machine translation of unrestricted text, many fully automated systems produce reasonable output. The quality of machine translation is substantially improved if the domain is restricted and controlled.
Despite their inherent limitations, MT programs are used around the world. Probably the largest institutional user is the European Commission. One project, for example, coordinated by the University of Gothenburg, received more than 2.375 million euros in project support from the EU to create a reliable translation tool covering a majority of the EU languages. The further development of MT systems comes at a time when budget cuts in human translation may increase the EU's dependency on reliable MT programs. The European Commission contributed 3.072 million euros (via its ISA programme) for the creation of MT@EC, a statistical machine translation program tailored to the administrative needs of the EU, to replace a previous rule-based machine translation system.
In 2005, Google claimed that promising results were obtained using a proprietary statistical machine translation engine. In tests conducted by the National Institute of Standards and Technology (Summer 2006), the statistical translation engine used in the Google language tools for Arabic–English and Chinese–English had an overall BLEU-4 score of 0.4281, ahead of runner-up IBM's 0.3954.
With the recent focus on terrorism, military sources in the United States have been investing significant amounts of money in natural language engineering. In-Q-Tel (a venture capital fund, largely funded by the US Intelligence Community, to stimulate new technologies through private-sector entrepreneurs) backed companies such as Language Weaver. Currently the military community is interested in the translation and processing of languages such as Arabic, Pashto, and Dari. Within these languages, the focus is on key phrases and quick communication between military members and civilians through the use of mobile phone apps. The Information Processing Technology Office in DARPA hosts programs such as TIDES and the Babylon translator. The US Air Force has awarded a $1 million contract to develop a language translation technology.
The notable rise of social networking on the web in recent years has created yet another niche for the application of machine translation software – in utilities such as Facebook, or instant messaging clients such as Skype, GoogleTalk, MSN Messenger, etc. – allowing users speaking different languages to communicate with each other. Machine translation applications have also been released for most mobile devices, including mobile telephones, pocket PCs, PDAs, etc. Due to their portability, such instruments have come to be designated as mobile translation tools enabling mobile business networking between partners speaking different languages, or facilitating both foreign language learning and unaccompanied traveling to foreign countries without the need of the intermediation of a human translator.
Despite being labelled an unworthy competitor to human translation in 1966 by the Automated Language Processing Advisory Committee, put together by the United States government, the quality of machine translation has now improved to such levels that its applications in online collaboration and in the medical field are being investigated. The application of this technology in medical settings where human translators are absent is another topic of research, but difficulties arise due to the importance of accurate translations in medical diagnoses.
Evaluation
There are many factors that affect how machine translation systems are evaluated. These factors include the intended use of the translation, the nature of the machine translation software, and the nature of the translation process.
Different programs may work well for different purposes. For example, statistical machine translation (SMT) typically outperforms example-based machine translation (EBMT), but researchers found that when evaluating English to French translation, EBMT performs better. The same concept applies for technical documents, which can be more easily translated by SMT because of their formal language.
In certain applications, however, e.g., product descriptions written in a controlled language, a dictionary-based machine-translation system has produced satisfactory translations that require no human intervention save for quality inspection.
There are various means for evaluating the output quality of machine translation systems. The oldest is the use of human judges to assess a translation's quality. Even though human evaluation is time-consuming, it is still the most reliable method to compare different systems such as rule-based and statistical systems. Automated means of evaluation include BLEU, NIST, METEOR, and LEPOR.
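The flavor of metrics like BLEU can be conveyed with a short sketch. The function below is a simplified sentence-level variant (modified n-gram precision up to bigrams, plus a brevity penalty); real BLEU is computed over a whole corpus, with up to 4-grams and possibly multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions up to max_n, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # "Modified" precision: clip each n-gram count by its reference count.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, len(cand) - n + 1)
        log_prec_sum += math.log(max(clipped, 1e-9) / total)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return bp * math.exp(log_prec_sum / max_n)

assert simple_bleu("the cat is on the mat", "the cat is on the mat") > \
       simple_bleu("mat the on cat", "the cat is on the mat")
```

A perfect match scores 1.0; scrambled word order is punished by the bigram precision even when every word is present.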
Relying exclusively on unedited machine translation ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error. Therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human. The late Claude Piron wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software such that the output will not be meaningless.
In addition to disambiguation problems, decreased accuracy can occur due to varying levels of training data for machine translating programs. Both example-based and statistical machine translation rely on a vast array of real example sentences as a base for translation, and when too many or too few sentences are analyzed accuracy is jeopardized. Researchers found that when a program is trained on 203,529 sentence pairings, accuracy actually decreases. The optimal level of training data seems to be just over 100,000 sentences, possibly because as training data increases, the number of possible sentences increases, making it harder to find an exact translation match.
Using machine translation as a teaching tool
Although there have been concerns about machine translation's accuracy, Dr. Ana Nino of the University of Manchester has researched some of the advantages of utilizing machine translation in the classroom. One such pedagogical method is called "MT as a Bad Model." MT as a Bad Model forces the language learner to identify inconsistencies or incorrect aspects of a translation; in turn, the individual will (hopefully) gain a better grasp of the language. Dr. Nino cites that this teaching tool was implemented in the late 1980s. At the end of various semesters, Dr. Nino was able to obtain survey results from students who had used MT as a Bad Model (as well as other models). Overwhelmingly, students felt that they had observed improved comprehension, lexical retrieval, and increased confidence in their target language.
Machine translation and signed languages
In the early 2000s, options for machine translation between spoken and signed languages were severely limited. It was a common belief that deaf individuals could use traditional translators. However, stress, intonation, pitch, and timing are conveyed much differently in spoken languages compared to signed languages. Therefore, a deaf individual may misinterpret or become confused about the meaning of written text that is based on a spoken language.
Researchers Zhao et al. (2000) developed a prototype called TEAM (translation from English to ASL by machine) that completed English to American Sign Language (ASL) translations. The program would first analyze the syntactic, grammatical, and morphological aspects of the English text. Following this step, the program accessed a sign synthesizer, which acted as a dictionary for ASL. This synthesizer housed the process one must follow to complete ASL signs, as well as the meanings of these signs. Once the entire text was analyzed and the signs necessary to complete the translation were located in the synthesizer, a computer-generated human appeared and used ASL to sign the English text to the user.
Copyright
Only works that are original are subject to copyright protection, so some scholars claim that machine translation results are not entitled to copyright protection because MT does not involve creativity. The copyright at issue is for a derivative work; the author of the original work in the original language does not lose his rights when a work is translated: a translator must have permission to publish a translation.
See also
AI-complete
Cache language model
Comparison of machine translation applications
Comparison of different machine translation approaches
Computational linguistics
Computer-assisted translation and Translation memory
Controlled language in machine translation
Controlled natural language
Foreign language writing aid
Fuzzy matching
History of machine translation
Human language technology
Humour in translation ("howlers")
Language and Communication Technologies
Language barrier
List of emerging technologies
List of research laboratories for machine translation
Mobile translation
Neural machine translation
OpenLogos
Phraselator
Postediting
Pseudo-translation
Round-trip translation
Statistical machine translation
Translation memory
ULTRA (machine translation system)
Universal Networking Language
Universal translator
Notes
Further reading
Lewis-Kraus, Gideon, "Tower of Babble", New York Times Magazine, 7 June 2015, pp. 48–52.
Weber, Steven and Nikita Mehandru. 2021. "The 2020s political economy of machine translation." Business and Politics.
External links
The Advantages and Disadvantages of Machine Translation
International Association for Machine Translation (IAMT)
Machine Translation Archive by John Hutchins. An electronic repository (and bibliography) of articles, books and papers in the field of machine translation and computer-based translation technology
Machine translation (computer-based translation) – Publications by John Hutchins (includes PDFs of several books on machine translation)
Machine Translation and Minority Languages
John Hutchins 1999
Artificial intelligence applications
Computational linguistics
Computer-assisted translation
Tasks of natural language processing
Central moment

In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments are used in preference to ordinary moments, computed in terms of deviations from the mean instead of from zero, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location.
Sets of central moments can be defined for both univariate and multivariate distributions.
Univariate moments
The nth moment about the mean (or nth central moment) of a real-valued random variable X is the quantity μn := E[(X − E[X])^n], where E is the expectation operator. For a continuous univariate probability distribution with probability density function f(x), the nth moment about the mean μ is

μn = E[(X − μ)^n] = ∫ (x − μ)^n f(x) dx.
For random variables that have no mean, such as the Cauchy distribution, central moments are not defined.
The first few central moments have intuitive interpretations:
The "zeroth" central moment μ0 is 1.
The first central moment μ1 is 0 (not to be confused with the first raw moment or the expected value μ).
The second central moment μ2 is called the variance, and is usually denoted σ2, where σ represents the standard deviation.
The third and fourth central moments are used to define the standardized moments which are used to define skewness and kurtosis, respectively.
Properties
The nth central moment is translation-invariant, i.e. for any random variable X and any constant c, we have

μn(X + c) = μn(X).
For all n, the nth central moment is homogeneous of degree n:

μn(cX) = c^n μn(X).
Only for n such that n equals 1, 2, or 3 do we have an additivity property for random variables X and Y that are independent:

μn(X + Y) = μn(X) + μn(Y), provided n ∈ {1, 2, 3}.
A related functional that shares the translation-invariance and homogeneity properties with the nth central moment, but continues to have this additivity property even when n ≥ 4 is the nth cumulant κn(X). For n = 1, the nth cumulant is just the expected value; for n = either 2 or 3, the nth cumulant is just the nth central moment; for n ≥ 4, the nth cumulant is an nth-degree monic polynomial in the first n moments (about zero), and is also a (simpler) nth-degree polynomial in the first n central moments.
Relation to moments about the origin
Sometimes it is convenient to convert moments about the origin to moments about the mean. The general equation for converting the nth-order moment about the origin to the moment about the mean is

μn = E[(X − μ)^n] = Σ (j = 0 to n) C(n, j) (−μ)^(n−j) μ'j,

where μ is the mean of the distribution, C(n, j) is the binomial coefficient, and the moment about the origin is given by

μ'j = E[X^j] = ∫ x^j f(x) dx.
For the cases n = 2, 3, 4 — which are of most interest because of the relations to variance, skewness, and kurtosis, respectively — this formula becomes (noting that μ = μ'1 and μ'0 = 1):

μ2 = μ'2 − μ^2,

which is commonly referred to as Var(X) = E[X^2] − (E[X])^2,

μ3 = μ'3 − 3μ μ'2 + 2μ^3,

μ4 = μ'4 − 4μ μ'3 + 6μ^2 μ'2 − 3μ^4,
... and so on, following Pascal's triangle, i.e.

μ5 = μ'5 − 5μ μ'4 + 10μ^2 μ'3 − 10μ^3 μ'2 + 4μ^5,

because the last two terms of the binomial expansion, 5μ^4 μ'1 − μ^5, combine to 4μ^5 since μ'1 = μ.
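As a numerical check (the distribution below is invented purely for illustration), the binomial conversion between raw and central moments can be verified directly on a small discrete distribution:

```python
from math import comb

# A small discrete distribution, made up for illustration.
values = [0, 1, 2, 4]
probs  = [0.1, 0.4, 0.3, 0.2]

def raw_moment(n):
    """Moment about the origin, mu'_n = E[X^n]."""
    return sum(p * x**n for x, p in zip(values, probs))

mean = raw_moment(1)

def central_moment(n):
    """Moment about the mean, mu_n = E[(X - mu)^n], computed directly."""
    return sum(p * (x - mean)**n for x, p in zip(values, probs))

def central_from_raw(n):
    """mu_n = sum_{j=0}^{n} C(n, j) (-mu)^(n-j) mu'_j."""
    return sum(comb(n, j) * (-mean)**(n - j) * raw_moment(j) for j in range(n + 1))

# The two computations agree for every order checked.
for n in range(5):
    assert abs(central_moment(n) - central_from_raw(n)) < 1e-9
# In particular, the variance identity mu_2 = mu'_2 - mu^2 holds:
assert abs(central_moment(2) - (raw_moment(2) - mean**2)) < 1e-9
```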
The following sum is a stochastic variable having a compound distribution

W = Y1 + Y2 + ... + YN,

where the Yi are mutually independent random variables sharing the same common distribution and N is a random integer variable independent of the Yi with its own distribution. The moments of W are obtained as
where is defined as zero for .
Symmetric distributions
In a symmetric distribution (one that is unaffected by being reflected about its mean), all odd central moments equal zero, because in the formula for the nth moment, each term involving a value of X less than the mean by a certain amount exactly cancels out the term involving a value of X greater than the mean by the same amount.
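This cancellation can be checked numerically; the four-point uniform distribution below is chosen arbitrarily for illustration:

```python
# Uniform distribution on points symmetric about 0: for odd n the terms
# for x and -x cancel exactly, so every odd central moment is zero.
values = [-2.0, -1.0, 1.0, 2.0]
prob = 1.0 / len(values)

mean = sum(prob * x for x in values)          # 0 by symmetry

def central_moment(n):
    return sum(prob * (x - mean)**n for x in values)

for n in (1, 3, 5, 7):
    assert abs(central_moment(n)) < 1e-12     # odd moments vanish
assert central_moment(2) > 0                  # even moments need not vanish
```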
Multivariate moments
For a continuous bivariate probability distribution with probability density function f(x, y), the (j, k) moment about the mean μ = (μX, μY) is

μj,k = E[(X − μX)^j (Y − μY)^k] = ∫∫ (x − μX)^j (y − μY)^k f(x, y) dx dy.
Central moment of complex random variables
The nth central moment for a complex random variable X is defined as

αn = E[(X − E[X])^n].

The absolute nth central moment of X is defined as

βn = E[|X − E[X]|^n].
The 2nd-order central moment β2 is called the variance of X whereas the 2nd-order central moment α2 is the pseudo-variance of X.
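The distinction between the variance β2 and the pseudo-variance α2 can be illustrated numerically; the sample values below are invented for the example:

```python
# Equiprobable sample of complex values, invented for illustration.
samples = [1 + 1j, -1 + 1j, 1 - 1j, 2 + 0j]
p = 1.0 / len(samples)

mean = sum(p * z for z in samples)

# Pseudo-variance alpha_2 = E[(X - E[X])^2]: complex in general.
alpha2 = sum(p * (z - mean) ** 2 for z in samples)
# Variance beta_2 = E[|X - E[X]|^2]: always real and non-negative.
beta2 = sum(p * abs(z - mean) ** 2 for z in samples)

assert beta2 > 0            # the variance is a positive real number here
assert alpha2 != beta2      # the pseudo-variance differs in general
```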
See also
Standardized moment
Image moment
Complex random variable
References
Statistical deviation and dispersion
Moment (mathematics)
Murad I

Murad I (nicknamed Hüdavendigâr, meaning "sovereign" in this context; 29 June 1326 – 15 June 1389) was the Ottoman sultan from 1362 to 1389. He was a son of Orhan Gazi and Nilüfer Hatun. Murad I came to the throne after his elder brother Süleyman Pasha's death.
Murad I conquered Adrianople, renamed it Edirne, and in 1363 made it the new capital of the Ottoman Sultanate. He then further expanded the Ottoman realm in Southern Europe by bringing most of the Balkans under Ottoman rule, and forced the princes of Serbia and Bulgaria as well as the East Roman emperor John V Palaiologos to pay him tribute. Murad I administratively divided his sultanate into the two provinces of Anatolia (Asia Minor) and Rumelia (the Balkans).
Titles
According to Ottoman sources, Murad I's titles included Bey, Emîr-i a'zam (Great Emir), Ghazi, Hüdavendigâr, Khan, Padishah, Sultânü's-selâtîn (Sultan of sultans), and Melikü'l-mülûk (Malik of maliks), while in Bulgarian and Serbian sources he was referred to as Tsar. In a Genoese document, he was referred to as dominus armiratorum Turchie (Master lord of Turks).
Personality
Murad I was illiterate and could not even sign his own name. In 1363, Murad I signed a treaty by dipping his hand in ink and impressing it with his finger marks.
Wars
Murad fought against the powerful beylik of Karaman in Anatolia and against the Serbs, Albanians, Bulgarians and Hungarians in Europe. In particular, a Serb expedition to expel the Turks from Adrianople led by the Serbian brothers King Vukašin and Despot Uglješa, was defeated on September 26, 1371, by Murad's capable second lieutenant Lala Şâhin Paşa, the first governor (beylerbey) of Rumeli. In 1385, Sofia fell to the Ottomans. In 1386 Prince Lazar Hrebeljanović defeated an Ottoman force at the Battle of Pločnik. The Ottoman army suffered heavy casualties, and was unable to capture Niš on the way back.
Battle of Kosovo
In 1389, Murad's army defeated the Serbian Army and its allies under the leadership of Lazar at the Battle of Kosovo.
There are different accounts from different sources about when and how Murad I was assassinated. The contemporary sources mainly noted that the battle took place and that both Prince Lazar and the Sultan lost their lives in it. The additional stories and speculations as to how Murad I died were disseminated and recorded in the 15th century and later, decades after the actual event. One Western source states that during the first hours of the battle, Murad I was stabbed to death by the Serbian nobleman and knight Miloš Obilić. Most Ottoman chroniclers (including Dimitrie Cantemir) state that he was assassinated after the battle had finished, while going around the battlefield. His older son Bayezid, who was in charge of the left wing of the Ottoman forces, took charge after that. His other son, Yakub Bey, who was in charge of the other wing, was called to the Sultan's command tent by Bayezid, but when Yakub Bey arrived he was strangled, leaving Bayezid as the sole claimant to the throne.
In a letter from the Florentine senate (written by Coluccio Salutati) to King Tvrtko I of Bosnia, dated 20 October 1389, the killing of Murad I (and Yakub Bey) was described. A party of twelve Serbian lords slashed their way through the Ottoman lines defending Murad I. One of them, allegedly Miloš Obilić, managed to get through to the Sultan's tent and kill him with sword stabs to the throat and belly.
Sultan Murad's internal organs were buried in the Kosovo field and remain to this day in a corner of the battlefield, in a location called Meshed-i Hudavendigar, which has gained religious significance among local Muslims. It was vandalized between 1999 and 2006 and has recently been renovated. His other remains were carried to Bursa, his Anatolian capital city, and were buried in a tomb at the complex built in his name.
Establishment of sultanate
He established the sultanate by building up a society and government in the newly conquered city of Adrianople (Edirne in Turkish) and by expanding the realm in Europe, bringing most of the Balkans under Ottoman rule and forcing the Byzantine emperor to pay him tribute. It was Murad who transformed the former Osmanli tribe into a sultanate. He established the title of sultan in 1363, the corps of the janissaries, and the devşirme recruiting system. He also organised the government of the Divan, the system of timars and timar-holders (timariots), and the office of the military judge, the kazasker, and established the two provinces of Anadolu (Anatolia) and Rumeli (Europe).
Family
He was the son of Orhan and the Valide Hatun Nilüfer Hatun, daughter of the Lord of Yarhisar, who was of ethnic Greek descent.
Wives
Gülçiçek Hatun;
Paşa Melek Hatun, daughter of Kızıl Murad Bey;
In 1370 Thamara Hatun – daughter of Bulgarian Tsar Ivan Alexander;
Sons
Yahşi Bey;
Şehzade Savcı Bey – son. He and his ally, Byzantine emperor John V Palaeologus' son Andronicus, rebelled against their fathers. Murad had Savcı killed. Andronicus, who had surrendered to his father, was imprisoned and blinded at Murad's insistence.
Sultan Bayezid I (1354–1402) – son of Gülçiçek Hatun;
Şehzade Yakub Çelebi (? – d. 1389) – son. Bayezid I had Yakub killed during or following the Battle of Kosovo at which their father had been killed.
Şehzade Ibrahim;
Daughter
Nefise Hatun;
Further reading
Harris, Jonathan, The End of Byzantium. New Haven and London: Yale University Press, 2010.
Notes and references
Notes:
References:
External links
1326 births
1389 deaths
14th-century murdered monarchs
14th-century Ottoman sultans
Assassinated people of the Ottoman Empire
People of the Bulgarian–Ottoman wars
Characters in Serbian epic poetry
Monarchs killed in action
Filicides
Illiterate monarchs
Mehmed I

Mehmed I (1389 – 26 May 1421), also known as Mehmed Çelebi ("the noble-born") or Kirişçi ("lord's son"), was the Ottoman sultan from 1413 to 1421. The fourth son of Sultan Bayezid I and Devlet Hatun, he fought with his brothers over control of the Ottoman realm in the Ottoman Interregnum (1402–1413). Starting from the province of Rûm he managed to bring first Anatolia and then the European territories (Rumelia) under his control, reuniting the Ottoman state by 1413, and ruling it until his death in 1421. Called "the Restorer," he reestablished central authority in Anatolia and expanded the Ottoman presence in Europe by the conquest of Wallachia in 1415. Venice destroyed his fleet off Gallipoli in 1416, as the Ottomans lost a naval war.
Early life
Mehmed was born in 1386 or 1387 as the fourth son of Sultan Bayezid I () and one of his consorts, the slave girl Devlet Hatun. Following Ottoman custom, when he reached adolescence in 1399, he was sent to gain experience as provincial governor over the Rûm Eyalet (central northern Anatolia), recently conquered from its Eretnid rulers.
On 20 July 1402, his father Bayezid was defeated in the Battle of Ankara by the Turko-Mongol conqueror and ruler Timur. The brothers (with the exception of Mustafa, who was captured and taken along with Bayezid to Samarkand) were rescued from the battlefield, Mehmed being saved by Bayezid Pasha, who took him to his hometown of Amasya. Mehmed later made Bayezid Pasha his grand vizier (1413–1421).
The early Ottoman Empire had no regulated succession, and according to Turkish tradition, every son could succeed his father. Of Mehmed's brothers, the eldest, Ertuğrul, had died in 1400, while the next in line, Mustafa, was a prisoner of Timur. Leaving aside the underage siblings, this left four princes (Mehmed, Süleyman, İsa, and Musa) to contend over control of the remaining Ottoman territories in the civil war known as the "Ottoman Interregnum". In modern historiography, these princes are usually called by the title Çelebi, but in contemporary sources, the title is reserved for Mehmed and Musa. The Byzantine sources translated the title as Kyritzes (Κυριτζής), which was in turn adopted into Turkish as kirişçi, sometimes misinterpreted as güreşçi, "the wrestler".
During the early Interregnum, Mehmed Çelebi behaved as Timur's vassal. Like the other princes, Mehmed minted coins on which Timur's name appeared as "Demur han Gürgân" (تيمور خان كركان), alongside his own as "Mehmed bin Bayezid han" (محمد بن بايزيد خان). This was probably an attempt on Mehmed's part to justify to Timur his conquest of Bursa after the Battle of Ulubad. After Mehmed established himself in Rum, Timur had already begun preparations for his return to Central Asia, and took no further steps to interfere with the status quo in Anatolia.
Reign
After winning the Interregnum, Mehmed crowned himself sultan in the Thracian city of Edirne that lay in the European part of the empire (the area dividing the Anatolian and European sides of the empire, Constantinople and the surrounding region, was still held by the Byzantine Empire), becoming Mehmed I. He consolidated his power, made Edirne the most important of the dual capitals, and conquered parts of Albania, the Jandarid emirate, and the Armenian Kingdom of Cilicia from the Mamelukes. Taking his many achievements into consideration, Mehmed is widely known as the "second founder" of the Ottoman Sultanate.
Soon after Mehmed began his reign, his brother Mustafa Çelebi, who had originally been captured along with their father Bayezid I during the Battle of Ankara and held captive in Samarkand, hiding in Anatolia during the Interregnum, reemerged and asked Mehmed to partition the empire with him. Mehmed refused and met Mustafa's forces in battle, easily defeating them. Mustafa escaped to the Byzantine city of Thessaloniki, but after an agreement with Mehmed, the Byzantine emperor Manuel II Palaiologos exiled Mustafa to the island of Lemnos.
However, Mehmed still faced some problems, the first being his nephew Orhan, whom Mehmed perceived as a threat to his rule, much as his late brothers had been. There was allegedly a plot involving him by Manuel II Palaiologos, who tried to use Orhan against Sultan Mehmed; however, the sultan found out about the plot and had Orhan blinded for betrayal, in accordance with a common Byzantine practice.
Furthermore, as a result of the Battle of Ankara and other civil wars, the population of the empire had become unstable and traumatized. A very powerful social and religious movement arose in the empire and became disruptive. The movement was led by Sheikh Bedreddin (1359–1420), a famous Muslim Sufi and charismatic theologian. He was an eminent member of the ulema, born of a Greek mother and a Muslim father in Simavna (Kyprinos), southwest of Edirne (formerly Adrianople). Mehmed's brother Musa had made Bedreddin his "qadi of the army," or supreme judge. Bedreddin created a populist religious movement in the Ottoman Sultanate, with "subversive conclusions promoting the suppression of social differences between rich and poor as well as the barriers between different forms of monotheism." Successfully developing a popular social revolution and a syncretism of the various religions and sects of the empire, Bedreddin's movement began on the European side of the empire and underwent further expansion in western Anatolia.
In 1416, Sheikh Bedreddin began his rebellion against the throne. After a four-year struggle, he was captured by Mehmed's grand vizier Bayezid Pasha and hanged in Serres, a city in modern-day Greece, in 1420.
Death
The reign of Mehmed I as sultan of the reunited empire lasted only eight years before his death. He had, however, been the most powerful of the brothers contending for the throne, and de facto ruler of most of the empire, for nearly the entire eleven years of the Ottoman Interregnum between his father's captivity at Ankara and his own final victory over his brother Musa Çelebi at the Battle of Çamurlu.
Before his death, to secure the safe passage of the throne to his son Murad II, Mehmed blinded his nephew Orhan Çelebi (son of Süleyman) and arranged for his two sons Yusuf and Mahmud to be held as hostages by Emperor Manuel II, hoping thereby to ensure the continued custody of his brother Mustafa.
He was buried in Bursa, in a mausoleum he had erected near the celebrated mosque he built there, which, because of its decorations of green glazed tiles, is called the Green Mosque. Mehmed I also completed another mosque in Bursa, which his grandfather Murad I had begun but which had been neglected during the reign of Bayezid. In the vicinity of his own Green Mosque and mausoleum, Mehmed founded two other characteristic institutions, a school and a refectory for the poor, both of which he endowed with royal munificence.
Wives and children
Wives
Şehzade Hatun, daughter of Dividdar Ahmed Pasha, third ruler of Kutluşah of Canik;
Emine Hatun (m. 1403), daughter of Şaban Süli Bey, fifth ruler of Dulkadirids;
Kumru Hatun, mother of Selçuk Hatun;
Sons
Sultan Murad II, son of Emine Hatun;
Şehzade Küçük Mustafa Çelebi (1408 – killed October 1423);
Şehzade Mahmud Çelebi (1413 – August 1429, buried in Mehmed I Mausoleum, Bursa);
Şehzade Yusuf Çelebi (1414 – August 1429, buried in Mehmed I Mausoleum, Bursa);
Şehzade Ahmed Çelebi (died in infancy);
Daughters
Selçuk Hatun (died 25 October 1485, buried in Mehmed I Mausoleum, Bursa), married Prince Damat Taceddin Ibrahim II Bey, ruler of Isfendiyarids (1392 – 30 May 1443), son of Prince İsfendiyar Bey, ruler of Isfendiyarids;
Sultan Hatun (died 1444), married Prince Damat Kasim Bey (died 1464), son of Prince Isfendiar Bey, ruler of Isfendiyarids;
Hatice Hatun, married to Damat Karaca Paşa (died 10 November 1444);
Hafsa Hatun (buried in Mehmed I Mausoleum, Bursa), married Damat Mahmud Bey (died January 1444), son of Çandarlı Halil Pasha;
İlaldi Hatun, married Prince Damat Ibrahim II Bey, ruler of Karamanids (died 16 July 1464), son of Prince Mehmed II Bey;
A daughter, married to Prince Damat Isa Bey (died 1437), son of Prince Damat Mehmed II Bey;
Ayşe Hatun (buried in Mehmed I Mausoleum, Bursa);
Sitti Hatun (buried in Mehmed I Mausoleum, Bursa);
A daughter, married to Prince Damat Alaattin Ali Bey, ruler of Karamanids, son of Prince Halil Bey;
References
Sources
Further reading
Harris, Jonathan, The End of Byzantium. New Haven and London: Yale University Press, 2010.
External links
15th-century Ottoman sultans
People of the Ottoman Interregnum
1421 deaths
1389 births
15th-century people of the Ottoman Empire
Murad II

Murad II (16 June 1404 – 3 February 1451) was the sultan of the Ottoman Empire from 1421 to 1444 and again from 1446 to 1451.
Murad II's reign was a period of important economic development. Trade increased and Ottoman cities expanded considerably. In 1432, the traveller Bertrandon de la Broquière noted that Ottoman annual revenue had risen to 2,500,000 ducats, and that if Murad II had used all available resources he could easily have invaded Europe.
Early life
Murad was born in June 1404 (or 1403) to Sultan Mehmed I. The identity of his mother is disputed. According to the 15th-century historian Şükrullah, Murad's mother was a concubine. Hüseyin Hüsâmeddin Yasar, an early 20th-century historian, wrote in his work Amasya Tarihi that his mother was Şehzade Hatun, daughter of Divitdar Ahmed Pasha. According to the historians İsmail Hami Danişmend and Heath W. Lowry, his mother was Emine Hatun, daughter of Şaban Suli Bey, ruler of the Dulkadirids.
He spent his early childhood in Amasya. In 1410, Murad accompanied his father to the Ottoman capital, Edirne. After his father ascended the Ottoman throne, he made Murad governor of the Amasya Sanjak. Murad remained at Amasya until the death of Mehmed I in 1421. He was solemnly recognized as sultan of the Ottoman Sultanate at sixteen years of age, girded with the Sword of Osman at Bursa, and the troops and officers of the state willingly paid homage to him as their sovereign.
Reign
Accession and first reign
Murad's reign was troubled by insurrection early on. The Byzantine emperor, Manuel II, released the "pretender" Mustafa Çelebi (known as Düzmece Mustafa) from confinement and acknowledged him as the legitimate heir to the throne of Bayezid I (1389–1402). The Byzantine emperor had first secured a stipulation that Mustafa should, if successful, repay him for his liberation by ceding a large number of important cities. The pretender was landed by Byzantine galleys in the European dominions of the sultan and for a time made rapid progress. Many Turkish soldiers joined him, and he defeated and killed the veteran general Beyazid Pasha, whom Murad had sent against him. Mustafa defeated Murad's army and declared himself sultan at Adrianople (modern Edirne). He then crossed the Dardanelles into Asia with a large army, but Murad outmanoeuvred him, and Mustafa's forces passed over in large numbers to Murad II. Mustafa took refuge in the city of Gallipoli, but the sultan, greatly aided by a Genoese commander named Adorno, besieged and stormed the place. Mustafa was taken and put to death by the sultan, who then turned his arms against the Byzantine emperor and declared his resolution to punish the Palaiologoi for their unprovoked enmity by capturing Constantinople.
Murad II then formed a new army called Azap in 1421 and marched through the Byzantine Empire and laid siege to Constantinople. While Murad was besieging the city, the Byzantines, in league with some independent Turkish Anatolian states, sent the sultan's younger brother Küçük Mustafa (who was only 13 years old) to rebel against the sultan and besiege Bursa. Murad had to abandon the siege of Constantinople in order to deal with his rebellious brother. He caught Prince Mustafa and executed him. The Anatolian states that had been constantly plotting against him — Aydinids, Germiyanids, Menteshe and Teke — were annexed and henceforth became part of the Ottoman Sultanate.
Murad II then declared war against Venice, the Karamanid Emirate, Serbia and Hungary. The Karamanids were defeated in 1428 and Venice withdrew in 1432 following the defeat at the second Siege of Thessalonica in 1430. In the 1430s Murad captured vast territories in the Balkans and succeeded in annexing Serbia in 1439. In 1441 the Holy Roman Empire and Poland joined the Serbian-Hungarian coalition. Murad II won the Battle of Varna in 1444 against John Hunyadi.
Abdication and second reign
Murad II relinquished his throne in 1444 to his son Mehmed II, but a Janissary revolt in the Empire forced him to return.
In 1448 he defeated the Christian coalition at the Second Battle of Kosovo (the first had taken place in 1389). When the Balkan front was secured, Murad II turned east to defeat Timur's son, Shah Rokh, and the Karamanid and Çorum-Amasya emirates. In 1450 Murad II led his army into Albania and unsuccessfully besieged the Castle of Kruje in an effort to defeat the resistance led by Skanderbeg. In the winter of 1450–1451, Murad II fell ill and died in Edirne. He was succeeded by his son Mehmed II (1451–1481).
As Ghazi Sultan
When Murad ascended the throne, he sought to regain the lost Ottoman territories that had reverted to autonomy after his grandfather Bayezid I's defeat at the Battle of Ankara in 1402 at the hands of Timur. He needed the support of both the public and the nobles "who would enable him to exercise his rule", and he utilized the old and potent Islamic trope of the Ghazi King.
In order to gain popular, international support for his conquests, Murad II modeled himself after the legendary ghazi kings of old. The Ottomans already presented themselves as ghazis, painting their origins as rising from the ghazas of Osman, the founder of the dynasty. For them, ghaza was the noble championing of Islam and justice against non-Muslims, and against Muslims too if they were cruel; for example, Bayezid I labeled Timur Lang, also a Muslim, an apostate prior to the Battle of Ankara because of the violence his troops had committed upon innocent civilians and because "all you do is to break promises and vows, shed blood, and violate the honor of women." Murad II had only to capitalize on this dynastic inheritance of doing ghaza, which he did by actively crafting the public image of a Ghazi Sultan.
After his accession, there was a flurry of translating and compiling activity where old Persian, Arab, and Anatolian epics were translated into Turkish so Murad II could uncover the ghazi king legends. He drew from the noble behavior of the nameless Caliphs in the Battalname, an epic about a fictional Arab warrior who fought against the Byzantines, and modelled his actions on theirs. He was careful to embody the simplicity, piety, and noble sense of justice that was part of the Ghazi King persona.
For example, the Caliph in the Battalname, seeing a battle turning in his enemy's favor, got down from his horse and prayed, after which the battle ended in victory for him. At the Battle of Varna in 1444, Murad II, seeing the Hungarians gaining the upper hand, got down from his horse and prayed just like the Caliph; soon after, the tide turned in the Ottomans' favor and the Hungarian king Wladyslaw was killed. Similarly, the Caliph in the epic roused his warriors by saying "Those of you who die will be martyrs. Those of you who kill will be ghazis"; before the Battle of Varna, Murad II repeated these words to his army, saying "Those of us who kill will be ghazis; those of us who die will be martyrs." In another instance, since the Ghazi King is meant to be just and fair, when Murad took Thessalonica in the Balkans, he took care to keep his troops in check and prevented widespread looting. Finally, just as the fictional Caliph's ghazas were immortalized in the Battalname, Murad II's battles and victories were also compiled and given the title "The Ghazas of Sultan Murad" (Gazavat-i Sultan Murad).
Murad II successfully painted himself as a simple soldier who did not partake in royal excesses, and as a noble ghazi sultan who sought to consolidate Muslim power against non-Muslims such as the Venetians and Hungarians. Through this self-presentation he won support for himself and his extensive, expensive campaigns not only from the Muslim population of the Ottoman territories but also from the greater Muslim populations of the Dar al-Islam, such as the Mamluks and the Muslim Delhi Sultanates of India. Murad II was, in essence, presenting himself not only as "a ghazi king who fights caffres [non-Muslims], but also serves as protector and master of lesser ghazis."
Family
Consorts
Murad II had four known wives:
Yeni Hatun, daughter of Şadgeldi Paşazade Mustafa Bey of the Kutluşah of Amasya
Sultan Hatun, daughter of İsfendiyar Bey, the ruler of Isfendiyarids;
Hüma Hatun;
Mara Hatun (m. 1435), the daughter of Đurađ Branković of Serbia;
Sons
Ahmed Çelebi (1419 – 1437, buried in Muradiye Complex, Bursa);
Alaeddin Ali Çelebi (1425 – 1443, buried in Muradiye Complex, Bursa);
Mehmed the Conqueror (1431 – 3 May 1481, buried in Fatih Mosque, Istanbul) – with Hüma Hatun;
Yusuf Adil Shah (possibly) (1450 – 1510, buried in India);
Orhan Çelebi (died 1453, buried in Darülhadis Mausoleum, Edirne);
Hasan Çelebi (1450 – 18 February 1451, buried in Darülhadis Türbesi) – with Sultan Hatun;
Daughters
Erhundu Hatun, married to Damat Yakub Bey;
Şehzade Hatun (buried in Muradiye Complex, Bursa), married to Damat Sinan Bey;
Fatma Hatun (buried in Muradiye Complex, Bursa) – with Hüma Hatun, married to Damat Mahmud Çelebi, son of Çandarlı Ibrahim Pasha;
Hatice Hatun (buried in Muradiye Complex, Bursa), married Damat Isa Bey.
Portrayals
Murad II is portrayed by İlker Kurt in 2012 film Fetih 1453, by Vahram Papazian in the Albanian movie The Great Warrior Skanderbeg in 1953, and by Tolga Tekin in the 2020 Netflix series Rise of Empires: Ottoman.
References
Attribution
Further reading
Harris, Jonathan, The End of Byzantium. New Haven and London: Yale University Press, 2010.
Imber, Colin, The Ottoman Empire. London: Palgrave/Macmillan, 2002.
External links
Encyclopædia Britannica
1404 births
1451 deaths
15th-century Ottoman sultans
Burials in Turkey
Muslims of the Crusade of Varna
Ottoman people of the Byzantine–Ottoman wars
People from Amasya
Murad III

Murad III (Ottoman Turkish: مراد ثالث Murād-i sālis, Turkish: III. Murat; 4 July 1546 – 16 January 1595) was the Sultan of the Ottoman Empire from 1574 until his death in 1595.
Early life
Born in Manisa on 4 July 1546, Şehzade Murad was the oldest son of Şehzade Selim and his powerful wife Nurbanu Sultan. He received a good education and learned Arabic and Persian. After his ceremonial circumcision in 1557, Murad's grandfather, Sultan Suleiman I, appointed him sancakbeyi (governor) of Akşehir in 1558. At the age of 18 he was appointed sancakbeyi of Saruhan. Suleiman died in 1566, when Murad was 20, and Murad's father became the new sultan, Selim II. Selim II broke with tradition by sending only his oldest son out of the palace to govern a province, assigning Murad to Manisa.
Reign
Selim died in 1574 and was succeeded by Murad, who began his reign by having his five younger brothers strangled. His authority was undermined by harem influences – more specifically, those of his mother and later of his favorite wife Safiye Sultan, often to the detriment of Sokollu Mehmed Pasha's influence on the court. Under Selim II power had only been maintained by the genius of the powerful Grand Vizier Sokollu Mehmed Pasha, who remained in office until his assassination in October 1579. During Murad's reign the northern borders with the Habsburg Monarchy were defended by the Bosnian governor Hasan Predojević. The reign of Murad III was marked by exhausting wars on the empire's western and eastern fronts. The Ottomans also suffered defeats in battles such as the Battle of Sisak.
Expedition to Morocco
Abd al-Malik became a trusted member of the Ottoman establishment during his exile. He made the proposition of making Morocco an Ottoman vassal in exchange for the support of Murad III in helping him gain the Saadi throne.
With an army of 10,000 men, most of them Turks, Ramazan Pasha and Abd al-Malik left Algiers to install Abd al-Malik as an Ottoman vassal ruler of Morocco. Ramazan Pasha conquered Fez, which caused the Saadi sultan to flee to Marrakesh; that city was also conquered, and Abd al-Malik then assumed rule over Morocco as a client of the Ottomans.
Abd al-Malik made a deal with the Ottoman troops, paying them a large amount of gold and sending them back to Algiers, suggesting a looser concept of vassalage than Murad III may have envisioned. Murad's name was recited in the Friday prayer and stamped on the coinage, the two traditional signs of sovereignty in the Islamic world. The reign of Abd al-Malik is understood as a period of Moroccan vassalage to the Ottoman Empire. Abd al-Malik died in 1578 and was succeeded by his brother Ahmad al-Mansur, who formally recognised the suzerainty of the Ottoman sultan at the start of his reign while remaining de facto independent; however, he stopped minting coins in Murad's name, dropped his name from the khutbah, and declared his full independence in 1582.
War with the Safavids
The Ottomans had been at peace with the neighbouring rival Safavid Empire since 1555, under the Treaty of Amasya, which had for a time settled their border disputes. But in 1577 Murad declared war, starting the Ottoman–Safavid War (1578–90) and seeking to take advantage of the chaos in the Safavid court after the death of Shah Tahmasp I. Murad was influenced by the viziers Lala Kara Mustafa Pasha and Sinan Pasha and disregarded the opposing counsel of Grand Vizier Sokollu. The war dragged on for twelve years, ending with the Treaty of Constantinople (1590), which resulted in significant though temporary territorial gains for the Ottomans.
Ottoman Activity in the Horn of Africa
During his reign an Ottoman admiral named Ali Bey succeeded in establishing Ottoman supremacy in numerous cities on the Swahili coast between Mogadishu and Kilwa. Ottoman suzerainty was recognised in Mogadishu in 1585, and Ottoman supremacy was also established in other cities such as Barawa, Mombasa, Kilifi, Pate, Lamu and Faza.
Financial Affairs
Murad's reign was a time of financial stress for the Ottoman state. To keep up with changing military techniques, the Ottomans trained infantrymen in the use of firearms, paying them directly from the treasury. By 1580 an influx of silver from the New World had caused high inflation and social unrest, especially among Janissaries and government officials who were paid in debased currency. Deprivation from the resulting rebellions, coupled with the pressure of over-population, was especially felt in Anatolia. Competition for positions within the government grew fierce, leading to bribery and corruption. Ottoman and Habsburg sources accuse Murad himself of accepting enormous bribes, including 20,000 ducats from a statesman in exchange for the governorship of Tripoli and Tunisia, thus outbidding a rival who had tried bribing the Grand Vizier.
His reign also saw excessive inflation: the silver content of the coinage was repeatedly debased, and food prices rose. Where 400 coins should have been cut from 600 dirhams of silver, 800 were cut, which meant 100 percent inflation. For the same reason the purchasing power of wage earners was halved, and the consequence was an uprising.
English Pact
Numerous envoys and letters were exchanged between Elizabeth I and Sultan Murad III. In one correspondence, Murad entertained the notion that Islam and Protestantism had "much more in common than either did with Roman Catholicism, as both rejected the worship of idols", and argued for an alliance between England and the Ottoman Empire. To the dismay of Catholic Europe, England exported tin and lead (for cannon-casting) and ammunition to the Ottoman Empire, and Elizabeth seriously discussed joint military operations with Murad III during the outbreak of war with Spain in 1585, as Francis Walsingham was lobbying for a direct Ottoman military involvement against the common Spanish enemy. This diplomacy would be continued under Murad's successor Mehmed III, by both the sultan and Safiye Sultan alike.
Personal life
Palace life
Following the example of his father Selim II, Murad was the second Ottoman sultan who never went on campaign during his reign, instead spending it entirely in Constantinople. During the final years of his reign, he did not even leave Topkapı Palace. For two consecutive years he did not attend the Friday procession to the imperial mosque—an unprecedented breaking of custom. The Ottoman historian Mustafa Selaniki wrote that whenever Murad planned to go out to Friday prayer, he changed his mind after hearing of alleged plots by the Janissaries to dethrone him once he left the palace. Murad withdrew from his subjects and spent the majority of his reign keeping to the company of few people and abiding by a daily routine structured by the five daily Islamic prayers. Murad's personal physician Domenico Hierosolimitano described a typical day in the life of the sultan.
Murad's sedentary lifestyle and lack of participation in military campaigns earned him the disapproval of Mustafa Âlî and Mustafa Selaniki, the major Ottoman historians who lived during his reign. Their negative portrayals of Murad influenced later historians. Both historians also accused Murad of sexual excess.
Children
Before becoming sultan, Murad had been faithful to Safiye Sultan, his Albanian concubine who had given him a son, Mehmed, and two daughters. His monogamy was disapproved of by his mother Nurbanu Sultan, who worried that Murad needed more sons to succeed him in case Mehmed died young. She also worried about Safiye's influence over her son and the Ottoman dynasty. Five or six years after his accession to the throne, Murad was given a pair of concubines by his sister Ismihan. Upon attempting sexual intercourse with them, he proved impotent. "The arrow [of Murad], [despite] keeping with his created nature, for many times [and] for many days has been unable to reach at the target of union and pleasure," wrote Mustafa Ali. Nurbanu accused Safiye and her retainers of causing Murad's impotence with witchcraft. Several of Safiye's servants were tortured by eunuchs in order to discover a culprit. Court physicians, working under Nurbanu's orders, eventually prepared a successful cure, but a side effect was a drastic increase in sexual appetite: by the time Murad died, he was said to have fathered over a hundred children. Nineteen of these were executed by Mehmed III when he became sultan.
Women at court
Influential ladies of his court included his mother Nurbanu Sultan; his sister Ismihan Sultan, wife of grand vizier Sokollu Mehmed Pasha; and his musahibes (favourites): Canfeda Hatun, mistress of the housekeeping; Raziye Hatun, mistress of financial affairs; and the poet Hubbi Hatun. Finally, after the deaths of his mother and older sister, his wife Safiye Sultan was the only influential woman at court.
Eunuchs at court
Before Murad, the palace eunuchs had been mostly white. This began to change in 1582 when Murad gave an important position to a black eunuch. By 1592, the eunuchs' roles in the palace were racially determined: black eunuchs guarded the Sultan and the women, and white eunuchs guarded the male pages in another part of the palace. The chief black eunuch was known as the Kizlar Agha, and the chief white eunuch was known as the Kapi Agha.
Murad and the arts
Murad took great interest in the arts, particularly miniatures and books. He actively supported the court Society of Miniaturists, commissioning several volumes including the Siyer-i Nebi, the most heavily illustrated biographical work on the life of the Islamic prophet Muhammad, the Book of Skills, the Book of Festivities and the Book of Victories. He had two large alabaster urns transported from Pergamon and placed on either side of the nave in the Hagia Sophia in Constantinople. A large wax candle dressed in tin, which he donated to the Rila Monastery in Bulgaria, is on display in the monastery museum.
Murad also furnished the content of Kitabü'l-Menamat (The Book of Dreams), addressed to Murad's spiritual advisor, Şüca Dede. A collection of first-person accounts, it tells of Murad's spiritual experiences as a Sufi disciple. Compiled from thousands of letters Murad wrote describing his dream visions, it presents a hagiographic self-portrait. Murad dreams of various activities, including being stripped naked by his father and having to sit on his lap, single-handedly killing 12,000 infidels in battle, walking on water, ascending to heaven, and producing milk from his fingers. He frequently encounters the Prophet Muhammad, and in one dream sits in the Prophet's lap and kisses his mouth.
In another letter addressed to Şüca Dede, Murad wrote "I wish that God, may He be glorified and exalted, had not created this poor servant as the descendant of the Ottomans so that I would not hear this and that, and would not worry. I wish I were of unknown pedigree. Then, I would have one single task, and could ignore the whole world."
The diplomatic edition of these dream letters has recently been published by Ozgen Felek in Turkish.
Death
Murad died of what are assumed to be natural causes in the Topkapı Palace and was buried in a tomb next to the Hagia Sophia. The mausoleum contains 54 sarcophagi, those of the sultan and of the wives and children buried there with him. He was also responsible for changing the burial customs of the sultans' mothers: Murad had his mother Nurbanu buried next to her husband Selim II, making her the first consort to share a sultan's tomb.
Family
Consorts
Murad's named consorts were:
Safiye Sultan, an ethnic Albanian; Haseki Sultan of Murad and Valide Sultan of Mehmed III;
Şahıhuban Hatun;
Zerefşan Hatun;
Şahi Hatun;
Şemsiruhsar Hatun, mother of Rukiye Sultan;
Nazperver Hatun;
Sons
Murad had twenty-two sons:
Sultan Mehmed III (26 May 1566 – 22 December 1603, Topkapı Palace, Constantinople, buried in Mehmed III Mausoleum, Hagia Sophia Mosque, Constantinople), became the next sultan;
Şehzade Mahmud (1568, Manisa Palace, Manisa – 1581, Topkapı Palace, Istanbul, buried in Selim II Mausoleum, Hagia Sophia Mosque);
Şehzade Mustafa (1578-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Osman (1573-died 1587, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Bayezid (1579-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Selim (1581-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Cihangir (1585-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Abdullah (1580-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Abdurrahman (1585-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Hasan (1586-died 1591, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Ahmed (1586-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Yakub (1587-murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Alemşah (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Yusuf (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Hüseyin (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Korkud (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Ali (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Ishak (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Ömer (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Alaeddin (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Davud (murdered 28 January 1595, Topkapı Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Suleiman (born and died in 1585, Topkapi Palace, Constantinople, buried in Murad III Mausoleum, Hagia Sophia Mosque);
Şehzade Yahya (1585, Manisa Palace, Manisa – 1648, Kotor, Montenegro, buried in Kotor, Montenegro), was claimed to be a son of Murad III;
Daughters
Murad had twenty-eight daughters, of whom sixteen died of plague in 1597. The rest, who were married, included the following:
Hümaşah Sultan, married only once, to Damad Nişar Mustafazade Mehmed Pasha (died 1586); possibly Safiye's eldest daughter.
Ayşe Sultan (died 15 May 1605, buried in Mehmed III Mausoleum, Hagia Sophia Mosque), daughter with Safiye, married firstly on 20 May 1586, to Damat Ibrahim Pasha, married secondly on 5 April 1602, to Damad Yemişçi Hasan Pasha, married thirdly on 29 June 1604, to Damad Güzelce Mahmud Pasha;
Fatma Sultan (died 1620 buried in Murad III Mausoleum, Hagia Sophia Mosque), daughter with Safiye, married firstly on 6 December 1593, to Damad Halil Pasha, married secondly December 1604, to Damad Hızır Pasha;
Mihrimah Sultan (buried in Murad III Mausoleum, Hagia Sophia Mosque), daughter with Safiye, married in 1613 to Damad Mirahur Ahmed Pasha, married secondly to Damad Çerkes Mehmed Pasha;
Rukiye Sultan (born in 1586, buried in Murad III Mausoleum, Hagia Sophia Mosque), daughter with Şemsiruhsar Hatun, married to Damad Nakkaş Hasan Pasha;
Mihriban Sultan (born in 1595, buried in Murad III Mausoleum, Hagia Sophia Mosque), daughter with Nazperver Hatun, married in 1613 to Damad Kapıcıbaşı Topal Mehmed Agha;
Fahriye Sultan (died in 1641, buried in Murad III Mausoleum, Hagia Sophia Mosque), married to Damad Sofu Bayram Pasha, sometime Governor of Bosnia;
Hatice Sultan (1589-?), was married and had three children who died young.
Amriye Sultan
In fiction
Orhan Pamuk's historical novel Benim Adım Kırmızı (My Name is Red, 1998) takes place at the court of Murad III, during nine snowy winter days of 1591, which the writer uses in order to convey the tension between East and West.
The Harem Midwife by Roberta Rich, a historical novel set in Constantinople in 1578, follows Hannah, a midwife who tends to many of the women in Sultan Murad III's harem.
In popular culture
In the 2011 TV series Muhteşem Yüzyıl, Murad III is portrayed by Turkish actor Serhan Onat.
References
External links
Ancestry of Sultana Nur-Banu (Cecilia Venier-Baffo)
1546 births
1595 deaths
16th-century Ottoman sultans
Turks of the Ottoman Empire
Custodian of the Two Holy Mosques
Ottoman dynasty
Mehmed III

Mehmed III (Meḥmed-i sālis; 26 May 1566 – 22 December 1603) was Sultan of the Ottoman Empire from 1595 until his death in 1603.
Early life
Mehmed was born at the Manisa Palace in 1566, during the reign of his great-grandfather, Suleiman the Magnificent. He was the son of Murad III, himself the son of Selim II, who was the son of Sultan Suleiman and Hurrem Sultan. His mother was Safiye Sultan, an Albanian from the Dukagjin highlands. His great-grandfather Suleiman I died the year he was born and his grandfather became the new sultan, Selim II. His grandfather Selim II died when Mehmed was eight, and Mehmed's father, Murad III, became sultan in 1574. Murad died in 1595, when Mehmed was 28 years old.
Mehmed spent most of his time in Manisa with his father Murad and mother Safiye; his first teacher was Ibrahim Efendi. His circumcision took place on 29 May 1582, when he was 16 years old.
Reign
Fratricide
Upon ascending to the throne, Mehmed III ordered that all of his nineteen brothers be executed. They were strangled by his royal executioners, many of whom were deaf, mute or 'half-witted' to ensure absolute loyalty. Fratricidal successions were not unprecedented, as sultans would often have dozens of children with their concubines.
Power struggle in Constantinople
Mehmed III was an idle ruler, leaving government to his mother Safiye Sultan, the valide sultan. His first major problem was the rivalry between two of his viziers, Serdar Ferhad Pasha and Koca Sinan Pasha, and their supporters. His mother and her son-in-law Damat Ibrahim Pasha supported Koca Sinan Pasha and prevented Mehmed III from taking control of the issue himself. The issue grew to cause major disturbances by janissaries. On 7 July 1595, Mehmed III finally sacked Serdar Ferhad Pasha from the position of Grand Vizier due to his failure in Wallachia and replaced him with Sinan.
Austro-Hungarian War
The major event of his reign was the Austro-Ottoman War in Hungary (1593–1606). Ottoman defeats in the war caused Mehmed III to take personal command of the army, the first sultan to do so since Suleiman I in 1566. Accompanied by the Sultan, the Ottomans conquered Eger in 1596. Upon hearing of the Habsburg army's approach, Mehmed wanted to dismiss the army and return to Istanbul. However, the Ottomans eventually decided to face the enemy and defeated the Habsburg and Transylvanian forces at the Battle of Keresztes (known in Turkish as the Battle of Haçova), during which the Sultan had to be dissuaded from fleeing the field halfway through the battle. Upon returning to Istanbul in victory, Mehmed told his viziers that he would campaign again. The next year the Venetian Bailo in Istanbul noted, "the doctors declared that the Sultan cannot leave for a war on account of his bad health, produced by excesses of eating and drinking".
In reward for his services at the war, Cigalazade Yusuf Sinan Pasha was made Grand Vizier in 1596. However, with pressure from the court and his mother, Mehmed reinstated Damat Ibrahim Pasha to this position shortly afterward.
However, the victory at the Battle of Keresztes was soon set back by some important losses, including the loss of Győr () to the Austrians and the defeat of the Ottoman forces led by Hafız Ahmet Pasha by the Wallachian forces under Michael the Brave in Nikopol in 1599. In 1600, Ottoman forces under Tiryaki Hasan Pasha captured Nagykanizsa after a 40-day siege and later successfully held it against a much greater attacking force in the Siege of Nagykanizsa.
Jelali revolts
Another major event of his reign was the Jelali revolts in Anatolia. Karayazıcı Abdülhalim, a former Ottoman official, captured the city of Urfa and declared himself a sultan in 1600. The rumors of his claim to the throne spread to Constantinople and Mehmed ordered the rebels to be treated harshly to dispel the rumors, among these, was the execution of Hüseyin Pasha, whom Karayazıcı Abdülhalim styled as Grand Vizier. In 1601, Abdülhalim fled to the vicinity of Samsun after being defeated by the forces under Sokulluzade Hasan Pasha, the governor of Baghdad. However, his brother, Deli Hasan, killed Sokulluzade Hasan Pasha and defeated troops under the command of Hadım Hüsrev Pasha. He then marched on to Kütahya, captured and burned the city.
Relationship with England
In 1599, the fourth year of Mehmed III's reign, Queen Elizabeth I sent a convoy of gifts to the Ottoman court. These gifts had originally been intended for the sultan's predecessor, Murad III, who died before they arrived. Included among them was a large jewel-studded clockwork organ, assembled on the slope of the Royal Private Garden by a team of engineers including Thomas Dallam. The organ took many weeks to complete and featured dancing sculptures, such as a flock of blackbirds that sang and shook their wings at the end of the music. Also among the English gifts was a ceremonial coach, accompanied by a letter from the Queen to Mehmed's mother, Safiye Sultan. These gifts were intended to cement relations between the two countries, building on the trade agreement signed in 1581 that gave English merchants priority in the Ottoman region. Under the looming threat of Spanish military power, England was eager to secure an alliance with the Ottomans, as the two nations together were capable of dividing that power. Elizabeth's gifts arrived aboard a large 27-gun merchantman that Mehmed personally inspected, a clear display of English maritime strength that prompted him to build up his own fleet over the following years of his reign. The Anglo-Ottoman alliance was never consummated, however, as relations between the nations grew stagnant owing to anti-European sentiment arising from the worsening Austro-Ottoman War and the deaths of Safiye Sultan's interpreter and the pro-English chief Hasan Pasha.
Death
Mehmed died on 22 December 1603 at the age of 37. According to one source, the cause of his death was the distress caused by the death of his son, Şehzade Mahmud. According to another source, he died either of plague or of stroke. He was buried in Hagia Sophia Mosque. He was succeeded by his son Ahmed I as the new sultan.
Family
Consorts
None of Mehmed's consorts are listed as haseki sultan in Ottoman palace archives. Known consorts were:
Halime Sultan (buried in Mustafa I Mausoleum, Hagia Sophia Mosque, Istanbul);
Handan Sultan (died 9 November 1605, Topkapı Palace, Istanbul, buried in Mehmed III Mausoleum, Hagia Sophia Mosque);
A consort who died in 1597, during an outbreak of plague;
Sons
Şehzade Selim (1585, Manisa Palace, Manisa – 20 April 1597, Topkapı Palace, Istanbul, buried in Hagia Sophia Mosque) - with Handan;
Şehzade Süleyman (born 1586, Manisa Palace, Manisa, died young, buried in Hagia Sophia Mosque) - with Handan;
Şehzade Mahmud (born 1588, Manisa Palace, Manisa – executed by Mehmed III, 7 June 1603, Topkapı Palace, Istanbul, buried in Şehzade Mahmud Mausoleum, Şehzade Mosque) - with Halime;
Sultan Ahmed I (18 April 1590, Manisa Palace, Manisa – 22 November 1617, Topkapı Palace, Istanbul, buried in Ahmed I Mausoleum, Sultan Ahmed Mosque), Sultan of the Ottoman Empire - with Handan;
Sultan Mustafa I (1591, Manisa Palace, Manisa – 20 January 1639, Eski Palace, Istanbul, buried in Mustafa I Mausoleum, Hagia Sophia Mosque), Sultan of the Ottoman Empire - with Halime;
A son who died in the second year of his life, after Selim's death;
Şehzade Cihangir (1599, Topkapı Palace, Istanbul – 1602, Topkapı Palace, Istanbul, buried in Hagia Sophia Mosque);
Şehzade Osman (died aged three or four);
Daughters
Şah Sultan, married in 1604 to Damat Kara Davud Pasha, Grand Vizier - with Halime Sultan;
Hatice Sultan, married firstly in 1604 to Damat Mirahur Mustafa Pasha, married secondly in 1612 to Damat Mahmud Pasha, son of Cigalazade Sinan Pasha; with Halime Sultan;
A daughter, married firstly in 1604 to Damat Tiryaki Hasan Pasha, married secondly in 1616 to Damat Ali Pasha, Vizier;
A daughter, married in 1612 to Damat Halil Pasha;
Ayşe Sultan
References
External links
1566 births
1603 deaths
16th-century Ottoman sultans
17th-century Ottoman sultans
Turks of the Ottoman Empire
Custodian of the Two Holy Mosques
Ottoman dynasty
People of the Long Turkish War |
19991 | https://en.wikipedia.org/wiki/Mustafa%20I | Mustafa I | Mustafa I (; ; 1591 – 20 January 1639), called Mustafa the Saint (Veli Mustafa) during his second reign and often called Mustafa the Mad (Deli Mustafa) by modern historians, was the son of Sultan Mehmed III and Halime Sultan. He was the Sultan of the Ottoman Empire from 22 November 1617 to 26 February 1618 and from 20 May 1622 to 10 September 1623.
Early life
Mustafa was born in the Manisa Palace, as the younger half-brother of Sultan Ahmed I (1603–1617). His mother was Halime Sultan, an Abkhazian lady.
Before 1603 it was customary for an Ottoman Sultan to have his brothers executed shortly after he gained the throne (Mustafa's father Mehmed III had executed 19 of his own brothers). But when the thirteen-year-old Ahmed I was enthroned in 1603, he spared the life of the twelve-year-old Mustafa.
A factor in Mustafa's survival is the influence of Kösem Sultan (Ahmed's favorite consort), who may have wished to preempt the succession of Sultan Osman II, Ahmed's first-born son from another concubine. If Osman became Sultan, he would likely try to execute his half-brothers, the sons of Ahmed and Kösem. (This scenario later became a reality when Osman II executed his brother Mehmed in 1621.) However, the reports of foreign ambassadors suggest that Ahmed actually liked his brother.
Until Ahmed's death in 1617, Mustafa lived in the Old Palace, along with his mother, and grandmother Safiye Sultan.
First reign (1617–1618)
Ahmed's death created a dilemma never before experienced by the Ottoman Empire. Multiple princes were now eligible for the Sultanate, and all of them lived in Topkapı Palace. A court faction headed by the Şeyhülislam Esad Efendi and Sofu Mehmed Pasha (who represented the Grand Vizier when he was away from Constantinople) decided to enthrone Mustafa instead of Ahmed's son Osman. Sofu Mehmed argued that Osman was too young to be enthroned without causing adverse comment among the populace. The Chief Black Eunuch Mustafa Agha objected, citing Mustafa's mental problems, but he was overruled. Mustafa's rise created a new succession principle of seniority that would last until the end of the Empire. It was the first time an Ottoman Sultan was succeeded by his brother instead of his son. Because of Mustafa's mental condition, his mother Halime Sultan became valide sultan and acted as regent, wielding great power and exercising it directly.
It was hoped that regular social contact would improve Mustafa's mental health, but his behavior remained eccentric. He pulled off the turbans of his viziers and yanked their beards. Others observed him throwing coins to birds and fish. The Ottoman historian İbrahim Peçevi wrote "this situation was seen by all men of state and the people, and they understood that he was psychologically disturbed."
Deposition
Mustafa was never more than a tool of court cliques at the Topkapı Palace. In 1618, after a short rule, another palace faction deposed him in favour of his young nephew Osman II (1618–1622), and Mustafa was sent back to the Old Palace. The conflict between the Janissaries and Osman II presented him with a second chance. After a Janissary rebellion led to the deposition and assassination of Osman II in 1622, Mustafa was restored to the throne and held it for another year.
Alleged mental instability
Nevertheless, according to Baki Tezcan, there is not enough evidence to properly establish that Mustafa was mentally imbalanced when he came to the throne. Mustafa "made a number of excursions to the arsenal and the navy docks, examining various sorts of arms and taking an active interest in the munitions supply of the army and the navy." One of the dispatches of Baron de Sancy, the French ambassador, "suggested that Mustafa was interested in leading the Safavid campaign himself and was entertaining the idea of wintering in Konya for that purpose."
Moreover, one contemporary observer provides an explanation of the coup which does not mention the incapacity of Mustafa. Baron de Sancy ascribes the deposition to a political conspiracy between the grand admiral Ali Pasha, who had been angered by his removal from office upon Sultan Mustafa's accession, and the Chief Black Eunuch Mustafa Agha. They may have circulated rumors of the sultan's mental instability after the coup in order to legitimize it.
Second reign (1622–1623)
He commenced his reign by executing all those who had taken any share in the murder of Sultan Osman. Hoca Ömer Efendi, the chief of the rebels, the kızlar Agha Suleiman Agha, the vizier Dilaver Pasha, the Kaim-makam Ahmed Pasha, the defterdar Baki Pasha, the segban-bashi Nasuh Agha, and the general of the janissaries Ali Agha, were cut into pieces.
The epithet "Veli" (meaning "saint") was used in reference to him during his reign.
His mental condition unimproved, Mustafa was a puppet controlled by his mother and brother-in-law, the grand vizier Kara Davud Pasha. He believed that Osman II was still alive and was seen searching for him throughout the palace, knocking on doors and crying out to his nephew to relieve him from the burden of sovereignty. "The present emperor being a fool" (according to English Ambassador Sir Thomas Roe), he was compared unfavorably with his predecessor. In fact, his mother Halime Sultan was the de facto co-ruler, as valide sultan of the Ottoman Empire.
Deposition and last years
Political instability was generated by conflict between the Janissaries and the sipahis (Ottoman cavalry), followed by the Abaza rebellion, which occurred when the governor-general of Erzurum, Abaza Mehmed Pasha, decided to march to Istanbul to avenge the murder of Osman II. The regime tried to end the conflict by executing Kara Davud Pasha, but Abaza Mehmed continued his advance. Clerics and the new Grand Vizier (Kemankeş Kara Ali Pasha) prevailed upon Mustafa's mother to allow the deposition of her son. She agreed, on condition that Mustafa's life would be spared.
The 11-year-old Murad IV, son of Ahmed I and Kösem, was enthroned on 10 September 1623. In return for her consent to his deposition, the request of Mustafa's mother that he be spared execution was granted. Mustafa was sent along with his mother to the Eski (old) Palace.
Death
One source states that Mustafa was executed on the orders of his nephew, Sultan Murad IV, on 20 January 1639, in order to end the Ottoman dynasty and prevent power from passing to his mother, Kösem Sultan. Another source states that he died of epilepsy brought on by having spent 34 of his 48 years in confinement. He is buried in the courtyard of the Hagia Sophia.
See also
Transformation of the Ottoman Empire
Notes
External links
1591 births
1639 deaths
17th-century Ottoman sultans
Ottoman people of the Ottoman–Persian Wars
Turks of the Ottoman Empire
Ottoman dynasty
People of the Ottoman Empire of Abkhazian descent |
19992 | https://en.wikipedia.org/wiki/Murad%20IV | Murad IV | Murad IV (, Murād-ı Rābiʿ; , 27 July 1612 – 8 February 1640) was the Sultan of the Ottoman Empire from 1623 to 1640, known both for restoring the authority of the state and for the brutality of his methods. Murad IV was born in Constantinople, the son of Sultan Ahmed I (r. 1603–17) and Kösem Sultan. He was brought to power by a palace conspiracy in 1623, and he succeeded his uncle Mustafa I (r. 1617–18, 1622–23). He was only 11 when he ascended the throne. His reign is most notable for the Ottoman–Safavid War (1623–1639), of which the outcome would partition the Caucasus between the two Imperial powers for around two centuries, while it also roughly laid the foundation for the current Turkey–Iran–Iraq borders.
Early life
Murad IV was born on 27 July 1612 to Ahmed I (reigned 1603–1617) and his consort and later wife Kösem Sultan. After his father's death, when he was six years old, he was confined in the Kafes with his brothers Suleiman, Kasim, Bayezid and Ibrahim.
Grand Vizier Kemankeş Ali Pasha and Şeyhülislam Yahya Efendi were deposed from their positions. The next day the young sultan was taken to the Eyüp Sultan Mausoleum, where the swords of Muhammad and Yavuz Sultan Selim were girded on him. Five days later he was circumcised.
Reign
Early reign (1623–32)
Murad IV was for a long time under the control of his relatives and during his early years as Sultan, his mother, Kösem Sultan, essentially ruled through him. The Empire fell into anarchy; the Safavid Empire invaded Iraq almost immediately, Northern Anatolia erupted in revolts, and in 1631 the Janissaries stormed the palace and killed the Grand Vizier, among others. Murad IV feared suffering the fate of his elder brother, Osman II (1618–22), and decided to assert his power.
At the age of 16 in 1628, he had his brother-in-law (his sister Gevherhan Sultan's husband, who was also the former governor of Egypt), Kara Mustafa Pasha, executed for a claimed action "against the law of God".
After Grand Vizier Çerkes Mehmed Pasha died at Tokat during the winter, the beylerbey of Diyarbekir, Hafız Ahmed Pasha, succeeded him as Grand Vizier on 8 February 1625.
An epidemic that started in the summer of 1625, known as the plague of Bayrampaşa, spread until it threatened the population of Istanbul. On average, a thousand people died every day. People fled to the Okmeydanı to escape the plague; the situation was even worse in the countryside outside Istanbul.
Absolute rule and imperial policies (1632–1640)
Murad IV tried to quell the corruption that had grown during the reigns of previous Sultans, and that had not been checked while his mother was ruling through proxy.
Murad IV banned alcohol, tobacco, and coffee in Constantinople and ordered execution for breaking this ban. He would reportedly patrol the streets and the lowest taverns of Constantinople in civilian clothes at night, policing the enforcement of his command by casting off his disguise on the spot and beheading the offender with his own hands. Rivaling the exploits of Selim the Grim, he would sit in a kiosk by the water near his Seraglio Palace and shoot arrows at any passerby or boatman who rowed too close to his imperial compound, seemingly for sport. He restored judicial regulations by imposing very strict punishments, including execution; he once had a grand vizier strangled because the official had beaten his mother-in-law.
Fire of 1633
On 2 September 1633, the great Cibali fire broke out, burning a fifth of the city. It started during the day, when a fire lit by a caulker working on a ship spread to the walls. The fire advanced on the city in three branches: one descended toward the sea; another turned at Zeyrek and ran to the Atpazarı; the others ruined the Büyükkaraman, Küçükkaraman, Sultanmehmet (Fatih), Saraçhane, and Sarıgüzel districts. The sultan, his viziers, the bostancıs, and the Janissaries could do little but watch. The most beautiful districts of Istanbul were ruined: from the Yeniodalar and Mollagürani districts and the Fener gate to Sultanselim; the Mesihpaşa, Bali Pasha, and Lutfi Pasha mosques; the Şahı Buhan Palace; from Unkapanı to the Atpazarı; the Bostanzade houses; and the Sofular Bazaar. The fire lasted 30 hours and was extinguished only after the wind died down.
The war against Safavid Iran
Murad IV's reign is most notable for the Ottoman–Safavid War (1623–39) against Persia (today Iran), in which Ottoman forces managed to conquer Azerbaijan, occupying Tabriz and Hamadan, and capturing Baghdad in 1638. The Treaty of Zuhab that followed the war generally reconfirmed the borders as agreed by the Peace of Amasya, with Eastern Armenia, Eastern Georgia, Azerbaijan, and Dagestan staying Persian, while Western Armenia and Western Georgia stayed Ottoman. Mesopotamia was irrevocably lost for the Persians. The borders fixed as a result of the war are more or less the same as the present border line between Turkey, Iraq and Iran.
During the siege of Baghdad in 1638, the city held out for forty days but was compelled to surrender.
Murad IV himself commanded the Ottoman army in the last years of the war.
Relations with the Mughal Empire
While he was encamped in Baghdad, Murad IV is known to have met ambassadors of the Mughal Emperor Shah Jahan, Mir Zarif and Mir Baraka, who presented 1000 pieces of finely embroidered cloth and even armor. Murad IV gave them the finest weapons, saddles and Kaftans and ordered his forces to accompany the Mughals to the port of Basra, where they set sail to Thatta and finally Surat.
Architecture
Murad IV put emphasis on architecture and in his period many monuments were erected. The Baghdad Kiosk, built in 1635, and the Revan Kiosk, built in 1638 in Yerevan, were both built in the local styles. Some of the others include the Kavak Sarayı pavilion; the Meydanı Mosque; the Bayram Pasha Dervish Lodge, Tomb, Fountain, and Primary School; and the Şerafettin Mosque in Konya.
Music and poetry
Murad IV wrote many poems under the pen name "Muradi". He also liked testing people with riddles: once he wrote a poetic riddle and announced that whoever came up with the correct answer would receive a generous reward. Cihadi Bey, a poet from the Enderun School, gave the correct answer and was promoted.
Murad IV was also a composer. He has a composition called "Uzzal Peshrev".
Family
Consorts
Very little is known about the concubines of Murad IV, principally because he did not leave sons who survived his death to reach the throne, but many historians consider Ayşe Sultan as his only consort until the very end of Murad's seventeen-year reign, when a second Haseki appeared in the records. It is possible that Murad had only a single concubine until the advent of the second, or that he had a number of concubines but singled out only two as Haseki.
Sons
Şehzade Ahmed (21 December 1628 – 1639, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Numan (1628 – 1629, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Orhan (1629 – 1629, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Hasan (March 1631 – 1632, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Suleiman (2 February 1632 – 1635, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Mehmed (11 August 1633 – 11 January 1640, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Osman (9 February 1634 – 1635, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Alaeddin (26 August 1635 – 1637, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Selim (1637 – 1640, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Şehzade Mahmud (15 May 1638 – 1638, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul)
Daughters
Murad had several daughters, among whom were:
Kaya Sultan (1633–1659, buried in Mustafa I Mausoleum, Hagia Sophia Mosque, Istanbul), married August 1644, Melek Ahmed Pasha;
Safiye Sultan (buried in Ahmed I Mausoleum, Blue Mosque, Istanbul), married 1659, Sarı Hasan Pasha;
Rukiye Sultan (died 1696, buried in Ahmed I Mausoleum, Blue Mosque, Istanbul), married firstly 1663, Şeytan Divrikli Ibrahim Pasha, Vizier, married secondly 1693 Gürcü Mehmed Pasha.
Gevherhan Sultan (born in February 1630)
Hanzade Sultan (1631–1675)
Death
Murad IV died from cirrhosis in Constantinople at the age of 27 in 1640.
Rumours had circulated that on his deathbed, Murad IV ordered the execution of his mentally disabled brother, Ibrahim (reigned 1640–48), which would have meant the end of the Ottoman line. However, the order was not carried out.
In popular culture
In the TV series Muhteşem Yüzyıl: Kösem, Murad IV is portrayed by Cağan Efe Ak as a child, and Metin Akdülger as the Sultan.
See also
Transformation of the Ottoman Empire
Polish–Ottoman War (1633–34)
Koçi Bey
References
Sources
External links
1612 births
1640 deaths
Ottoman sultans born to Greek mothers
Deaths from cirrhosis
Modern child rulers
17th-century Ottoman sultans
Turks of the Ottoman Empire
Ottoman people of the Ottoman–Persian Wars
Ottoman dynasty |
19994 | https://en.wikipedia.org/wiki/Masamune%20Shirow | Masamune Shirow | , better known by his pen name , is a Japanese manga artist. Shirow is best known for the manga Ghost in the Shell, which has since been turned into three theatrical anime movies, two anime television series, an anime television movie, an anime ONA series, a theatrical live action movie, and several video games.
Life and career
Born in the Hyōgo Prefecture capital city of Kobe, he studied oil painting at Osaka University of Arts. While in college, he developed an interest in manga, which led him to create his own complete work, Black Magic, which was published in the manga dōjinshi Atlas. His work caught the eye of Seishinsha President Harumichi Aoki, who offered to publish him.
The result was best-selling manga Appleseed, a full volume of densely plotted drama taking place in an ambiguous future. The story was a sensation, and won the 1986 Seiun Award for Best Manga. After a professional reprint of Black Magic and a second volume of Appleseed, he released Dominion in 1986. Two more volumes of Appleseed followed before he began work on Ghost in the Shell.
In 2007, he collaborated again with Production I.G to co-create the original concept for the anime television series Ghost Hound, Production I.G's 20th anniversary project. A further original collaboration with Production I.G began airing in April 2008, titled Real Drive.
Bibliography
Manga
series:
(1985–1986)
"Dominion: Phantom of the Audience" (1988), short story
(1995)
series:
(1989–1990)
1.5. (1991–1996)
1.6. (2019), with Junichi Fujisaku
(1997)
Stand-alones:
(1983)
(1985–1989)
(1986)
(1987)
(1990–1991)
Exon Depot (1992)
(1992–1994)
(2007–2008)
(2008)
Art books
A substantial amount of Shirow's work has been released in art book or poster book format. The following is an incomplete list.
Intron Depot 1 (1992) (science fiction–themed color illustration art book collecting his work from 1981 to 1991)
Intron Depot 2: Blades (1998) (fantasy-themed color illustration art book featuring female characters with armor and edged weapons)
Cybergirls Portfolio (2000)
Intron Depot 3: Ballistics (2003) (military-themed color illustration and CG art book featuring female characters with guns)
Intron Depot 4: Bullets (2004) (color illustration art book collecting his work between 1995 and 1999)
Intron Depot 5: Battalion (2012) (game & animation artwork covering the period 2001–2009)
Intron Depot 6: Barb Wire 01 (2013) (illustrations for novels 2007–2010)
Intron Depot 7: Barb Wire 02 (2013) (illustrations for novels 2007–2010)
Intron Depot 8: Bomb Bay (2018) (illustrations 1992-2009)
Intron Depot 9: Barrage Fire (2019) (illustrations 1998-2017)
Intron Depot 10: Bloodbard (2020) (illustrations 2004-2019)
Intron Depot 11: Bailey Bridge (2020) (illustrations 2012-2014)
Kokin Toguihime Zowshi Shu (2009)
Pieces 1 (2009)
Pieces 2: Phantom Cats (2010)
Pieces 3: Wild Wet Quest (2010)
Pieces 4: Hell Hound 01 (2010)
Pieces 5: Hell Hound 02 (2011)
Pieces 6: Hell Cat (2011)
Pieces 7: Hell Hound 01 & 02 Miscellaneous Work + α (2011)
Pieces 8: Wild Wet West (2012)
Pieces 9: Kokon Otogizoshi Shu Hiden (2012)
Pieces GEM 01: The Ghost in The Shell Data + α (2014)
Pieces GEM 02: Neuro Hard Bee Planet (2015)
Pieces GEM 03: Appleseed Drawings (2016)
W-Tails Cat 1 (2012)
W-Tails Cat 2 (2013)
W-Tails Cat 3 (2016)
Greaseberries 1 (2014)
Greaseberries 2 (2014)
Greaseberries 3 (2018)
Greaseberries 4 (2019)
Greaseberries Rough (2019)
Galgrease
Galgrease (published in Uppers Magazine, 2002) is the collected name of several erotic manga and poster books by Shirow. The name comes from the fact that the women depicted often look "greased".
The first series of Galgrease booklets included four issues each in the following settings:
Wild Wet West (Wild West-themed)
Hellhound (Horror-themed)
Galhound (Near-future science fiction–themed)
The second series included another run of 12 booklets in the following worlds:
Wild Wet Quest (A Tomb Raider or Indiana Jones–style sequel to Wild Wet West)
Hellcat (Pirate-themed)
Galhound 2 (Near-future science fiction–themed)
After each regular series, there were one or more bonus poster books that revisited the existing characters and settings.
Minor works
"Areopagus Arther" (1980), published in ATLAS (dōjinshi)
"Yellow Hawk" (1981), published in ATLAS (dōjinshi)
"Colosseum Pick" (1982), published in Funya (dōjinshi)
"Pursuit (Manga)" (1982), published in Kintalion (dōjinshi)
"Opional Orientation" (1984), published in ATLAS (dōjinshi)
"Battle on Mechanism" (1984), published in ATLAS (dōjinshi)
"Metamorphosis in Amazoness" (1984), published in ATLAS (dōjinshi)
"Arice in Jargon" (1984), published in ATLAS (dōjinshi)
"Bike Nut" (1985), published in Dorothy (dōjinshi)
"Gun Dancing" (1986), published in Young Magazine Kaizokuban
"Colosseum Pick" (1990), published in Comic Fusion Atpas (dōjinshi)
Other
Design of the MAPP1-SM mouse series (2002, commissioned by Elecom)
Pandora in the Crimson Shell: Ghost Urn (2012), original concept
Design of the EHP-SH1000 and EHP-SL100 headphones (2016, commissioned by Elecom)
Adaptations
Anime
Film
Ghost in the Shell (1995) by Mamoru Oshii
Ghost in the Shell 2: Innocence (2004) by Mamoru Oshii
Appleseed (2004) by Shinji Aramaki
Ghost in the Shell: Stand Alone Complex - Solid State Society (2006) by Kenji Kamiyama
Appleseed Ex Machina (2007) by Shinji Aramaki and John Woo
Appleseed Alpha (2014) by Shinji Aramaki and Joseph Chou
Kōkaku no Pandora - Ghost Urn (2015) by Munenori Nawa
Ghost in the Shell: The New Movie (2016) by Kazuya Nomura
OVAs and ONAs
Black Magic M-66 (1987) by Hiroyuki Kitakubo and Shirow Masamune (this is the only anime in which Shirow played a direct role in the production)
Appleseed (1988) by Kazuyoshi Katayama
Dominion (1988) by Takaaki Ishiyama and Kôichi Mashimo
New Dominion Tank Police (1990) by Noboru Furuse and Junichi Sakai
Landlock (1995) by Yasuhiro Matsumura (character and mecha designs only)
Gundress (1999) by Junichi Sakai (character and mecha designs only)
Tank Police Team: Tank S.W.A.T. 01 (2006) by Romanov Higa
W Tails Cat: A Strange Presence (2013)
Ghost in the Shell: Arise (2013) by Kazuchika Kise
Ghost in the Shell: SAC_2045 (2020) by Shinji Aramaki and Kenji Kamiyama
Television
Ghost in the Shell: Stand Alone Complex (2003) by Kenji Kamiyama (also called Alone on Earth or GitS:SAC)
Ghost in the Shell: S.A.C. 2nd GIG (2004) by Kenji Kamiyama (second season of GitS:SAC)
Ghost Hound (2007) by Ryūtarō Nakamura; original concept in collaboration with Production I.G
Real Drive (2008) by Kazuhiro Furuhashi; original concept in collaboration with Production I.G
Appleseed XIII (2011) by Takayuki Hamana
Ghost in the Shell: Arise - Alternative Architecture (2015) by Kazuchika Kise
Pandora in the Crimson Shell: Ghost Urn (2016) by Munenori Nawa, original concept for the source manga
Live action
Ghost in the Shell (2017) by Rupert Sanders
Video games
PC Engine
Toshi Tensou Keikaku: Eternal City (action platformer)
Super Famicom
Appleseed: Oracle of Prometheus
Nintendo DS
Fire Emblem: Shadow Dragon (Strategy RPG)
PlayStation
Ghost in the Shell
Yarudora Series Vol. 3: Sampaguita
Project Horned Owl
GunDress
PlayStation 2
Ghost in the Shell: Stand Alone Complex
Appleseed EX
PlayStation Portable
Ghost in the Shell: Stand Alone Complex
Yarudora Series Vol. 3: Sampaguita
Microsoft Windows
Ghost in the Shell: Stand Alone Complex - First Assault Online
References
Further reading
External links
Masamune Shirow at Media Arts Database
Masamune Shirow at Baka-Updates Manga
1961 births
Living people
Japanese animators
Japanese erotic artists
Osaka University of Arts alumni
People from Kobe
Hentai creators
Manga artists
Manga artists from Hyōgo Prefecture
Cyberpunk writers
Pseudonymous artists |
19995 | https://en.wikipedia.org/wiki/Musical%20saw | Musical saw | A musical saw, also called a singing saw, is a hand saw used as a musical instrument. Capable of continuous glissando (portamento), it produces an ethereal tone very similar to that of the theremin. The musical saw is classified as a plaque friction idiophone with direct friction (132.22) under the Hornbostel-Sachs system of musical instrument classification, and as a metal sheet played by friction (151) under the revision of the Hornbostel-Sachs classification by the MIMO Consortium.
Playing
The saw is generally played seated, with the handle squeezed between the legs and the far end held with one hand. Some sawists play standing, with the handle between the knees and the blade sticking out in front of them. The saw is usually played with the serrated edge, or "teeth", facing the body, though some players face them away. Some saw players file down the teeth, which makes no discernible difference to the sound. Many saw players, especially professionals, use a handle, called a tip-handle or a cheat, at the tip of the saw for easier bending and higher virtuosity.
To sound a note, a sawist first bends the blade into an S-curve. The parts of the blade that are curved are damped from vibration, and do not sound. At the center of the S-curve a section of the blade remains relatively flat. This section, the "sweet spot", can vibrate across the width of the blade, producing a distinct pitch: the wider the section of blade, the lower the sound. Sound is usually created by drawing a bow across the back edge of the saw at the sweet spot, or sometimes by striking the sweet spot with a mallet.
The sawist controls the pitch by adjusting the S-curve, making the sweet spot travel up the blade (toward a thinner width) for a higher pitch, or toward the handle for a lower pitch. Harmonics can be created by playing at varying distances on either side of the sweet spot. Sawists can add vibrato by shaking one of their legs or by wobbling the hand that holds the tip of the blade. Once a sound is produced, it will sustain for quite a while, and can be carried through several notes of a phrase.
On occasion the musical saw is called for in orchestral music, but orchestral percussionists are seldom also sawists. If a note outside of the saw's range is called for, an electric guitar with a slide can be substituted.
Types
Sawists often use standard wood-cutting saws, although special musical saws are also made. As compared with wood-cutting saws, the blades of musical saws are generally wider, for range, and longer, for finer control. They do not have set or sharpened teeth, and may have grain running parallel to the back edge of the saw, rather than parallel to the teeth. Some musical saws are made with thinner metal, to increase flexibility, while others are made thicker, for a richer tone, longer sustain, and stronger harmonics.
A typical musical saw is wider at the handle end than at the tip. Such a saw will generally produce about two octaves, regardless of length. A bass saw is wider at the handle and produces about two-and-a-half octaves. There are also musical saws with a range of three to four octaves, and newer designs have achieved as much as five octaves. Two-person saws, also called "misery whips", can also be played, though with less virtuosity; they produce an octave or less of range.
Most sawists use cello or violin bows, using violin rosin, but some may use improvised home-made bows, such as a wooden dowel.
Producers
Musical saws have been produced for over a century, primarily in the United States, but also in Scandinavia, Germany, France (Lame sonore) and Asia.
United States
In the early 1900s, there were at least ten companies in the United States manufacturing musical saws. These saws ranged from the familiar steel variety to gold-plated masterpieces worth hundreds of dollars. However, with the start of World War II, the demand for metals made the manufacture of saws too expensive and many of these companies went out of business. By the year 2000, only three companies in the United States (Mussehl & Westphal, Charlie Blacklock, and Wentworth) were making saws. In 2012, a company called Index Drums started producing a saw that had a built-in transducer in the handle, called the "JackSaw".
Outside the United States
Outside the United States, makers of musical saws include Bahco, makers of the limited edition Stradivarius, Alexis in France, Feldmann and Stövesandt in Germany, Music Blade in Greece and Thomas Flinn & Company in the United Kingdom, based in Sheffield, who produce three different sized musical saws, as well as accessories.
Events, championships and world records
The International Musical Saw Association (IMSA) produces an annual International Musical Saw Festival (including a "Saw-Off" competition) every August in Santa Cruz and Felton, California. An International Musical Saw Festival is held every other summer in New York City, produced by Natalia Paruz. Paruz also produced a musical saw festival in Israel. There are also annual saw festivals in Japan and China.
A Guinness World Record for the largest musical-saw ensemble was established July 18, 2009, at the annual NYC Musical Saw Festival. Organized by Paruz, 53 musical saw players performed together.
In 2011 a World Championship took place in Jelenia Góra, Poland. The winners were, in order: Gladys Hulot (France), Katharina Micada (Germany) and Tom Fink (Germany).
Performers
People notable for playing the musical saw.
Natalia Paruz, also known as the "Saw Lady", plays the musical saw in movie soundtracks, in television commercials, and with orchestras internationally, and is the organizer of international musical saw festivals in New York City and Israel. She was a judge at the musical saw festival in France, and she played the saw in the off-Broadway show Sawbones. The December 3, 2011 Washington Post crossword puzzle included Paruz in a clue: 5 Down, "Instrument played by Natalia Paruz".
Mara Carlyle, a London-based singer/songwriter, often performs using the musical saw; the instrument features on her albums The Lovely and Floreat.
David Coulter, multi-instrumentalist, producer and music supervisor, and ex-member of Test Dept and The Pogues, has played musical saw live, in films, on TV and on stages around the world, and on numerous albums with Damon Albarn, Gorillaz, and Tom Waits, among others. He has played on many film scores, including Is Anybody There? (2008) and It's a Boy Girl Thing (2006), and has featured on TV soundtracks and theme tunes, most recently for Psychoville and episodes of Wallander.
Janeen Rae Heller played the saw in four television guest appearances: The Tracey Ullman Show (1989), Quantum Leap (1990), and Home Improvement (1992 and 1999). She has also performed on albums such as Michael Hedges' The Road to Return in 1994 and Rickie Lee Jones's Ghostyhead in 1997.
Mio Higashino, based in Osaka, Japan, won first place in the 42nd International Musical Saw Festival. Mio performs in Japan as part of the two-member group Mollen.
Charles Hindmarsh, The Yorkshire Musical Saw Player, has played the musical saw throughout the UK.
Kev Hopper, formerly the bass guitarist in the 1980s band Stump, made an EP titled Saurus in 2002 featuring six original saw tunes.
Christine Johnston (under the stage name Eve Kransky) of The Kransky Sisters plays the musical saw alongside other traditional and improvised instruments.
Julian Koster of the band Neutral Milk Hotel played the singing saw, along with other instruments, in the band and currently plays the saw in his solo project, The Music Tapes. In 2008, he released The Singing Saw at Christmastime. He also writes the podcast The Orbiting Human Circus (of the Air) which prominently features singing saws in the story.
Katharina Micada plays the musical saw on cabaret stages and with symphony orchestras such as the Berlin Philharmonic Orchestra and the London Philharmonic Orchestra. A singer, she is one of the few players who can sing and play the saw simultaneously and in pitch. She has played in TV and radio shows and on film and CD recordings.
Jamie Muir of the progressive rock band King Crimson briefly uses a musical saw on the song "Easy Money" from the album Larks' Tongues in Aspic.
Bonnie Paine, a singer and multi-instrumentalist from Tahlequah, Oklahoma, and co-founder of the Colorado folk-rock group Elephant Revival, has performed on the musical saw as a member of the band.
Angela Perley and the Howlin' Moons, an American rock band from Columbus, Ohio, features singer/guitarist Angela Perley who performs the musical saw on their recorded albums and at their live shows.
Quinta (a.k.a. Kath Mann), London-based multi-instrumentalist and composer, has collaborated with many artists on the musical saw, including Bat for Lashes, Radiohead's Philip Selway, and The Paper Cinema.
Thomas Jefferson Scribner was a familiar figure on the streets of Santa Cruz, California during the 1970s playing the musical saw. He performed on a variety of recordings and appeared in folk music festivals in the United States and Canada during the 1970s. His work as labour organizer and member of the Industrial Workers of the World is documented in the 1979 film The Wobblies. Canadian composer/saw player Robert Minden pays tribute to him on his Web site. Musician and songwriter Utah Phillips has recorded a song referencing Scribner, "The Saw Playing Musician" on the album Fellow Workers with Ani DiFranco. Artist Marghe McMahon was inspired in 1978 to create a bronze statue of Tom playing the musical saw which sits in downtown Santa Cruz.
That 1 Guy, an American musician, performs using homemade instruments.
Jim Turner released The Well-Tempered Saw on Owl Records in 1971.
Victor Victoria (Victoria Falconer) of the musical cabaret troupe Fringe Wives Club and dark cabaret comedy duo EastEnd Cabaret plays the musical saw as part of their live shows, amongst other instruments.
Liu Ya, from China, is a professional violinist and saw player famous for her interpretation of the "Bird song", which she has performed on Chinese TV.
Marlene Dietrich
German actress and singer Marlene Dietrich, who lived and worked in the United States for a long time, is probably the best-known musical saw player. When she studied the violin for one year in Weimar in her early twenties, her musical skills were already evident. Some years later she learned to play the musical saw while she was shooting the film Café Elektric in Vienna in 1927. Her colleague, the Bavarian actor and musician Igo Sym, taught her how to play. In the shooting breaks and at weekends both performed romantic duets, he at the piano and she at the musical saw.
Sym gave his saw to her as a farewell gift. The following words are engraved on the saw: "Now Suidy is gone / the sun d’ont [sic!] / shine… / Igo / Vienna 1927"
She took the saw with her when she left for Hollywood in 1929, and in the following years played it on film sets and at Hollywood parties.
When she participated in United Service Organizations (USO) shows for the US troops in 1944, she also played the saw. Some of these shows were broadcast on radio, so two rare recordings of her saw playing survive, embedded in entertaining interviews; one of them is of "Aloha Oe".
In fiction
The theme song of the film One Flew Over the Cuckoo's Nest is played on a musical saw.
Delicatessen, directed by Jean-Pierre Jeunet and Marc Caro, includes an impressive duet for cello and musical saw, performed on a rooftop. (1991)
Dummy, directed by Greg Pritikin and starring Adrien Brody, has an audition scene with a musical saw player (portrayed by Natalia Paruz). (2002)
In 2002, an orchestra of 30 musical saws appeared at Nicholas de Mimsy-Porpington's five-hundredth Deathday Party in the book Harry Potter and the Chamber of Secrets.
In the 2011 movie Another Earth, the character of the composer plays the saw (performed on the soundtrack by Natalia Paruz).
In the 2014 animated film My Little Pony: Equestria Girls — Rainbow Rocks, one of the film's background characters, Derpy Hooves, plays the musical saw in her band.
In the 2014 stop-motion animated film The Boxtrolls, Fish, one of the main Boxtrolls who took care of Eggs, plays the musical saw with Eggs in their cave.
In the film Mr. Peabody and Sherman, Mr. Peabody plays a musical saw. (2014)
Composers and compositions
Beginning from the early 1920s composers of both contemporary and popular music wrote for the musical saw.
One of the first was Franz Schreker, who included the musical saw in his opera Christophorus (1925–29), where it is used in the séance scene of the second act. Another early adopter was Dmitri Shostakovich, who included the musical saw in his film music for The New Babylon (1929), in The Nose (1928), and in Lady Macbeth of the Mtsensk District (1934).
Shostakovich and other composers of his time used the term "Flexaton" to denote the musical saw; the name means roughly "to flex a tone", since the saw is flexed to change the pitch. However, a different instrument called the flexatone also exists, which caused confusion for a long time. Aram Khachaturian, who knew Shostakovich's music, included the musical saw in the second movement of his Piano Concerto (1936). Another composer was the Swiss Arthur Honegger, who included the saw in his opera Antigone in 1924.
The Romanian composer George Enescu used the musical saw at the end of the second act of his opera Œdipe (1931) to depict, in an extended glissando that begins with the mezzo-soprano and is continued by the saw, the death and ascension of the sphinx killed by Oedipus.
The Italian composer Giacinto Scelsi wrote a part for the saw in his quarter-tone piece Quattro pezzi per orchestra (1959). German composer Hans Werner Henze used the saw to characterize the callous hero of his tragic opera Elegy for Young Lovers (1961).
Other composers include Krzysztof Penderecki, with Fluorescences (1961), De natura sonoris No. 2 (1971) and the opera Ubu Rex (1990); Bernd Alois Zimmermann, with Stille und Umkehr (1970); George Crumb, with Ancient Voices of Children (1970); and John Corigliano, with The Mannheim Rocket (2001).
Composer Scott Munson wrote Clover Hill (2007) for saw and orchestra, Quintet for saw and strings (2009), The World Is Too Much with Us for soprano, saw and strings (2009), Ars longa vita brevis for saw and string quartet (2010), Bend for saw and string quartet (2011), many pieces for jazz band and saw (2010–2013), Lullaby for the Forgotten for saw and piano (2015), and many movie and theater scores containing the saw.
Chaya Czernowin used the saw in her opera "PNIMA...Ins Innere" (2000) to represent the character of the grandfather, who is traumatized by the Holocaust.
There are further Leif Segerstam, Hans Zender (orchestration of "5 préludes" by Claude Debussy), and Oscar Strasnoy (opera Le bal).
Russian composer Lera Auerbach wrote for the saw in her ballet The Little Mermaid (2005), in her symphonic poem Dreams and Whispers of Poseidon (2005), in her oratorio "Requiem Dresden – Ode to Peace" (2012), in her Piano Concerto No.1 (2015), in her comic oratorio The Infant Minstrel and His Peculiar Menagerie (2016) and in her violin concerto Nr.4 "NyX – Fractured dreams" (2017).
Canadian composer Robert Minden has written extensively for the musical saw. Michael A. Levine composed Divination By Mirrors for musical saw soloist and two string ensembles tuned a quarter tone apart, taking advantage of the saw's ability to play in both tunings.
Other composers of chamber music with musical saw include Jonathan Rutherford (An Intake of Breath), Dana Wilson (Whispers from Another Time), Heinrich Gattermeyer (Elegie für singende Säge, Cembalo (oder Klavier)), Vito Zuraj (Musica di camera, 2001) and Britta-Maria Bernhard (Tranquillo).
See also
Flexatone
Wobble board
Daxophone
References
External links
The Musical Saw forum on Facebook
The annual NYC Musical Saw Festival
History of the musical saw, saws made for music, poetry mentioning musical saw, movies with musical saw, etc.
Brief History
Bowed instruments
Continuous pitch instruments
Individual friction plaques
Pitched percussion
Improvised musical instruments |
19996 | https://en.wikipedia.org/wiki/MIDI | MIDI | MIDI (; an acronym for Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing, and recording music. The specification originates in a paper titled Universal Synthesizer Interface, published by Dave Smith and Chet Wood, then of Sequential Circuits, at the October 1981 Audio Engineering Society conference in New York City.
A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument. This could be sixteen different digital instruments, for example. MIDI carries event messages: data that specify the instructions for music, including a note's notation, pitch and velocity (typically heard as the loudness or softness of a note); vibrato; panning to the right or left of the stereo field; and clock signals (which set the tempo). When a musician plays a MIDI instrument, all of the key presses, button presses, knob turns and slider changes are converted into MIDI data. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module (which contains synthesized musical sounds) to generate sounds, which the audience hears produced by a keyboard amplifier. MIDI data can be transferred via MIDI or USB cable, or recorded to a sequencer or digital audio workstation to be edited or played back.
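The channel and event structure described above can be illustrated at the byte level. As a sketch (the function names are illustrative, not part of any standard library), the following Python builds the three-byte Note On and Note Off channel messages defined by the MIDI 1.0 specification: a status byte carrying the message type and channel, followed by two 7-bit data bytes.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message.

    Status byte: 0x90 (Note On) OR'd with the channel (0-15);
    data bytes: note number and velocity (each 0-127).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    """Build a 3-byte MIDI Note Off message (status 0x80 | channel)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, note, velocity])

# Middle C (note number 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
print(msg.hex())  # 903c40
```

Because the channel lives in the low nibble of the status byte, sixteen independent instrument channels fit on one link, as the text notes.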
A file format that stores and exchanges the data is also defined. Advantages of MIDI include small file size, ease of modification and manipulation and a wide choice of electronic instruments and synthesizer or digitally-sampled sounds. A MIDI recording of a performance on a keyboard could sound like a piano or other keyboard instrument; however, since MIDI records the messages and information about their notes and not the specific sounds, this recording could be changed to many other sounds, ranging from synthesized or sampled guitar or flute to full orchestra. A MIDI recording is not an audio signal, as with a sound recording made with a microphone.
Prior to the development of MIDI, electronic musical instruments from different manufacturers could generally not communicate with each other. This meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard (or other controller device) can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers.
MIDI technology was standardized in 1983 by a panel of music industry representatives, and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles, and the MIDI Committee of the Association of Musical Electronics Industry (AMEI) in Tokyo. In 2016, the MMA established The MIDI Association (TMA) to support a global community of people who work, play, or create with MIDI.
History
In the early 1980s, there was no standardized means of synchronizing electronic musical instruments manufactured by different companies. Manufacturers had their own proprietary standards to synchronize instruments, such as CV/gate, DIN sync and Digital Control Bus (DCB). Roland founder Ikutaro Kakehashi felt the lack of standardization was limiting the growth of the electronic music industry. In June 1981, he proposed developing a standard to Oberheim Electronics founder Tom Oberheim, who had developed his own proprietary interface, the Oberheim System.
Kakehashi felt the Oberheim System was too cumbersome, and spoke to Sequential Circuits president Dave Smith about creating a simpler, cheaper alternative. While Smith discussed the concept with American companies, Kakehashi discussed it with Japanese companies Yamaha, Korg and Kawai. Representatives from all companies met to discuss the idea in October. Initially, only Sequential Circuits and the Japanese companies were interested. Using Roland's DCB as a basis, Smith and Sequential Circuits engineer Chet Wood devised a universal interface to allow communication between equipment from different manufacturers. Smith and Wood proposed this standard in a paper, Universal Synthesizer Interface, at the Audio Engineering Society show in October 1981. The standard was discussed and modified by representatives of Roland, Yamaha, Korg, Kawai, and Sequential Circuits. Kakehashi favored the name Universal Musical Interface (UMI), pronounced you-me, but Smith felt this was "a little corny". However, he liked the use of "instrument" instead of "synthesizer", and proposed the name Musical Instrument Digital Interface (MIDI). Moog Music founder Robert Moog announced MIDI in the October 1982 issue of Keyboard.
In 1982, the first instruments with MIDI were released: the Roland Jupiter-6 and the Prophet 600. The first computers to support MIDI, NEC's PC-88 and PC-98, were also released in 1982. At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between Prophet 600 and Roland JP-6 synthesizers, and the MIDI specification was published in August 1983. In 1983, the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700, were released. Kakehashi and Smith, who unveiled the MIDI standard, received Technical Grammy Awards in 2013 for their work.
The MIDI Manufacturers Association (MMA) was formed following a meeting of "all interested companies" at the 1984 Summer NAMM Show in Chicago. The MIDI 1.0 Detailed Specification was published at the MMA's second meeting at the 1985 Summer NAMM show. The standard continued to evolve, adding standardized song files in 1991 (General MIDI) and adapted to new connection standards such as USB and FireWire. In 2016, the MIDI Association was formed to continue overseeing the standard. An initiative to create a 2.0 standard was announced in January 2019. The MIDI 2.0 standard was introduced at the 2020 Winter NAMM show.
Impact
MIDI's appeal was originally limited to professional musicians and record producers who wanted to use electronic instruments in the production of popular music. The standard allowed different instruments to communicate with each other and with computers, and this spurred a rapid expansion of the sales and production of electronic instruments and music software. This interoperability allowed one device to be controlled from another, which reduced the amount of hardware musicians needed. MIDI's introduction coincided with the dawn of the personal computer era and the introduction of samplers and digital synthesizers. The creative possibilities brought about by MIDI technology are credited for helping revive the music industry in the 1980s.
MIDI introduced capabilities that transformed the way many musicians work. MIDI sequencing makes it possible for a user with no notation skills to build complex arrangements. A musical act with as few as one or two members, each operating multiple MIDI-enabled devices, can deliver a performance similar to that of a larger group of musicians. The expense of hiring outside musicians for a project can be reduced or eliminated, and complex productions can be realized on a system as small as a synthesizer with integrated keyboard and sequencer.
MIDI also helped establish home recording. By performing preproduction in a home environment, an artist can reduce recording costs by arriving at a recording studio with a partially completed song.
Applications
Instrument control
MIDI was invented so that electronic or digital musical instruments could communicate with each other and so that one instrument can control another. For example, a MIDI-compatible sequencer can trigger beats produced by a drum sound module. Analog synthesizers that have no digital component and were built prior to MIDI's development can be retrofitted with kits that convert MIDI messages into analog control voltages. When a note is played on a MIDI instrument, it generates a digital MIDI message that can be used to trigger a note on another instrument. The capability for remote control allows full-sized instruments to be replaced with smaller sound modules, and allows musicians to combine instruments to achieve a fuller sound, or to create combinations of synthesized instrument sounds, such as acoustic piano and strings. MIDI also enables other instrument parameters (volume, effects, etc.) to be controlled remotely.
Synthesizers and samplers contain various tools for shaping an electronic or digital sound. Filters adjust timbre, and envelopes automate the way a sound evolves over time after a note is triggered. The frequency of a filter and the envelope attack (the time it takes for a sound to reach its maximum level), are examples of synthesizer parameters, and can be controlled remotely through MIDI. Effects devices have different parameters, such as delay feedback or reverb time. When a MIDI continuous controller number (CCN) is assigned to one of these parameters, the device responds to any messages it receives that are identified by that number. Controls such as knobs, switches, and pedals can be used to send these messages. A set of adjusted parameters can be saved to a device's internal memory as a patch, and these patches can be remotely selected by MIDI program changes.
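These remote-control messages can be sketched in the same byte-level style (function names are illustrative; the status bytes 0xB0 for Control Change and 0xC0 for Program Change, and controller 7 as channel volume, come from the MIDI 1.0 specification):

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Control Change: status 0xB0 | channel, then controller number and value."""
    return bytes([0xB0 | channel, controller & 0x7F, value & 0x7F])

def program_change(channel: int, program: int) -> bytes:
    """Program Change is a 2-byte message: status 0xC0 | channel, then patch number."""
    return bytes([0xC0 | channel, program & 0x7F])

# Set channel volume (controller 7) to maximum on channel 3:
print(control_change(3, 7, 127).hex())  # b3077f

# Remotely select patch 12 on channel 3, as when recalling a saved patch:
print(program_change(3, 12).hex())  # c30c
```

A device listening on channel 3 that has, say, its filter cutoff assigned to a given controller number would respond to any Control Change message carrying that number, which is exactly the assignment mechanism the paragraph describes.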
Composition
MIDI events can be sequenced with computer software, or in specialized hardware music workstations. Many digital audio workstations (DAWs) are specifically designed to work with MIDI as an integral component. MIDI piano rolls have been developed in many DAWs so that the recorded MIDI messages can be easily modified. These tools allow composers to audition and edit their work much more quickly and efficiently than did older solutions, such as multitrack recording.
Because MIDI is a set of commands that create sound, MIDI sequences can be manipulated in ways that prerecorded audio cannot. It is possible to change the key, instrumentation or tempo of a MIDI arrangement, and to reorder its individual sections. The ability to compose ideas and quickly hear them played back enables composers to experiment. Algorithmic composition programs provide computer-generated performances that can be used as song ideas or accompaniment.
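For instance, transposing a MIDI arrangement is just arithmetic on the note-number byte of each Note On/Off event, whereas transposing recorded audio would require pitch-shifting the signal itself. A minimal sketch, assuming events are stored as (status, data1, data2) tuples (the representation and helper name are hypothetical):

```python
def transpose(events, semitones):
    """Shift Note On/Off events by a number of semitones, clamped to 0-127.

    Other channel messages (controllers, program changes, etc.) pass through
    unchanged, since only note events carry pitch.
    """
    out = []
    for status, data1, data2 in events:
        # Note Off is 0x80-0x8F and Note On is 0x90-0x9F, so masking with
        # 0xE0 maps both ranges to 0x80 and nothing else.
        if status & 0xE0 == 0x80:
            data1 = max(0, min(127, data1 + semitones))
        out.append((status, data1, data2))
    return out

# Move a middle-C Note On up a whole tone; the volume controller is untouched:
print(transpose([(0x90, 60, 64), (0xB0, 7, 100)], 2))
# [(0x90, 62, 64), (0xB0, 7, 100)]
```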
Some composers may take advantage of standard, portable set of commands and parameters in MIDI 1.0 and General MIDI (GM) to share musical data files among various electronic instruments. The data composed via the sequenced MIDI recordings can be saved as a standard MIDI file (SMF), digitally distributed, and reproduced by any computer or electronic instrument that also adheres to the same MIDI, GM, and SMF standards. MIDI data files are much smaller than corresponding recorded audio files.
Use with computers
The personal computer market stabilized at the same time that MIDI appeared, and computers became a viable option for music production. In 1983 computers started to play a role in mainstream music production. In the years immediately after the 1983 ratification of the MIDI specification, MIDI features were adapted to several early computer platforms. NEC's PC-88 and PC-98 began supporting MIDI as early as 1982. The Yamaha CX5M introduced MIDI support and sequencing in an MSX system in 1984.
The spread of MIDI on personal computers was largely facilitated by Roland Corporation's MPU-401, released in 1984, as the first MIDI-equipped PC sound card, capable of MIDI sound processing and sequencing. After Roland sold MPU sound chips to other sound card manufacturers, it established a universal standard MIDI-to-PC interface. The widespread adoption of MIDI led to computer-based MIDI software being developed. Soon after, a number of platforms began supporting MIDI, including the Apple II Plus, IIe and Macintosh, Commodore 64 and Amiga, Atari ST, Acorn Archimedes, and PC DOS.
The Macintosh was a favorite among musicians in the United States, as it was marketed at a competitive price, and it took several years for PC systems to catch up with its efficiency and graphical interface. The Atari ST was preferred in Europe, where Macintoshes were more expensive. The Atari ST had the advantage of MIDI ports that were built directly into the computer. Most music software in MIDI's first decade was published for either the Apple or the Atari. By the time of Windows 3.0's 1990 release, PCs had caught up in processing power and had acquired a graphical interface and software titles began to see release on multiple platforms.
In 2015, Retro Innovations released the first MIDI interface for a Commodore VIC-20, making the computer's four voices available to electronic musicians and retro-computing enthusiasts for the first time. Retro Innovations also makes a MIDI interface cartridge for Tandy Color Computer and Dragon computers.
Chiptune musicians also use retro gaming consoles to compose, produce and perform music using MIDI interfaces. Custom interfaces are available for the Famicom, Nintendo Entertainment System (NES), Nintendo Gameboy and Game Boy Advance, Sega Megadrive and Sega Genesis.
Computer files
Standard files
The Standard MIDI File (SMF) is a file format that provides a standardized way for music sequences to be saved, transported, and opened in other systems. The standard was developed and is maintained by the MMA, and usually uses a .mid extension. The compact size of these files led to their widespread use in computers, mobile phone ringtones, webpage authoring and musical greeting cards. These files are intended for universal use and include such information as note values, timing and track names. Lyrics may be included as metadata, and can be displayed by karaoke machines.
SMFs are created as an export format of software sequencers or hardware workstations. They organize MIDI messages into one or more parallel tracks and time-stamp the events so that they can be played back in sequence. A header contains the arrangement's track count, tempo and an indicator of which of three SMF formats the file uses. A type 0 file contains the entire performance, merged onto a single track, while type 1 files may contain any number of tracks that are performed synchronously. Type 2 files are rarely used and store multiple arrangements, with each arrangement having its own track and intended to be played in sequence.
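The header layout described above can be read with a few lines of code. As a sketch based on the published SMF format (the 14-byte MThd chunk holds a big-endian length of 6, then the format number, track count, and timing division), assuming the file's bytes are already in memory:

```python
import struct

def parse_smf_header(data: bytes) -> dict:
    """Parse the MThd chunk at the start of a Standard MIDI File."""
    chunk_id, length = struct.unpack(">4sI", data[:8])
    if chunk_id != b"MThd" or length != 6:
        raise ValueError("not a Standard MIDI File")
    # format: 0 (single track), 1 (synchronous tracks), or 2 (independent tracks)
    fmt, ntrks, division = struct.unpack(">HHH", data[8:14])
    return {"format": fmt, "tracks": ntrks, "division": division}

# A minimal type-0 header: one track, 480 ticks per quarter note.
header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
print(parse_smf_header(header))
# {'format': 0, 'tracks': 1, 'division': 480}
```

A type 1 file would report the same division but a track count greater than one, matching the description of synchronous parallel tracks.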
RMID files
Microsoft Windows bundles SMFs together with Downloadable Sounds (DLS) in a Resource Interchange File Format (RIFF) wrapper, as RMID files with a .rmi extension. RIFF-RMID has been deprecated in favor of Extensible Music Files (XMF).
A MIDI file is not an audio recording. Rather, it is a set of instructions (for example, for pitch or tempo) and can use a thousand times less disk space than the equivalent recorded audio. Due to their tiny file size, fan-made MIDI arrangements became an attractive way to share music online, before the advent of broadband internet access and multi-gigabyte hard drives. The major drawback to this is the wide variation in quality of users' audio cards, and in the actual audio contained as samples or synthesized sound in the card that the MIDI data only refers to symbolically. Even a sound card that contains high-quality sampled sounds can have inconsistent quality from one sampled instrument to another. Early budget-priced cards, such as the AdLib and the Sound Blaster and its compatibles, used a stripped-down version of Yamaha's frequency modulation synthesis (FM synthesis) technology played back through low-quality digital-to-analog converters. The low-fidelity reproduction of these ubiquitous cards was often assumed to somehow be a property of MIDI itself. This created a perception of MIDI as low-quality audio, while in reality MIDI itself contains no sound, and the quality of its playback depends entirely on the quality of the sound-producing device.
Software
The main advantage of the personal computer in a MIDI system is that it can serve a number of different purposes, depending on the software that is loaded. Multitasking allows simultaneous operation of programs that may be able to share data with each other.
Sequencers
Sequencing software allows recorded MIDI data to be manipulated using standard computer editing features such as cut, copy and paste and drag and drop. Keyboard shortcuts can be used to streamline workflow, and, in some systems, editing functions may be invoked by MIDI events. The sequencer allows each channel to be set to play a different sound and gives a graphical overview of the arrangement. A variety of editing tools are made available, including a notation display or scorewriter that can be used to create printed parts for musicians. Tools such as looping, quantization, randomization, and transposition simplify the arranging process.
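Of the editing tools listed, quantization is the simplest to illustrate: each event's timestamp is snapped to the nearest grid line. A minimal sketch, assuming event times are measured in ticks (the helper name is hypothetical):

```python
def quantize(ticks: int, grid: int) -> int:
    """Snap an event time (in ticks) to the nearest multiple of the grid size."""
    return round(ticks / grid) * grid

# With 480 ticks per quarter note, snap slightly off-beat events
# to the nearest sixteenth note (120 ticks):
print([quantize(t, 120) for t in [115, 250, 361]])  # [120, 240, 360]
```

Real sequencers add refinements such as partial-strength quantization, which moves events only some fraction of the way toward the grid to preserve a human feel.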
Beat creation is simplified, and groove templates can be used to duplicate another track's rhythmic feel. Realistic expression can be added through the manipulation of real-time controllers. Mixing can be performed, and MIDI can be synchronized with recorded audio and video tracks. Work can be saved, and transported between different computers or studios.
Sequencers may take alternate forms, such as drum pattern editors that allow users to create beats by clicking on pattern grids, and loop sequencers such as ACID Pro, which allow MIDI to be combined with prerecorded audio loops whose tempos and keys are matched to each other. Cue-list sequencing is used to trigger dialogue, sound effect, and music cues in stage and broadcast production.
Notation software
With MIDI, notes played on a keyboard can automatically be transcribed to sheet music. Scorewriting software typically lacks advanced sequencing tools, and is optimized for the creation of a neat, professional printout designed for live instrumentalists. These programs provide support for dynamics and expression markings, chord and lyric display, and complex score styles. Software is available that can print scores in braille.
Notation programs include Finale, Encore, Sibelius, MuseScore and Dorico. SmartScore software can produce MIDI files from scanned sheet music.
Editor/librarians
Patch editors allow users to program their equipment through the computer interface. These became essential with the appearance of complex synthesizers such as the Yamaha FS1R, which contained several thousand programmable parameters, but had an interface that consisted of fifteen tiny buttons, four knobs and a small LCD. Digital instruments typically discourage users from experimentation, due to their lack of the feedback and direct control that switches and knobs would provide, but patch editors give owners of hardware instruments and effects devices the same editing functionality that is available to users of software synthesizers. Some editors are designed for a specific instrument or effects device, while other, universal editors support a variety of equipment, and ideally can control the parameters of every device in a setup through the use of System Exclusive messages.
Patch librarians have the specialized function of organizing the sounds in a collection of equipment and of exchanging entire banks of sounds between an instrument and a computer. In this way the device's limited patch storage is augmented by a computer's much greater disk capacity. Once transferred to the computer, it is possible to share custom patches with other owners of the same instrument. Universal editor/librarians that combine the two functions were once common, and included Opcode Systems' Galaxy and eMagic's SoundDiver. These programs have been largely abandoned with the trend toward computer-based synthesis, although Mark of the Unicorn (MOTU)'s Unisyn and Sound Quest's Midi Quest remain available. Native Instruments' Kore was an effort to bring the editor/librarian concept into the age of software instruments.
Auto-accompaniment programs
Programs that can dynamically generate accompaniment tracks are called auto-accompaniment programs. These create a full band arrangement in a style that the user selects, and send the result to a MIDI sound generating device for playback. The generated tracks can be used as educational or practice tools, as accompaniment for live performances, or as a songwriting aid.
Synthesis and sampling
Computers can use software to generate sounds, which are then passed through a digital-to-analog converter (DAC) to a power amplifier and loudspeaker system. The number of sounds that can be played simultaneously (the polyphony) is dependent on the power of the computer's CPU, as are the sample rate and bit depth of playback, which directly affect the quality of the sound. Synthesizers implemented in software are subject to timing issues that are not necessarily present with hardware instruments, whose dedicated operating systems are not subject to interruption from background tasks as desktop operating systems are. These timing issues can cause synchronization problems, and clicks and pops when sample playback is interrupted. Software synthesizers also may exhibit additional latency in their sound generation.
The roots of software synthesis go back as far as the 1950s, when Max Mathews of Bell Labs wrote the MUSIC-N programming language, which was capable of non-real-time sound generation. The first synthesizer to run directly on a host computer's CPU was Reality, by Dave Smith's Seer Systems, which achieved a low latency through tight driver integration, and therefore could run only on Creative Labs soundcards. Some systems use dedicated hardware to reduce the load on the host CPU, as with Symbolic Sound Corporation's Kyma System, and the Creamware/Sonic Core Pulsar/SCOPE systems, which power an entire recording studio's worth of instruments, effect units, and mixers. The ability to construct full MIDI arrangements entirely in computer software allows a composer to render a finalized result directly as an audio file.
Game music
Early PC games were distributed on floppy disks, and the small size of MIDI files made them a viable means of providing soundtracks. Games of the DOS and early Windows eras typically required compatibility with either Ad Lib or Sound Blaster audio cards. These cards used FM synthesis, which generates sound through modulation of sine waves. John Chowning, the technique's pioneer, theorized that the technology would be capable of accurate recreation of any sound if enough sine waves were used, but budget computer audio cards performed FM synthesis with only two sine waves. Combined with the cards' 8-bit audio, this resulted in a sound described as "artificial" and "primitive".
Wavetable daughterboards that were later available provided audio samples that could be used in place of the FM sound. These were expensive, but often used the sounds from respected MIDI instruments such as the E-mu Proteus. The computer industry moved in the mid-1990s toward wavetable-based soundcards with 16-bit playback, but standardized on 2 MB of wavetable storage, too small a space to fit good-quality samples of the 128 General MIDI instruments plus drum kits. To make the most of the limited space, some manufacturers stored 12-bit samples and expanded those to 16 bits on playback.
Other applications
Despite its association with music devices, MIDI can control any electronic or digital device that can read and process a MIDI command. MIDI has been adopted as a control protocol in a number of non-musical applications. MIDI Show Control uses MIDI commands to direct stage lighting systems and to trigger cued events in theatrical productions. VJs and turntablists use it to cue clips, and to synchronize equipment, and recording systems use it for synchronization and automation. Apple Motion allows control of animation parameters through MIDI. The 1987 first-person shooter game MIDI Maze and the 1990 Atari ST computer puzzle game Oxyd used MIDI to network computers together.
Devices
Connectors
MIDI cables terminate in a 180° five-pin DIN connector. Standard applications use only three of the five conductors: a ground wire (pin 2), and a balanced pair of conductors (pins 4 and 5) that carry a +5 volt data signal. This connector configuration can only carry messages in one direction, so a second cable is necessary for two-way communication. Some proprietary applications, such as phantom-powered footswitch controllers, use the spare pins for direct current (DC) power transmission.
Opto-isolators keep MIDI devices electrically separated from their MIDI connections, which prevents ground loops and protects equipment from voltage spikes. There is no error detection capability in MIDI, so the maximum cable length is set at 15 meters (50 feet) to limit interference.
Most devices do not copy messages from their input to their output port. A third type of port, the "thru" port, emits a copy of everything received at the input port, allowing data to be forwarded to another instrument in a "daisy chain" arrangement. Not all devices contain thru ports, and devices that lack the ability to generate MIDI data, such as effects units and sound modules, may not include out ports.
Management devices
Each device in a daisy chain adds delay to the system. This is avoided with a MIDI thru box, which contains several outputs that provide an exact copy of the box's input signal. A MIDI merger is able to combine the input from multiple devices into a single stream, and allows multiple controllers to be connected to a single device. A MIDI switcher allows switching between multiple devices, and eliminates the need to physically repatch cables. MIDI patch bays combine all of these functions. They contain multiple inputs and outputs, and allow any combination of input channels to be routed to any combination of output channels. Routing setups can be created using computer software, stored in memory, and selected by MIDI program change commands. This enables the devices to function as standalone MIDI routers in situations where no computer is present. MIDI patch bays also clean up any skewing of MIDI data bits that occurs at the input stage.
MIDI data processors are used for utility tasks and special effects. These include MIDI filters, which remove unwanted MIDI data from the stream, and MIDI delays, effects that send a repeated copy of the input data at a set time.
Interfaces
A computer MIDI interface's main function is to match clock speeds between the MIDI device and the computer. Some computer sound cards include a standard MIDI connector, whereas others connect by any of various means that include the D-subminiature DA-15 game port, USB, FireWire, Ethernet or a proprietary connection. The increasing use of USB connectors in the 2000s has led to the availability of MIDI-to-USB data interfaces that can transfer MIDI channels to USB-equipped computers. Some MIDI keyboard controllers are equipped with USB jacks, and can be plugged into computers that run music software.
MIDI's serial transmission leads to timing problems. A three-byte MIDI message requires nearly 1 millisecond for transmission. Because MIDI is serial, it can only send one event at a time. If an event is sent on two channels at once, the event on the second channel cannot transmit until the first one is finished, and so is delayed by 1 ms. If an event is sent on all channels at the same time, the last channel's transmission is delayed by as much as 16 ms. This contributed to the rise of MIDI interfaces with multiple in- and out-ports, because timing improves when events are spread between multiple ports as opposed to multiple channels on the same port. The term "MIDI slop" refers to audible timing errors that result when MIDI transmission is delayed.
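The timing arithmetic above can be checked directly. The sketch below (not part of any MIDI library) assumes only the figures already given: 31,250 bit/s on the wire and ten bits per transmitted byte.

```python
# Each MIDI byte takes 10 bits on the wire (1 start bit, 8 data bits,
# 1 stop bit) at 31,250 bit/s.

BIT_RATE = 31_250          # bits per second
BITS_PER_BYTE = 10         # 1 start + 8 data + 1 stop

def message_time_ms(num_bytes: int) -> float:
    """Wire time in milliseconds for a MIDI message of num_bytes."""
    return num_bytes * BITS_PER_BYTE / BIT_RATE * 1000

# A three-byte message (e.g. note-on) takes just under 1 ms:
print(message_time_ms(3))        # 0.96 ms

# The same event sent on all 16 channels of one port: the last copy
# cannot start until the first 15 have finished, so the whole burst
# occupies the link for over 15 ms.
print(message_time_ms(3) * 16)   # 15.36 ms total wire time
```

Spreading the same events across several ports, as multi-port interfaces do, lets the bursts transmit in parallel instead of queuing on one link.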
Controllers
There are two types of MIDI controllers: performance controllers that generate notes and are used to perform music, and controllers that may not send notes, but transmit other types of real-time events. Many devices are some combination of the two types.
Keyboards are by far the most common type of MIDI controller. MIDI was designed with keyboards in mind, and any controller that is not a keyboard is considered an "alternative" controller. This was seen as a limitation by composers who were not interested in keyboard-based music, but the standard proved flexible, and MIDI compatibility was introduced to other types of controllers, including guitars, stringed and wind instruments, drums and specialized and experimental controllers. Other controllers include drum controllers and wind controllers, which can emulate the playing of drum kit and wind instruments, respectively. Nevertheless, MIDI was designed around keyboard playing, and does not fully capture the expressive capabilities of other instruments; Jaron Lanier cites the standard as an example of technological "lock-in" that unexpectedly limited what was possible to express. Some of these shortcomings, such as the lack of per-note pitch bend, are addressed in MIDI 2.0, described below.
Software synthesizers offer great power and versatility, but some players feel that division of attention between a MIDI keyboard and a computer keyboard and mouse robs some of the immediacy from the playing experience. Devices dedicated to real-time MIDI control provide an ergonomic benefit, and can provide a greater sense of connection with the instrument than an interface that is accessed through a mouse or a push-button digital menu. Controllers may be general-purpose devices that are designed to work with a variety of equipment, or they may be designed to work with a specific piece of software. Examples of the latter include Akai's APC40 controller for Ableton Live, and Korg's MS-20ic controller that is a reproduction of their MS-20 analog synthesizer. The MS-20ic controller includes patch cables that can be used to control signal routing in their virtual reproduction of the MS-20 synthesizer, and can also control third-party devices.
Instruments
A MIDI instrument contains ports to send and receive MIDI signals, a CPU to process those signals, an interface that allows user programming, audio circuitry to generate sound, and controllers. The operating system and factory sounds are often stored in read-only memory (ROM).
A MIDI instrument can also be a stand-alone module (without a piano-style keyboard) built around a General MIDI sound set (GM, GS or XG), with onboard editing features including transposition and pitch changes, MIDI instrument changes, and adjustment of volume, pan, reverb levels and other MIDI controllers. Typically, the MIDI module includes a large screen, so the user can view information for the currently selected function. Features can include scrolling lyrics (usually embedded in a MIDI file or karaoke MIDI), playlists, a song library and editing screens. Some MIDI modules include a harmonizer and the ability to play back and transpose MP3 audio files.
Synthesizers
Synthesizers may employ any of a variety of sound generation techniques. They may include an integrated keyboard, or may exist as "sound modules" or "expanders" that generate sounds when triggered by an external controller, such as a MIDI keyboard. Sound modules are typically designed to be mounted in a 19-inch rack. Manufacturers commonly produce a synthesizer in both standalone and rack-mounted versions, and often offer the keyboard version in a variety of sizes.
Samplers
A sampler can record and digitize audio, store it in random-access memory (RAM), and play it back. Samplers typically allow a user to edit a sample and save it to a hard disk, apply effects to it, and shape it with the same tools that synthesizers use. They also may be available in either keyboard or rack-mounted form. Instruments that generate sounds through sample playback, but have no recording capabilities, are known as "ROMplers".
Samplers did not become established as viable MIDI instruments as quickly as synthesizers did, due to the expense of memory and processing power at the time. The first low-cost MIDI sampler was the Ensoniq Mirage, introduced in 1984. MIDI samplers are typically limited by displays that are too small to use to edit sampled waveforms, although some can be connected to a computer monitor.
Drum machines
Drum machines typically are sample playback devices that specialize in drum and percussion sounds. They commonly contain a sequencer that allows the creation of drum patterns, and allows them to be arranged into a song. There often are multiple audio outputs, so that each sound or group of sounds can be routed to a separate output. The individual drum voices may be playable from another MIDI instrument, or from a sequencer.
Workstations and hardware sequencers
Sequencer technology predates MIDI. Analog sequencers use CV/Gate signals to control pre-MIDI analog synthesizers. MIDI sequencers typically are operated by transport features modeled after those of tape decks. They are capable of recording MIDI performances, and arranging them into individual tracks along a multitrack recording concept. Music workstations combine controller keyboards with an internal sound generator and a sequencer. These can be used to build complete arrangements and play them back using their own internal sounds, and function as self-contained music production studios. They commonly include file storage and transfer capabilities.
Effects devices
Some effects units can be remotely controlled via MIDI. For example, the Eventide H3000 Ultra-harmonizer allows such extensive MIDI control that it is playable as a synthesizer. The Drum Buddy, a pedal-format drum machine, has a MIDI connection so that it can have its tempo synchronized with a looper pedal or time-based effects such as delay.
Technical specifications
MIDI messages are made up of 8-bit words (commonly called bytes) that are transmitted serially at a rate of 31.25 kbit/s. This rate was chosen because it is an exact division of 1 MHz, the operational speed of many early microprocessors. The first bit of each word identifies whether the word is a status byte or a data byte, and is followed by seven bits of information. A start bit and a stop bit are added to each byte for framing purposes, so a MIDI byte requires ten bits for transmission.
A MIDI link can carry sixteen independent channels of information. The channels are numbered 1–16, but their actual corresponding binary encoding is 0–15. A device can be configured to only listen to specific channels and to ignore the messages sent on other channels ("Omni Off" mode), or it can listen to all channels, effectively ignoring the channel address ("Omni On"). An individual device may be monophonic (the start of a new "note-on" MIDI command implies the termination of the previous note), or polyphonic (multiple notes may be sounding at once, until the polyphony limit of the instrument is reached, or the notes reach the end of their decay envelope, or explicit "note-off" MIDI commands are received). Receiving devices can typically be set to all four combinations of "omni off/on" versus "mono/poly" modes.
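The channel addressing described above lives in the status byte of each channel message. The following sketch (an illustration, not quoted from the specification) shows how a receiver distinguishes status from data bytes, and how the 0–15 encoding maps to the channels 1–16 that users see.

```python
# The high bit marks a status byte; the low nibble of a channel
# message status byte carries the channel, encoded 0-15.

def is_status_byte(b: int) -> bool:
    return b & 0x80 != 0

def channel_of(status: int) -> int:
    """1-based channel number of a channel message status byte."""
    return (status & 0x0F) + 1

# 0x91 = note-on (status nibble 0x9) on channel 2 (low nibble 0x1):
assert is_status_byte(0x91)
assert channel_of(0x91) == 2

def accepts(status: int, listen_channel: int, omni_on: bool) -> bool:
    """Whether a device set to listen_channel handles this channel message."""
    return omni_on or channel_of(status) == listen_channel
```

With "Omni On" the channel test is bypassed entirely, which is exactly the "effectively ignoring the channel address" behavior described above.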
Messages
A MIDI message is an instruction that controls some aspect of the receiving device. A MIDI message consists of a status byte, which indicates the type of the message, followed by up to two data bytes that contain the parameters. MIDI messages can be channel messages sent on only one of the 16 channels and monitored only by devices on that channel, or system messages that all devices receive. Each receiving device ignores data not relevant to its function. There are five types of message: Channel Voice, Channel Mode, System Common, System Real-Time, and System Exclusive.
Channel Voice messages transmit real-time performance data over a single channel. Examples include "note-on" messages which contain a MIDI note number that specifies the note's pitch, a velocity value that indicates how forcefully the note was played, and the channel number; "note-off" messages that end a note; program change messages that change a device's patch; and control changes that allow adjustment of an instrument's parameters. MIDI notes are numbered from 0 to 127 and assigned to C−1 to G9. This corresponds to a range of 8.175799 to 12543.85 Hz (assuming equal temperament and 440 Hz A4) and extends beyond the 88-note piano range from A0 to C8.
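The note-number-to-frequency mapping quoted above follows from the standard equal-temperament formula with A4 = 440 Hz at note 69; a minimal sketch:

```python
# f(n) = 440 * 2^((n - 69) / 12), the equal-temperament mapping
# implied by the ranges quoted in the text.

def note_to_hz(note: int, a4: float = 440.0) -> float:
    return a4 * 2 ** ((note - 69) / 12)

print(round(note_to_hz(0), 6))    # 8.175799  (C-1, lowest MIDI note)
print(round(note_to_hz(69)))      # 440       (A4)
print(round(note_to_hz(127), 2))  # 12543.85  (G9, highest MIDI note)
```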
System Exclusive messages
System Exclusive (SysEx) messages are a major reason for the flexibility and longevity of the MIDI standard. Manufacturers use them to create proprietary messages that control their equipment more thoroughly than standard MIDI messages could. SysEx messages are addressed to a specific device in a system, and can include functionality beyond what the MIDI standard provides. Each manufacturer has a unique identifier that is included in its SysEx messages, which helps ensure that only the targeted device responds to the message, and that all others ignore it. Many instruments also include a SysEx ID setting, so a controller can address two devices of the same model independently.
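A sketch of the SysEx framing and manufacturer-ID scheme described above (the byte values here are illustrative; consult the specification for the full rules): a SysEx message is bracketed by 0xF0 and 0xF7, and the bytes immediately after 0xF0 carry the manufacturer ID — one byte, or three bytes when the first is 0x00.

```python
# Minimal SysEx parser: returns (manufacturer_id, payload).

def parse_sysex(msg: bytes):
    if not (msg and msg[0] == 0xF0 and msg[-1] == 0xF7):
        raise ValueError("not a complete SysEx message")
    if msg[1] == 0x00:                 # extended three-byte ID
        return msg[1:4], msg[4:-1]
    return msg[1:2], msg[2:-1]         # one-byte ID, then payload

# Hypothetical message using Roland's one-byte ID 0x41:
mfr, payload = parse_sysex(bytes([0xF0, 0x41, 0x10, 0x42, 0xF7]))
assert mfr == bytes([0x41])
```

A receiver whose ID does not match simply discards the message, which is how "all others ignore it" is achieved in practice.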
Implementation chart
Devices typically do not respond to every type of message defined by the MIDI specification. The MIDI implementation chart was standardized by the MMA as a way for users to see what specific capabilities an instrument has, and how it responds to messages. A specific MIDI Implementation Chart is usually published for each MIDI device within the device documentation.
Electrical specifications
The MIDI 1.0 specification for the electrical interface is based on a fully isolated current loop. The MIDI out port nominally sources +5 volts through a 220 ohm resistor out through pin 4 on the MIDI out DIN connector, in on pin 4 of the receiving device's MIDI in DIN connector, through a 220 ohm protection resistor and the LED of an opto-isolator. The current then returns via pin 5 on the MIDI in port to the originating device's MIDI out port pin 5, again with a 220 ohm resistor in the path, giving a nominal current of about 5 milliamperes. Despite the cable's appearance, there is no conductive path between the two MIDI devices, only an optically isolated one. Properly designed MIDI devices are relatively immune to ground loops and similar interference. The baud rate on this system is 31,250 symbols per second, logic 0 being current on.
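A back-of-envelope check of the "about 5 milliamperes" figure, using only the values above plus an assumed opto-isolator LED forward drop (the ~1.5 V figure is an assumption for illustration, not taken from the specification):

```python
# Three 220-ohm resistors sit in the loop; the LED drops some voltage.

V_SUPPLY = 5.0        # volts
R_TOTAL = 3 * 220     # ohms: source, protection, and return resistors
V_LED = 1.5           # assumed LED forward drop (hypothetical value)

current_ma = (V_SUPPLY - V_LED) / R_TOTAL * 1000
print(round(current_ma, 1))   # ~5.3 mA, consistent with "about 5 mA"
```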
The MIDI specification provides for a ground "wire" and a braid or foil shield, connected on pin 2, protecting the two signal-carrying conductors on pins 4 and 5. Although the MIDI cable is supposed to connect pin 2 and the braid or foil shield to chassis ground, it should do so only at the MIDI out port; the MIDI in port should leave pin 2 unconnected and isolated. Some large manufacturers of MIDI devices use modified MIDI in-only DIN 5-pin sockets with the metallic conductors intentionally omitted at pin positions 1, 2, and 3 so that the maximum voltage isolation is obtained.
Extensions
MIDI's flexibility and widespread adoption have led to many refinements of the standard, and have enabled its application to purposes beyond those for which it was originally intended.
General MIDI
MIDI allows selection of an instrument's sounds through program change messages, but there is no guarantee that any two instruments have the same sound at a given program location. Program #0 may be a piano on one instrument, or a flute on another. The General MIDI (GM) standard was established in 1991, and provides a standardized sound bank that allows a Standard MIDI File created on one device to sound similar when played back on another. GM specifies a bank of 128 sounds arranged into 16 families of eight related instruments, and assigns a specific program number to each instrument. Percussion instruments are placed on channel 10, and a specific MIDI note value is mapped to each percussion sound. GM-compliant devices must offer 24-note polyphony. Any given program change selects the same instrument sound on any GM-compatible instrument.
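Because GM arranges its 128 programs into 16 families of eight, the family of any program number falls out of a simple integer division. A sketch using the standard GM family names:

```python
# GM program numbers 0-127, grouped in blocks of eight per family.

GM_FAMILIES = [
    "Piano", "Chromatic Percussion", "Organ", "Guitar",
    "Bass", "Strings", "Ensemble", "Brass",
    "Reed", "Pipe", "Synth Lead", "Synth Pad",
    "Synth Effects", "Ethnic", "Percussive", "Sound Effects",
]

def gm_family(program: int) -> str:
    return GM_FAMILIES[program // 8]

print(gm_family(0))    # Piano   (program 0 is Acoustic Grand Piano)
print(gm_family(40))   # Strings (program 40 is Violin)
```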
General MIDI specifies a standard layout of instrument sounds, called patches, each identified by a patch number (program number, PC#) and triggered by pressing a key on a MIDI keyboard. This layout ensures that MIDI sound modules and other MIDI devices faithfully reproduce the designated sounds expected by the user, and maintains reliable and consistent sound palettes across different manufacturers' MIDI devices.
The GM standard eliminates variation in note mapping. Some manufacturers had disagreed over what note number should represent middle C, but GM specifies that note number 69 plays A440, which in turn fixes middle C as note number 60. GM-compatible devices are required to respond to velocity, aftertouch, and pitch bend, to be set to specified default values at startup, and to support certain controller numbers such as for sustain pedal, and Registered Parameter Numbers. A simplified version of GM, called GM Lite, is used in mobile phones and other devices with limited processing power.
GS, XG, and GM2
A general opinion quickly formed that GM's 128-instrument sound set was not large enough. Roland's General Standard, or GS, system included additional sounds, drumkits and effects, provided a "bank select" command that could be used to access them, and used MIDI Non-Registered Parameter Numbers (NRPNs) to access its new features. Yamaha's Extended General MIDI, or XG, followed in 1994. XG similarly offered extra sounds, drumkits and effects, but used standard controllers instead of NRPNs for editing, and increased polyphony to 32 voices. Both standards feature backward compatibility with the GM specification, but are not compatible with each other. Neither standard has been adopted beyond its creator, but both are commonly supported by music software titles.
Member companies of Japan's AMEI developed the General MIDI Level 2 specification in 1999. GM2 maintains backward compatibility with GM, but increases polyphony to 32 voices, standardizes several controller numbers such as for sostenuto and soft pedal (una corda), RPNs and Universal System Exclusive Messages, and incorporates the MIDI Tuning Standard. GM2 is the basis of the instrument selection mechanism in Scalable Polyphony MIDI (SP-MIDI), a MIDI variant for low power devices that allows the device's polyphony to scale according to its processing power.
Tuning standard
Most MIDI synthesizers use equal temperament tuning. The MIDI tuning standard (MTS), ratified in 1992, allows alternate tunings. MTS allows microtunings that can be loaded from a bank of up to 128 patches, and allows real-time adjustment of note pitches. Manufacturers are not required to support the standard. Those who do are not required to implement all of its features.
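The kind of pitch encoding MTS uses can be sketched as a MIDI note number plus a 14-bit fraction of a semitone; the exact byte layout below follows the common description of the standard and should be checked against the specification itself.

```python
import math

# Express a target frequency as (note, MSB, LSB): a semitone index
# plus a 14-bit fraction of a semitone split into two 7-bit bytes.

def freq_to_mts(freq_hz: float):
    semitones = 69 + 12 * math.log2(freq_hz / 440.0)
    note = int(semitones)
    frac = min(round((semitones - note) * 16384), 16383)  # 14-bit fraction
    return note, frac >> 7, frac & 0x7F

print(freq_to_mts(440.0))   # (69, 0, 0): exactly note 69, no detuning
```

This resolution (1/16384 of a semitone, about 0.006 cents) is what lets MTS describe microtunings far finer than the 12-tone grid.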
Time code
A sequencer can drive a MIDI system with its internal clock, but when a system contains multiple sequencers, they must synchronize to a common clock. MIDI Time Code (MTC), developed by Digidesign, implements SysEx messages that have been developed specifically for timing purposes, and is able to translate to and from the SMPTE time code standard. MIDI Clock is based on tempo, but SMPTE time code is based on frames per second, and is independent of tempo. MTC, like SMPTE code, includes position information, and can adjust itself if a timing pulse is lost. MIDI interfaces such as Mark of the Unicorn's MIDI Timepiece can convert SMPTE code to MTC.
Machine control
MIDI Machine Control (MMC) consists of a set of SysEx commands that operate the transport controls of hardware recording devices. MMC lets a sequencer send Start, Stop, and Record commands to a connected tape deck or hard disk recording system, and to fast-forward or rewind the device so that it starts playback at the same point as the sequencer. No synchronization data is involved, although the devices may synchronize through MTC.
Show control
MIDI Show Control (MSC) is a set of SysEx commands for sequencing and remotely cueing show control devices such as lighting, music and sound playback, and motion control systems. Applications include stage productions, museum exhibits, recording studio control systems, and amusement park attractions.
Timestamping
One solution to MIDI timing problems is to mark MIDI events with the times they are to be played, and store them in a buffer in the MIDI interface ahead of time. Sending data beforehand reduces the likelihood that a busy passage can send a large amount of information that overwhelms the transmission link. Once stored in the interface, the information is no longer subject to timing issues associated with USB jitter and computer operating system interrupts, and can be transmitted with a high degree of accuracy. MIDI timestamping only works when both hardware and software support it. MOTU's MTS, eMagic's AMT, and Steinberg's Midex 8 had implementations that were incompatible with each other, and required users to own software and hardware manufactured by the same company to work. Timestamping is built into FireWire MIDI interfaces, Mac OS X Core Audio, and Linux ALSA Sequencer.
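The buffering idea described above can be sketched as a priority queue of (timestamp, event) pairs; the class and names here are illustrative, not any vendor's API.

```python
import heapq

# Events are queued with the time they should play; the interface
# (simulated here) emits them in time order regardless of arrival order.

class TimestampBuffer:
    def __init__(self):
        self._heap = []
        self._seq = 0                      # tie-breaker for equal times

    def schedule(self, time_ms: float, event: bytes):
        heapq.heappush(self._heap, (time_ms, self._seq, event))
        self._seq += 1

    def drain(self):
        while self._heap:
            t, _, ev = heapq.heappop(self._heap)
            yield t, ev

buf = TimestampBuffer()
buf.schedule(20.0, b"\x80\x3C\x00")   # note-off, scheduled first
buf.schedule(0.0, b"\x90\x3C\x64")    # note-on, but plays first
print([t for t, _ in buf.drain()])    # [0.0, 20.0]
```

Because ordering happens inside the interface's buffer, jitter introduced upstream (USB, OS interrupts) no longer disturbs the playback times.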
Sample dump standard
An unforeseen capability of SysEx messages was their use for transporting audio samples between instruments. This led to the development of the sample dump standard (SDS), which established a new SysEx format for sample transmission. The SDS was later augmented with a pair of commands that allow the transmission of information about sample loop points, without requiring that the entire sample be transmitted.
Downloadable sounds
The Downloadable Sounds (DLS) specification, ratified in 1997, allows mobile devices and computer sound cards to expand their wave tables with downloadable sound sets. The DLS Level 2 Specification followed in 2006, and defined a standardized synthesizer architecture. The Mobile DLS standard calls for DLS banks to be combined with SP-MIDI, as self-contained Mobile XMF files.
MIDI Polyphonic Expression
MIDI Polyphonic Expression (MPE) is a method of using MIDI that enables pitch bend, and other dimensions of expressive control, to be adjusted continuously for individual notes. MPE works by assigning each note to its own MIDI channel so that particular messages can be applied to each note individually. The specifications were released in November 2017 by AMEI and in January 2018 by the MMA. Instruments like the Continuum Fingerboard, LinnStrument, ROLI Seaboard, Sensel Morph, and Eigenharp let users control pitch, timbre, and other nuances for individual notes within chords.
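The channel-per-note scheme behind MPE can be sketched as a simple allocator; this is an illustration of the idea, with channel 1 treated as the global/master channel and channels 2–16 as the per-note pool, as is typical for an MPE lower zone.

```python
# Each new note takes its own channel from a pool, so pitch bend and
# other controllers can be applied to that note alone.

class MpeAllocator:
    def __init__(self, member_channels=range(2, 17)):
        self.free = list(member_channels)
        self.active = {}                  # note number -> channel

    def note_on(self, note: int) -> int:
        channel = self.free.pop(0)        # simplest policy: first free
        self.active[note] = channel
        return channel

    def note_off(self, note: int):
        self.free.append(self.active.pop(note))

alloc = MpeAllocator()
c1 = alloc.note_on(60)   # each note in a chord gets its own channel,
c2 = alloc.note_on(64)   # so per-note expression never collides
assert c1 != c2
```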
Alternative hardware transports
In addition to the original 31.25 kbit/s current-loop transported on 5-pin DIN, other connectors have been used for the same electrical data, and transmission of MIDI streams in different forms over USB, IEEE 1394 a.k.a. FireWire, and Ethernet is now common. Some samplers and hard drive recorders can also pass MIDI data between each other over SCSI.
USB and FireWire
Members of the USB-IF in 1999 developed a standard for MIDI over USB, the "Universal Serial Bus Device Class Definition for MIDI Devices". MIDI over USB has become increasingly common as other interfaces that had been used for MIDI connections (serial, joystick, etc.) disappeared from personal computers. Linux, Microsoft Windows, Macintosh OS X, and Apple iOS operating systems include standard class drivers to support devices that use the "Universal Serial Bus Device Class Definition for MIDI Devices". Some manufacturers choose to implement a MIDI interface over USB that is designed to operate differently from the class specification, using custom drivers.
Apple Computer developed the FireWire interface during the 1990s. It began to appear on digital video cameras toward the end of the decade, and on G3 Macintosh models in 1999. It was created for use with multimedia applications. Unlike USB, FireWire uses intelligent controllers that can manage their own transmission without attention from the main CPU. As with standard MIDI devices, FireWire devices can communicate with each other with no computer present.
XLR connectors
The Octave-Plateau Voyetra-8 synthesizer was an early MIDI implementation, using XLR3 connectors in place of the 5-pin DIN. It was released in the pre-MIDI years and later retrofitted with a MIDI interface, but kept its XLR connectors.
Serial, parallel, and joystick ports
As computer-based studio setups became common, MIDI devices that could connect directly to a computer became available. These typically used the 8-pin mini-DIN connector that was used by Apple for serial ports prior to the introduction of the Blue & White G3 models. MIDI interfaces intended for use as the centerpiece of a studio, such as the Mark of the Unicorn MIDI Time Piece, were made possible by a "fast" transmission mode that could take advantage of these serial ports' ability to operate at 20 times the standard MIDI speed. Mini-DIN ports were built into some late-1990s MIDI instruments, and enabled such devices to be connected directly to a computer. Some devices connected via PCs' DB-25 parallel port, or through the joystick port found in many PC sound cards.
mLAN
Yamaha introduced the mLAN protocol in 1999. It was conceived as a Local Area Network for musical instruments using FireWire as the transport, and was designed to carry multiple MIDI channels together with multichannel digital audio, data file transfers, and time code. mLan was used in a number of Yamaha products, notably digital mixing consoles and the Motif synthesizer, and in third-party products such as the PreSonus FIREstation and the Korg Triton Studio. No new mLan products have been released since 2007.
Ethernet and Internet
Computer network implementations of MIDI provide network routing capabilities, and the high-bandwidth channel that earlier alternatives to MIDI, such as ZIPI, were intended to bring. Proprietary implementations have existed since the 1980s, some of which use fiber optic cables for transmission. The Internet Engineering Task Force's RTP-MIDI open specification has gained industry support. Apple has supported this protocol from Mac OS X 10.4 onwards, and a Windows driver based on Apple's implementation exists for Windows XP and newer versions.
Wireless
Systems for wireless MIDI transmission have been available since the 1980s. Several commercially available transmitters allow wireless transmission of MIDI and OSC signals over Wi-Fi and Bluetooth. iOS devices are able to function as MIDI control surfaces, using Wi-Fi and OSC. An XBee radio can be used to build a wireless MIDI transceiver as a do-it-yourself project. Android devices are able to function as full MIDI control surfaces using several different protocols over Wi-Fi and Bluetooth.
TRS minijack
Some devices use standard 3.5 mm TRS audio minijack connectors for MIDI data, including the Korg Electribe 2 and the Arturia BeatStep Pro. Both come with adapters that break out to standard 5-pin DIN connectors. This became widespread enough that the MIDI Manufacturers Association standardized the wiring. The MIDI-over-minijack standards document also recommends the use of 2.5 mm connectors over 3.5 mm ones to avoid confusion with audio connectors.
MIDI 2.0
The MIDI 2.0 standard was presented on 17 January 2020 at the Winter NAMM Show in Anaheim, California, at a session titled "Strategic Overview and Introduction to MIDI 2.0" by representatives of Yamaha, Roli, Microsoft, Google, and the MIDI Association. This significant update adds bidirectional communication while maintaining backwards compatibility.
The new protocol had been researched since 2005. Prototype devices were shown privately at NAMM using wired and wireless connections, and licensing and product certification policies were developed; however, no projected release date was announced at the time. Proposed physical and transport layers included Ethernet-based protocols such as RTP MIDI and Audio Video Bridging/Time-Sensitive Networking, as well as User Datagram Protocol (UDP)-based transport.
AMEI and the MMA announced that complete specifications would be published following interoperability testing of prototype implementations from major manufacturers such as Google, Yamaha, Steinberg, Roland, Ableton, Native Instruments, and ROLI. In January 2020, Roland announced the A-88mkII controller keyboard, which supports MIDI 2.0.
MIDI 2.0 includes the MIDI Capability Inquiry specification for property exchange and profiles, and the new Universal MIDI Packet format for high-speed transports, which supports both MIDI 1.0 and MIDI 2.0 voice messages.
MIDI Capability Inquiry
MIDI Capability Inquiry (MIDI-CI) specifies Universal SysEx messages to implement device profiles, property exchange, and MIDI protocol negotiation. The specifications were released in November 2017 by AMEI and in January 2018 by the MMA.
Property exchange defines methods for inquiry of device capabilities, such as supported controllers, patch names, instrument profiles, device configuration and other metadata, and for getting or setting device configuration settings; it uses System Exclusive messages that carry JSON-format data. Profiles define common sets of MIDI controllers for various instrument types, such as drawbar organs and analog synths, or for particular tasks, improving interoperability between instruments from different manufacturers. Protocol negotiation allows devices to employ the Next Generation protocol or manufacturer-specific protocols.
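Because property-exchange payloads ride inside System Exclusive messages, every data byte must stay in the 7-bit range. The sketch below illustrates that constraint with an invented device-metadata object; the actual resource names and schemas are defined by the MIDI-CI specification, so treat the field names here as placeholders.

```python
import json

# Hypothetical device metadata of the kind property exchange carries.
# The real resource names and schema come from the MIDI-CI spec; this
# object is illustrative only.
device_info = {
    "manufacturer": "ExampleCo",
    "model": "Synth-1",
    "channels": 16,
}

# Property-exchange payloads travel inside System Exclusive messages,
# whose data bytes must have the high bit clear (values 0x00-0x7F).
# JSON encoded as ASCII satisfies this automatically.
payload = json.dumps(device_info).encode("ascii")
assert all(b < 0x80 for b in payload), "SysEx data must be 7-bit clean"
print(len(payload), "bytes of 7-bit-safe JSON")
```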
Universal MIDI Packet
MIDI 2.0 defines a new Universal MIDI Packet format, which contains messages of varying length (32, 64, 96 or 128 bits) depending on the payload type. This new packet format supports a total of 256 MIDI channels, organized in 16 groups of 16 channels; each group can carry either a MIDI 1.0 Protocol stream or a MIDI 2.0 Protocol stream, and can also include system messages, system exclusive data, and timestamps for precise rendering of several simultaneous notes. To simplify initial adoption, existing products are explicitly allowed to implement only MIDI 1.0 messages. The Universal MIDI Packet is intended for high-speed transports such as USB and Ethernet and is not supported on the existing 5-pin DIN connections. System Real-Time and System Common messages are the same as defined in MIDI 1.0.
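As a rough illustration, a 64-bit MIDI 2.0 channel-voice message can be assembled from its bit fields. The layout used below (message type 0x4 and group in the first byte, a status/channel byte, note number and attribute type completing the first 32-bit word, then a 16-bit velocity and 16 bits of attribute data in the second word) is a sketch and should be verified against the UMP specification before use.

```python
def ump_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """Pack a MIDI 2.0 note-on as a 64-bit Universal MIDI Packet (sketch).

    Field offsets here are illustrative; consult the UMP spec for the
    authoritative layout.
    """
    word0 = ((0x4 << 28)              # message type 4: MIDI 2.0 channel voice
             | (group << 24)          # group 0-15
             | ((0x90 | channel) << 16)  # note-on status nibble + channel
             | (note << 8)            # 7-bit note number in a full byte
             | attr_type)             # attribute type byte
    word1 = (velocity16 << 16) | attr_data  # 16-bit velocity + attribute data
    return (word0 << 32) | word1

# Middle C at full 16-bit velocity on group 0, channel 0.
packet = ump_note_on(group=0, channel=0, note=60, velocity16=0xFFFF)
print(hex(packet))
```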
New protocol
As of January 2019, the draft specification of the new protocol supports all core messages that also exist in MIDI 1.0, but extends their precision and resolution; it also defines many new high-precision controller messages. The specification defines default translation rules to convert between MIDI 2.0 Channel Voice and MIDI 1.0 Channel Voice messages that use different data resolution, as well as map 256 MIDI 2.0 streams to 16 MIDI 1.0 streams.
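The resolution gap the translation rules bridge can be seen with a small widening/narrowing sketch. Bit replication is one common way to widen a value so that 0 maps to 0 and 127 maps to 65535; the MIDI 2.0 specification defines its own official translation rules, so this is illustrative only.

```python
def upscale_7_to_16(v7):
    """Widen a 7-bit MIDI 1.0 value to 16 bits by bit replication (sketch)."""
    v16 = v7 << 9                          # place the 7 bits at the top
    return v16 | (v16 >> 7) | (v16 >> 14)  # repeat them into the low bits

def downscale_16_to_7(v16):
    """Narrow a 16-bit MIDI 2.0 value back to 7 bits by truncation."""
    return v16 >> 9

print(upscale_7_to_16(127))   # full scale maps to full scale
print(downscale_16_to_7(upscale_7_to_16(64)))  # round-trips exactly
```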
Data transfer formats
System Exclusive 8 messages use a new 8-bit data format, based on Universal System Exclusive messages. Mixed Data Set messages are intended to transfer large sets of data. System Exclusive 7 messages use the previous 7-bit data format.
See also
ABC notation
Digital piano
Electronic drum module
Guitar synthesizer
List of music software
MIDI mockup
MusicXML
Music Macro Language
Open Sound Control
SoundFont
Scorewriter
Synthesia
Synthetic music mobile application format
Notes
References
External links
The MIDI Association
English-language MIDI specifications can be downloaded from the MIDI Manufacturers Association
Computer hardware standards
Electronic music
Japanese inventions
Serial buses
Microcode
In processor design, microcode is a technique that interposes a layer of computer organization between the central processing unit (CPU) hardware and the programmer-visible instruction set architecture of a computer. Microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal finite-state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, although in current desktop CPUs, it is only a fallback path for cases that the faster hardwired control unit cannot handle.
Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram.
More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family.
Some hardware vendors, especially IBM, use the term microcode as a synonym for firmware. In that way, all code within a device is termed microcode regardless of it being microcode or machine code; for example, hard disk drives are said to have their microcode updated, though they typically contain both microcode and firmware.
Overview
The lowest layer in a computer's software stack is traditionally raw machine code instructions for the processor. In microcoded processors, the microcode fetches and executes those instructions. To avoid confusion, each microprogram-related element is differentiated by the micro prefix: microinstruction, microassembler, microprogrammer, microarchitecture, etc.
Engineers normally write the microcode during the design phase of a processor, storing it in a read-only memory (ROM) or programmable logic array (PLA) structure, or in a combination of both. However, machines also exist that have some or all microcode stored in static random-access memory (SRAM) or flash memory. This is traditionally denoted as writeable control store in the context of computers, which can be either read-only or read-write memory. In the latter case, the CPU initialization process loads microcode into the control store from another storage medium, with the possibility of altering the microcode to correct bugs in the instruction set, or to implement new machine instructions.
Complex digital processors may also employ more than one (possibly microcode-based) control unit in order to delegate sub-tasks that must be performed essentially asynchronously in parallel. A high-level programmer, or even an assembly language programmer, does not normally see or change microcode. Unlike machine code, which often retains some backward compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself.
Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry. For example, a single typical horizontal microinstruction might specify the following operations:
Connect register 1 to the A side of the ALU
Connect register 7 to the B side of the ALU
Set the ALU to perform two's-complement addition
Set the ALU's carry input to zero
Store the result value in register 8
Update the condition codes from the ALU status flags (negative, zero, overflow, and carry)
Microjump to a given microPC address for the next microinstruction
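The listed micro-operations can be executed against a toy register file to show what a single horizontal microinstruction accomplishes in one cycle. The 16-bit datapath width and register-file layout below are invented for illustration.

```python
# Toy register file; a 16-bit datapath width is assumed for illustration.
regs = {i: 0 for i in range(16)}
regs[1], regs[7] = 25, 17

def execute_microinstruction(regs):
    """One cycle of the horizontal microinstruction described above."""
    a = regs[1]                 # connect register 1 to the A side of the ALU
    b = regs[7]                 # connect register 7 to the B side of the ALU
    carry_in = 0                # set the ALU's carry input to zero
    result = a + b + carry_in   # two's-complement addition
    regs[8] = result & 0xFFFF   # store the result value in register 8
    flags = {                   # update condition codes from ALU status
        "negative": bool(result & 0x8000),
        "zero": (result & 0xFFFF) == 0,
        "carry": result > 0xFFFF,
    }
    return flags

flags = execute_microinstruction(regs)
print(regs[8], flags)
```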
To simultaneously control all of the processor's features in one cycle, the microinstruction is often wider than 50 bits; e.g., 128 bits on a 360/85 with an emulator feature. Microprograms are carefully designed and optimized for the fastest possible execution, as a slow microprogram would result in a slow machine instruction and degraded performance for related application programs that use such instructions.
Justification
Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were hardwired. Each step needed to fetch, decode, and execute the machine instructions (including any operand address calculations, reads, and writes) was controlled directly by combinational logic and rather minimal sequential state machine circuitry. While such hard-wired processors were very efficient, the need for powerful instruction sets with multi-step addressing and complex operations (see below) made them difficult to design and debug; highly encoded and variable-length instructions can contribute to this as well, especially when very irregular encodings are used.
Microcode simplified the job by allowing much of the processor's behaviour and programming model to be defined via microprogram routines rather than by dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change. This greatly facilitated CPU design.
From the 1940s to the late 1970s, a large portion of programming was done in assembly language; higher-level instructions mean greater programmer productivity, so an important advantage of microcode was the relative ease by which powerful machine instructions can be defined. The ultimate extension of this is "Directly Executable High Level Language" designs, in which each statement of a high-level language such as PL/I is entirely and directly executed by microcode, without compilation. The IBM Future Systems project and Data General Fountainhead Processor are examples of this. During the 1970s, CPU speeds grew more quickly than memory speeds and numerous techniques such as memory block transfer, memory pre-fetch and multi-level caches were used to alleviate this. High-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a character string can be done as a single machine instruction, thus avoiding multiple instruction fetches.
Architectures with instruction sets implemented by complex microprograms included the IBM System/360 and Digital Equipment Corporation VAX. The approach of increasingly complex microcode-implemented instruction sets was later called complex instruction set computer (CISC). An alternate approach, used in many microprocessors, is to use one or more programmable logic array (PLA) or read-only memory (ROM) (instead of combinational logic) mainly for instruction decoding, and let a simple state machine (without much, or any, microcode) do most of the sequencing. The MOS Technology 6502 is an example of a microprocessor using a PLA for instruction decode and sequencing. The PLA is visible in photomicrographs of the chip, and its operation can be seen in the transistor-level simulation.
Microprogramming is still used in modern CPU designs. In some cases, after the microcode is debugged in simulation, logic functions are substituted for the control store. Logic functions are often faster and less expensive than the equivalent microprogram memory.
Benefits
A processor's microprograms operate on a more primitive, totally different, and much more hardware-oriented architecture than the assembly instructions visible to normal programmers. In coordination with the hardware, the microcode implements the programmer-visible architecture. The underlying hardware need not have a fixed relationship to the visible architecture. This makes it easier to implement a given instruction set architecture on a wide variety of underlying hardware micro-architectures.
The IBM System/360 has a 32-bit architecture with 16 general-purpose registers, but most System/360 implementations use hardware that implements a much simpler underlying microarchitecture. For example, the System/360 Model 30 has 8-bit data paths to the arithmetic logic unit (ALU) and main memory and implements the general-purpose registers in a special unit of higher-speed core memory, while the System/360 Model 40 has 8-bit data paths to the ALU and 16-bit data paths to main memory, also implementing the general-purpose registers in a special unit of higher-speed core memory. The Model 50 has full 32-bit data paths and implements the general-purpose registers in a special unit of higher-speed core memory. The Model 65 through the Model 195 have larger data paths and implement the general-purpose registers in faster transistor circuits. In this way, microprogramming enabled IBM to design many System/360 models with substantially different hardware spanning a wide range of cost and performance, while keeping them all architecturally compatible. This dramatically reduced the number of unique system software programs that had to be written for each model.
A similar approach was used by Digital Equipment Corporation (DEC) in their VAX family of computers. As a result, different VAX processors use different microarchitectures, yet the programmer-visible architecture does not change.
Microprogramming also reduces the cost of field changes to correct defects (bugs) in the processor; a bug can often be fixed by replacing a portion of the microprogram rather than by changes being made to hardware logic and wiring.
History
In 1947, the design of the MIT Whirlwind introduced the concept of a control store as a way to simplify computer design and move beyond ad hoc methods. The control store is a diode matrix: a two-dimensional lattice, where one dimension accepts "control time pulses" from the CPU's internal clock, and the other connects to control signals on gates and other circuits. A "pulse distributor" takes the pulses generated by the CPU clock and breaks them up into eight separate time pulses, each of which activates a different row of the lattice. When the row is activated, it activates the control signals connected to it.
Described another way, the signals transmitted by the control store are played much like a player piano roll: they are controlled by a sequence of very wide words constructed of bits, and they are played sequentially. In a control store, however, the song is short and repeated continuously.
In 1951, Maurice Wilkes enhanced this concept by adding conditional execution, a concept akin to a conditional in computer software. His initial implementation consisted of a pair of matrices: the first one generated signals in the manner of the Whirlwind control store, while the second matrix selected which row of signals (the microprogram instruction word, so to speak) to invoke on the next cycle. Conditionals were implemented by providing a way that a single line in the control store could choose from alternatives in the second matrix. This made the control signals conditional on the detected internal signal. Wilkes coined the term microprogramming to describe this feature and distinguish it from a simple control store.
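Wilkes's two-matrix idea can be modeled as a control store whose rows carry control signals plus two possible successors, with an internal condition signal choosing between them. The row numbers, signal names, and the condition hook below are invented for illustration.

```python
# Each row: a set of control signals, plus (next_if_false, next_if_true).
control_store = {
    0: {"signals": {"fetch"},  "next": (1, 1)},
    1: {"signals": {"decode"}, "next": (2, 3)},  # conditional microjump
    2: {"signals": {"add"},    "next": (0, 0)},
    3: {"signals": {"sub"},    "next": (0, 0)},
}

def run(condition_at_decode, steps=3):
    """Play the control store; the condition is only sampled at row 1."""
    row, trace = 0, []
    for _ in range(steps):
        entry = control_store[row]
        trace.append(entry["signals"])
        cond = condition_at_decode if row == 1 else False
        row = entry["next"][1 if cond else 0]  # second matrix selects next row
    return trace

print(run(condition_at_decode=False))  # fetch, decode, add
print(run(condition_at_decode=True))   # fetch, decode, sub
```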
Examples
The EMIDEC 1100 reputedly uses a hard-wired control store consisting of wires threaded through ferrite cores, known as "the laces".
Most models of the IBM System/360 series are microprogrammed:
The Model 25 is unique among System/360 models in using the top 16 K bytes of core storage to hold the control storage for the microprogram. The 2025 uses a 16-bit microarchitecture with seven control words (or microinstructions). After system maintenance or when changing operating mode, the microcode is loaded from the card reader, tape, or other device. The IBM 1410 emulation for this model is loaded this way.
The Model 30 uses an 8-bit microarchitecture with only a few hardware registers; everything that the programmer saw is emulated by the microprogram. The microcode for this model is also held on special punched cards, which are stored inside the machine in a dedicated reader per card, called "CROS" units (Capacitor Read-Only Storage). Another CROS unit is added for machines ordered with 1401/1440/1460 emulation and for machines ordered with 1620 emulation.
The Model 40 uses 56-bit control words. The 2040 box implements both the System/360 main processor and the multiplex channel (the I/O processor). This model uses TROS dedicated readers similar to CROS units, but with an inductive pickup (Transformer Read-only Store).
The Model 50 has two internal datapaths which operated in parallel: a 32-bit datapath used for arithmetic operations, and an 8-bit data path used in some logical operations. The control store uses 90-bit microinstructions.
The Model 85 has separate instruction fetch (I-unit) and execution (E-unit) to provide high performance. The I-unit is hardware controlled. The E-unit is microprogrammed; the control words are 108 bits wide on a basic 360/85 and wider if an emulator feature is installed.
The NCR 315 is microprogrammed with hand-wired ferrite cores (a ROM) pulsed by a sequencer with conditional execution. Wires routed through the cores are enabled for various data and logic elements in the processor.
The Digital Equipment Corporation PDP-11 processors, with the exception of the PDP-11/20, are microprogrammed.
Most Data General Eclipse minicomputers are microprogrammed. The task of writing microcode for the Eclipse MV/8000 is detailed in the Pulitzer Prize-winning book titled The Soul of a New Machine.
Many systems from Burroughs are microprogrammed:
The B700 "microprocessor" executes application-level opcodes using sequences of 16-bit microinstructions stored in main memory; each of these is either a register-load operation or mapped to a single 56-bit "nanocode" instruction stored in read-only memory. This allows comparatively simple hardware to act either as a mainframe peripheral controller or to be packaged as a standalone computer.
The B1700 is implemented with radically different hardware including bit-addressable main memory but has a similar multi-layer organisation. The operating system preloads the interpreter for whatever language is required. These interpreters present different virtual machines for COBOL, Fortran, etc.
Microdata produced computers in which the microcode is accessible to the user; this allows the creation of custom assembler level instructions. Microdata's Reality operating system design makes extensive use of this capability.
The Xerox Alto workstation used a microcoded design but, unlike many computers, the microcode engine is not hidden from the programmer in a layered design. Applications take advantage of this to accelerate performance.
The IBM System/38 is described as having both horizontal and vertical microcode. In practice, the processor implements an instruction set architecture named the Internal Microprogrammed Interface (IMPI) using a horizontal microcode format. The so-called vertical microcode layer implements the System/38's hardware-independent Machine Interface instruction set in terms of IMPI instructions. Prior to the introduction of the IBM RS64 processor line, early IBM AS/400 systems used the same architecture.
The Nintendo 64's Reality Coprocessor (RCP), which serves as the console's graphics processing unit and audio processor, utilizes microcode; it is possible to implement new effects or tweak the processor to achieve the desired output. Some notable examples of custom RCP microcode include the high-resolution graphics, particle engines, and unlimited draw distances found in Factor 5's Indiana Jones and the Infernal Machine, Star Wars: Rogue Squadron, and Star Wars: Battle for Naboo; and the full motion video playback found in Angel Studios' Resident Evil 2.
The VU0 and VU1 vector units in the Sony PlayStation 2 are microprogrammable; in fact, VU1 is only accessible via microcode for the first several generations of the SDK.
The MicroCore Labs MCL86, MCL51 and MCL65 are examples of highly encoded "vertical" microsequencer implementations of the Intel 8086/8088, 8051, and MOS 6502.
The Digital Scientific Corp. Meta 4 Series 16 computer system was a user-microprogrammable system first available in 1970. The microcode had a primarily vertical style with 32-bit microinstructions. The instructions were stored on replaceable program boards with a grid of bit positions. One (1) bits were represented by small metal squares that were sensed by amplifiers, and zero (0) bits by the absence of the squares. The system could be configured with up to 4K 16-bit words of microstore. One of Digital Scientific's products was an emulator for the IBM 1130.
The MCP-1600 is a microprocessor made by Western Digital in the late 1970s through the early 1980s used to implement three different computer architectures in microcode: the Pascal MicroEngine, the WD16, and the DEC LSI-11, a cost-reduced PDP-11.
Earlier x86 processors are fully microcoded; starting with the Intel 80486, less complicated instructions are implemented directly in hardware. Since the Intel P6 and AMD K7 microarchitectures, x86 processors have implemented patchable microcode, which can be patched by the BIOS or operating system.
Some video cards and wireless network interface controllers also implement patchable microcode, which can be patched by the operating system.
Implementation
Each microinstruction in a microprogram provides the bits that control the functional elements that internally compose a CPU. The advantage over a hard-wired CPU is that internal CPU control becomes a specialized form of a computer program. Microcode thus transforms a complex electronic design challenge (the control of a CPU) into a less complex programming challenge. To take advantage of this, a CPU is divided into several parts:
An I-unit may decode instructions in hardware and determine the microcode address for processing the instruction in parallel with the E-unit.
A microsequencer picks the next word of the control store. A sequencer is mostly a counter, but usually also has some way to jump to a different part of the control store depending on some data, usually data from the instruction register and always some part of the control store. The simplest sequencer is just a register loaded from a few bits of the control store.
A register set is a fast memory containing the data of the central processing unit. It may include the program counter and stack pointer, and may also include other registers that are not easily accessible to the application programmer. Often the register set is a triple-ported register file; that is, two registers can be read, and a third written at the same time.
An arithmetic and logic unit performs calculations, usually addition, logical negation, a right shift, and logical AND. It often performs other functions, as well.
There may also be a memory address register and a memory data register, used to access the main computer storage. Together, these elements form an "execution unit". Most modern CPUs have several execution units. Even simple computers usually have one unit to read and write memory, and another to execute user code. These elements could often be brought together as a single chip. This chip comes in a fixed width that would form a "slice" through the execution unit. These are known as "bit slice" chips. The AMD Am2900 family is one of the best known examples of bit slice elements. The parts of the execution units and the whole execution units are interconnected by a bundle of wires called a bus.
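The microsequencer described above, "mostly a counter" with a way to jump elsewhere in the control store, can be modeled in a few lines. The microword format (op, target) and the operation names are invented for illustration.

```python
# A toy control store: each microword is (operation, jump_target).
control_store = [
    ("load_mar",  None),   # 0
    ("read_mem",  None),   # 1
    ("alu_add",   None),   # 2
    ("jump",      0),      # 3: back to the top of the fetch loop
]

def step_sequencer(upc):
    """The simplest sequencer: increment, unless the microword says jump."""
    op, target = control_store[upc]
    return target if op == "jump" else upc + 1

upc, trace = 0, []
for _ in range(6):
    trace.append(control_store[upc][0])
    upc = step_sequencer(upc)
print(trace)  # the four-word loop wraps around and repeats
```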
Programmers develop microprograms, using basic software tools. A microassembler allows a programmer to define the table of bits symbolically. Because of its close relationship to the underlying architecture, "microcode has several properties that make it difficult to generate using a compiler." A simulator program is intended to execute the bits in the same way as the electronics, and allows much more freedom to debug the microprogram. After the microprogram is finalized, and extensively tested, it is sometimes used as the input to a computer program that constructs logic to produce the same data. This program is similar to those used to optimize a programmable logic array. Even without fully optimal logic, heuristically optimized logic can vastly reduce the number of transistors from the number needed for a read-only memory (ROM) control store. This reduces the cost to produce, and the electricity used by, a CPU.
Microcode can be characterized as horizontal or vertical, referring primarily to whether each microinstruction controls CPU elements with little or no decoding (horizontal microcode) or requires extensive decoding by combinatorial logic before doing so (vertical microcode). Consequently, each horizontal microinstruction is wider (contains more bits) and occupies more storage space than a vertical microinstruction.
Horizontal microcode
"Horizontal microcode has several discrete micro-operations that are combined in a single microinstruction for simultaneous operation." Horizontal microcode is typically contained in a fairly wide control store; it is not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock a microcode word is read, decoded, and used to control the functional elements that make up the CPU.
In a typical implementation a horizontal microprogram word comprises fairly tightly defined groups of bits. For example, one simple arrangement might be a register source A field, a register source B field, a destination register field, an arithmetic and logic operation field, a type-of-jump field, and a jump address field.
For this type of micromachine to implement a JUMP instruction with the address following the opcode, the microcode might require two clock ticks. The engineer designing it would write microassembler source code looking something like this:
# Any line starting with a number-sign is a comment
# This is just a label, the ordinary way assemblers symbolically represent a
# memory address.
InstructionJUMP:
# To prepare for the next instruction, the instruction-decode microcode has already
# moved the program counter to the memory address register. This instruction fetches
# the target address of the jump instruction from the memory word following the
# jump opcode, by copying from the memory data register to the memory address register.
# This gives the memory system two clock ticks to fetch the next
# instruction to the memory data register for use by the instruction decode.
# The sequencer instruction "next" means just add 1 to the control word address.
MDR, NONE, MAR, COPY, NEXT, NONE
# This places the address of the next instruction into the PC.
# This gives the memory system a clock tick to finish the fetch started on the
# previous microinstruction.
# The sequencer instruction is to jump to the start of the instruction decode.
MAR, 1, PC, ADD, JMP, InstructionDecode
# The instruction decode is not shown, because it is usually a mess, very particular
# to the exact processor being emulated. Even this example is simplified.
# Many CPUs have several ways to calculate the address, rather than just fetching
# it from the word following the op-code. Therefore, rather than just one
# jump instruction, those CPUs have a family of related jump instructions.
For each tick it is common to find that only some portions of the CPU are used, with the remaining groups of bits in the microinstruction being no-ops. With careful design of hardware and microcode, this property can be exploited to parallelise operations that use different areas of the CPU; for example, in the case above, the ALU is not required during the first tick, so it could potentially be used to complete an earlier arithmetic instruction.
Vertical microcode
In vertical microcode, each microinstruction is significantly encoded, that is, the bit fields generally pass through intermediate combinatory logic that, in turn, generates the control and sequencing signals for internal CPU elements (ALU, registers, etc.). This is in contrast with horizontal microcode, in which the bit fields either directly produce the control and sequencing signals or are only minimally encoded. Consequently, vertical microcode requires smaller instruction lengths and less storage, but requires more time to decode, resulting in a slower CPU clock.
Some vertical microcode is just the assembly language of a simple conventional computer that is emulating a more complex computer. Some processors, such as DEC Alpha processors and the CMOS microprocessors on later IBM mainframes System/390 and z/Architecture, use machine code, running in a special mode that gives it access to special instructions, special registers, and other hardware resources unavailable to regular machine code, to implement some instructions and other functions, such as page table walks on Alpha processors. This is called PALcode on Alpha processors and millicode on IBM mainframe processors.
Another form of vertical microcode has two fields:
The field select selects which part of the CPU will be controlled by this word of the control store. The field value controls that part of the CPU. With this type of microcode, a designer explicitly chooses to make a slower CPU to save money by reducing the unused bits in the control store; however, the reduced complexity may increase the CPU's clock frequency, which lessens the effect of an increased number of cycles per instruction.
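A field-select/field-value microword can be decoded with a small dispatch standing in for the combinatory logic. The field numbers, widths, and part names below are invented for illustration.

```python
# Invented field-select codes for three CPU parts.
FIELD_ALU, FIELD_REG_DST, FIELD_SEQ = 0, 1, 2

def decode(microword):
    """Expand a 6-bit vertical microword into a named control signal."""
    field_select = (microword >> 4) & 0x3   # 2-bit field select
    field_value = microword & 0xF           # 4-bit field value
    part = {FIELD_ALU: "alu_op",
            FIELD_REG_DST: "dest_register",
            FIELD_SEQ: "sequencer_op"}[field_select]
    return {part: field_value}

# Two microwords: "set ALU op 3" and "select destination register 8".
print(decode(0b00_0011), decode(0b01_1000))
```

Note how the same six bits mean different things depending on the field select: the encoding saves control-store width at the cost of an extra decode step, which is exactly the horizontal/vertical trade-off described above.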
As transistors grew cheaper, horizontal microcode came to dominate the design of CPUs using microcode, with vertical microcode being used less often.
When both vertical and horizontal microcode are used, the horizontal microcode may be referred to as nanocode or picocode.
Writable control store
A few computers were built using writable microcode. In this design, rather than storing the microcode in ROM or hard-wired logic, the microcode is stored in a RAM called a writable control store or WCS. Such a computer is sometimes called a writable instruction set computer (WISC).
Many experimental prototype computers use writable control stores; there are also commercial machines that use writable microcode, such as the Burroughs Small Systems, early Xerox workstations, the DEC VAX 8800 (Nautilus) family, the Symbolics L- and G-machines, a number of IBM System/360 and System/370 implementations, some DEC PDP-10 machines, and the Data General Eclipse MV/8000.
Many more machines offer user-programmable writable control stores as an option, including the HP 2100, DEC PDP-11/60 and Varian Data Machines V-70 series minicomputers. The IBM System/370 includes a facility called Initial-Microprogram Load (IML or IMPL) that can be invoked from the console, as part of power-on reset (POR) or from another processor in a tightly coupled multiprocessor complex.
Some commercial machines, for example IBM 360/85, have both a read-only storage and a writable control store for microcode.
WCS offers several advantages including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs can provide. User-programmable WCS allows the user to optimize the machine for specific purposes.
Starting with the Pentium Pro in 1995, several x86 CPUs have writable Intel Microcode. This, for example, has allowed bugs in the Intel Core 2 and Intel Xeon microcodes to be fixed by patching their microprograms, rather than requiring the entire chips to be replaced. A second prominent example is the set of microcode patches that Intel offered for some of their processor architectures of up to 10 years in age, in a bid to counter the security vulnerabilities discovered in their designs – Spectre and Meltdown – which went public at the start of 2018. A microcode update can be installed by Linux, FreeBSD, Microsoft Windows, or the motherboard BIOS.
Comparison to VLIW and RISC
The design trend toward heavily microcoded processors with complex instructions began in the early 1960s and continued until roughly the mid-1980s. At that point the RISC design philosophy started becoming more prominent.
A CPU that uses microcode generally takes several clock cycles to execute a single instruction, one clock cycle for each step in the microprogram for that instruction. Some CISC processors include instructions that can take a very long time to execute. Such variations interfere with both interrupt latency and, what is far more important in modern systems, pipelining.
When designing a new processor, a hardwired control RISC has the following advantages over microcoded CISC:
Programming has largely moved away from assembly level, so it's no longer worthwhile to provide complex instructions for productivity reasons.
Simpler instruction sets allow direct execution by hardware, avoiding the performance penalty of microcoded execution.
Analysis shows complex instructions are rarely used, hence the machine resources devoted to them are largely wasted.
The machine resources devoted to rarely used complex instructions are better used for expediting performance of simpler, commonly used instructions.
Complex microcoded instructions may require many clock cycles that vary, and are difficult to pipeline for increased performance.
There are counterpoints as well:
The complex instructions in heavily microcoded implementations may not take much extra machine resources, except for microcode space. For example, the same ALU is often used to calculate an effective address and to compute the result from the operands, e.g., the original Z80, 8086, and others.
The simpler non-RISC instructions (i.e., involving direct memory operands) are frequently used by modern compilers. Even immediate to stack (i.e., memory result) arithmetic operations are commonly employed. Although such memory operations, often with varying length encodings, are more difficult to pipeline, it is still fully feasible to do so, as clearly exemplified by the i486, AMD K5, Cyrix 6x86, Motorola 68040, etc.
Non-RISC instructions inherently perform more work per instruction (on average), and are also normally highly encoded, so they enable smaller overall size of the same program, and thus better use of limited cache memories.
Many RISC and VLIW processors are designed to execute every instruction (as long as it is in the cache) in a single cycle. This is very similar to the way CPUs with microcode execute one microinstruction per cycle. VLIW processors have instructions that behave similarly to very wide horizontal microcode, although typically without such fine-grained control over the hardware as provided by microcode. RISC instructions are sometimes similar to the narrow vertical microcode.
Microcoding has been popular in application-specific processors such as network processors, microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, graphics processing units, and in other hardware.
Micro Ops
Modern CISC implementations, such as the x86 family, decode instructions into dynamically buffered micro-operations ("μops") with an instruction encoding similar to RISC or traditional microcode. A hardwired instruction decode unit directly emits μops for common x86 instructions, but falls back to a more traditional microcode ROM for more complex or rarely used instructions.
For example, an x86 might look up μops from microcode to handle complex multistep operations such as loop or string instructions, floating-point unit transcendental functions or unusual values such as denormal numbers, and special-purpose instructions such as CPUID.
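The two decode paths can be sketched as a lookup with a fallback. This is a deliberately simplified Python model (the instruction strings and μop names are invented for illustration, not real x86 encodings): common instructions translate directly into one or two μops, while a complex instruction indexes a microcode ROM holding a longer sequence.

```python
# Hypothetical sketch of a modern CISC front end: common instructions are
# translated by a hardwired decoder into one or two RISC-like
# micro-operations, while complex instructions fall back to a microcode ROM
# holding a longer uop sequence.

# Fast path: hardwired one-to-few translations (assumed encodings).
HARDWIRED = {
    "ADD r1, r2": ["uop_add r1, r2"],
    "MOV r1, [mem]": ["uop_load tmp, [mem]", "uop_mov r1, tmp"],
}

# Slow path: microcode ROM for multistep instructions such as string moves.
MICROCODE_ROM = {
    "REP MOVS": ["uop_load tmp, [rsi]", "uop_store [rdi], tmp",
                 "uop_add rsi, 1", "uop_add rdi, 1",
                 "uop_dec rcx", "uop_branch_if_nonzero rcx, -5"],
}

def decode(instruction: str) -> list[str]:
    """Emit uops from the hardwired decoder, falling back to the ROM."""
    if instruction in HARDWIRED:
        return HARDWIRED[instruction]
    return MICROCODE_ROM[instruction]

print(decode("ADD r1, r2"))     # single uop from the fast path
print(len(decode("REP MOVS")))  # multi-uop sequence from the ROM
```

In a real front end the fast path is combinational logic rather than a table, but the division of labour is the same: the ROM is only consulted for the rare, multistep cases.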
See also
Address generation unit (AGU)
CPU design
Finite-state machine (FSM)
Firmware
Floating-point unit (FPU)
Pentium FDIV bug
Instruction pipeline
Microsequencer
MikroSim
Millicode
Superscalar
Notes
References
Further reading
External links
Writable Instruction Set Computer
Capacitor Read-only Store
Transformer Read-only Store
A Brief History of Microprogramming
Intel processor microcode security update (fixes the issues when running 32-bit virtual machines in PAE mode)
Notes on Intel Microcode Updates, March 2013, by Ben Hawkes, archived from the original on September 7, 2015
Hole seen in Intel's bug-busting feature, EE Times, 2002, by Alexander Wolfe, archived from the original on March 9, 2003
Opteron Exposed: Reverse Engineering AMD K8 Microcode Updates, July 26, 2004
Instruction processing
Firmware
Central processing unit
BIOS |
20003 | https://en.wikipedia.org/wiki/Multitier%20architecture | Multitier architecture | In software engineering, multitier architecture (often referred to as n-tier architecture) is a client–server architecture in which presentation, application processing and data management functions are physically separated. The most widespread use of multitier architecture is the three-tier architecture.
N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers acquire the option of modifying or adding a specific tier, instead of reworking the entire application. A three-tier architecture is typically composed of a presentation tier, a logic tier, and a data tier.
While the concepts of layer and tier are often used interchangeably, one fairly common point of view is that there is indeed a difference. This view holds that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. For example, a three-layer solution could easily be deployed on a single tier, such as in the case of an extreme database-centric architecture called RDBMS-only architecture, or in a personal workstation.
Layers
The "Layers" architectural pattern has been described in various publications.
Common layers
In a logical multilayer architecture for an information system with an object-oriented design, the following four are the most common:
Presentation layer (a.k.a. UI layer, view layer, presentation tier in multitier architecture)
Application layer (a.k.a. service layer or GRASP Controller Layer)
Business layer (a.k.a. business logic layer (BLL), domain logic layer)
Data access layer (a.k.a. persistence layer, logging, networking, and other services which are required to support a particular business layer)
The book Domain Driven Design describes some common uses for the above four layers, although its primary focus is the domain layer.
If the application architecture has no explicit distinction between the business layer and the presentation layer (i.e., the presentation layer is considered part of the business layer), then a traditional client-server (two-tier) model has been implemented.
The more usual convention is that the application layer (or service layer) is considered a sublayer of the business layer, typically encapsulating the API definition surfacing the supported business functionality. The application/business layers can, in fact, be further subdivided to emphasize additional sublayers of distinct responsibility. For example, if the model–view–presenter pattern is used, the presenter sublayer might be used as an additional layer between the user interface layer and the business/application layer (as represented by the model sublayer).
Some also identify a separate layer called the business infrastructure layer (BI), located between the business layer(s) and the infrastructure layer(s). It's also sometimes called the "low-level business layer" or the "business services layer". This layer is very general and can be used in several application tiers (e.g. a CurrencyConverter).
The infrastructure layer can be partitioned into different levels (high-level or low-level technical services). Developers often focus on the persistence (data access) capabilities of the infrastructure layer and therefore only talk about the persistence layer or the data access layer (instead of an infrastructure layer or technical services layer). In other words, the other kind of technical services are not always explicitly thought of as part of any particular layer.
A layer is on top of another, because it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Another common view is that layers do not always strictly depend on only the adjacent layer below. For example, in a relaxed layered system (as opposed to a strict layered system) a layer can also depend on all the layers below it.
Three-tier architecture
Three-tier architecture is a client-server software architecture pattern in which the user interface (presentation), functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan at Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts.
Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology. For example, a change of operating system in the presentation tier would only affect the user interface code.
Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic that may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe that contains the computer data storage logic. The middle tier may be multitiered itself (in which case the overall architecture is called an "n-tier architecture").
Presentation tier
This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing and shopping cart contents. It communicates with the other tiers by sending results to the browser/client tier and to the other tiers in the network. In simple terms, it is a layer which users can access directly (such as a web page, or an operating system's GUI).
Application tier (business logic, logic tier, or middle tier)
The logical tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.
Data tier
The data tier includes the data persistence mechanisms (database servers, file shares, etc.) and the data access layer that encapsulates the persistence mechanisms and exposes the data. The data access layer should provide an API to the application tier that exposes methods of managing the stored data without exposing or creating dependencies on the data storage mechanisms. Avoiding dependencies on the storage mechanisms allows for updates or changes without the application tier clients being affected by or even aware of the change. As with the separation of any tier, there are costs for implementation and often costs to performance in exchange for improved scalability and maintainability.
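The encapsulation described above can be sketched in a few lines of Python. The class and method names here are illustrative, not a standard API: the application tier is written purely against an abstract data-access interface, so the persistence mechanism behind it can be swapped without its clients being affected.

```python
# Illustrative data tier: the application tier depends only on this API,
# never on the storage mechanism behind it.
from abc import ABC, abstractmethod

class ProductStore(ABC):
    """Data access API exposed to the application tier."""
    @abstractmethod
    def get(self, product_id: int) -> dict: ...
    @abstractmethod
    def save(self, product: dict) -> None: ...

class InMemoryProductStore(ProductStore):
    """One persistence mechanism; could be replaced by an RDBMS-backed
    implementation without changing any application-tier code."""
    def __init__(self):
        self._rows = {}
    def get(self, product_id):
        return self._rows[product_id]
    def save(self, product):
        self._rows[product["id"]] = product

def apply_discount(store: ProductStore, product_id: int, pct: float) -> dict:
    """Application-tier logic, written only against the ProductStore API."""
    product = store.get(product_id)
    product["price"] = round(product["price"] * (1 - pct), 2)
    store.save(product)
    return product

store = InMemoryProductStore()
store.save({"id": 1, "price": 20.0})
print(apply_discount(store, 1, 0.25))  # {'id': 1, 'price': 15.0}
```

Replacing `InMemoryProductStore` with, say, a database-backed implementation of the same interface is exactly the kind of change the tier separation is meant to absorb.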
Web development usage
In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers:
A front-end web server serving static content, and potentially some cached dynamic content. In a web-based application, the front end is the content rendered by the browser. The content may be static or generated dynamically.
A middle dynamic content processing and generation level application server (e.g., Symfony, Spring, ASP.NET, Django, Rails, Node.js).
A back-end database or data store, comprising both data sets and the database management system software that manages and provides access to the data.
Other considerations
Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers. Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster.
Traceability
The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important when systems increase in complexity. The Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers.
Generally, the term "tiers" is used to describe physical distribution of components of a system on separate servers, computers, or networks (processing nodes). A three-tier architecture then will have three processing nodes. The term "layers" refers to a logical grouping of components which may or may not be physically located on one processing node.
See also
Abstraction layer
Client–server model
Database-centric architecture
Front-end and back-end
Hierarchical internetworking model
Load balancing (computing)
Open Services Architecture
Rich web application
Service layer
Shearing layers
Web application
References
External links
Linux journal, Three Tier Architecture
Microsoft Application Architecture Guide
Example of free 3-tier system
What Is the 3-Tier Architecture?
Description of a concrete layered architecture for .NET/WPF Rich Client Applications
Distributed computing architecture
Software architecture
World Wide Web
Architectural pattern (computer science)
Software design
Software engineering terminology
Software design patterns |
20016 | https://en.wikipedia.org/wiki/Myrinet | Myrinet | Myrinet, ANSI/VITA 26-1998, is a high-speed local area networking system designed by the company Myricom to be used as an interconnect between multiple machines to form computer clusters.
Description
Myrinet was promoted as having lower protocol overhead than standards such as Ethernet, and therefore better throughput, less interference, and lower latency while using the host CPU. Although it can be used as a traditional networking system, Myrinet is often used directly by programs that "know" about it, thereby bypassing a call into the operating system.
Myrinet physically consists of two fibre optic cables, upstream and downstream, connected to the host computers with a single connector. Machines are connected via low-overhead routers and switches, as opposed to connecting one machine directly to another. Myrinet includes a number of fault-tolerance features, mostly backed by the switches. These include flow control, error control, and "heartbeat" monitoring on every link. The "fourth-generation" Myrinet, called Myri-10G, supported a 10 Gbit/s data rate and can use 10 Gigabit Ethernet on PHY, the physical layer (cables, connectors, distances, signaling). Myri-10G started shipping at the end of 2005.
Myrinet was approved in 1998 by the American National Standards Institute for use on the VMEbus as ANSI/VITA 26-1998. One of the earliest publications on Myrinet is a 1995 IEEE article.
Performance
Myrinet is a lightweight protocol with little overhead that allows it to operate with throughput close to the basic signaling speed of the physical layer. For supercomputing, the low latency of Myrinet is even more important than its throughput performance, since, according to Amdahl's law, a high-performance parallel system tends to be bottlenecked by its slowest sequential process, which in all but the most embarrassingly parallel supercomputer workloads is often the latency of message transmission across the network.
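The Amdahl's-law argument can be made concrete with a small calculation. The figures below are illustrative, not Myrinet measurements: if communication latency contributes to the serial fraction of a workload, even a modest serial fraction caps the speedup attainable from adding nodes, which is why shaving latency matters so much.

```python
# Amdahl's law: speedup on n processors when a fraction s of the work
# (here standing in for message latency) is inherently serial.
# All figures are illustrative.

def amdahl_speedup(serial_fraction: float, n: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

n = 1024
for s in (0.10, 0.05, 0.01):   # serial fraction attributed to the network
    print(f"s={s:.2f}: speedup on {n} nodes = {amdahl_speedup(s, n):.1f}x")
```

With a 10% serial fraction, 1024 nodes deliver under 10x; cutting the serial fraction to 1% raises that to roughly 90x, a far larger gain than any realistic increase in link throughput.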
Deployment
According to Myricom, 141 (28.2%) of the June 2005 TOP500 supercomputers used Myrinet technology. In the November 2005 TOP500, the number of supercomputers using Myrinet was down to 101 computers, or 20.2%, in November 2006, 79 (15.8%), and by November 2007, 18 (3.6%), a long way behind gigabit Ethernet at 54% and InfiniBand at 24.2%.
In the June 2014 TOP500 list, the number of supercomputers using Myrinet interconnect was 1 (0.2%).
In November, 2013, the assets of Myricom (including the Myrinet technology) were acquired by CSP Inc. In 2016, it was reported that Google had also offered to buy the company.
See also
HIPPI
InfiniBand
List of device bandwidths
NUMAlink
Quadrics (company)
RapidIO
Scalable Coherent Interconnect (SCI)
References
External links
CSPI, current owner of Myrinet.
Supercomputing
Computer networks |
20017 | https://en.wikipedia.org/wiki/Musique%20concr%C3%A8te | Musique concrète | Musique concrète is a type of music composition that utilizes recorded sounds as raw material. Sounds are often modified through the application of audio effects and tape manipulation techniques, and may be assembled into a form of montage. It can feature sounds derived from recordings of musical instruments, the human voice, and the natural environment as well as those created using synthesizers and computer-based digital signal processing. Compositions in this idiom are not restricted to the normal musical rules of melody, harmony, rhythm, metre, and so on. It exploits acousmatic listening, meaning sound identities can often be intentionally obscured or appear unconnected to their source cause.
The theoretical basis of musique concrète as a compositional practice was developed by French composer Pierre Schaeffer beginning in the early 1940s. It was largely an attempt to differentiate between music based on the abstract medium of notation and that created using so-called sound-objects (l'objet sonore). By the early 1950s musique concrète was contrasted with "pure" elektronische Musik (based solely on the use of electronically produced sounds rather than recorded sounds) but the distinction has since been blurred such that the term "electronic music" covers both meanings. Schaeffer's work resulted in the establishment of France's Groupe de Recherches de Musique Concrète (GRMC), which attracted important figures including Pierre Henry, Luc Ferrari, Pierre Boulez, Karlheinz Stockhausen, Edgard Varèse, and Iannis Xenakis. From the late 1960s onward, and particularly in France, the term acousmatic music (musique acousmatique) started to be used in reference to fixed media compositions that utilized both musique concrète-based techniques and live sound spatialisation. Musique concrète would influence many popular musicians, including the Beatles, Pink Floyd, and Frank Zappa.
History
Beginnings
In 1928 music critic André Cœuroy wrote in his book Panorama of Contemporary Music that "perhaps the time is not far off when a composer will be able to represent through recording, music specifically composed for the gramophone." In the same period the American composer Henry Cowell, in referring to the projects of Nikolai Lopatnikoff, believed that "there was a wide field open for the composition of music for phonographic discs." This sentiment was echoed further in 1930 by Igor Stravinsky, when he stated in the revue Kultur und Schallplatte that "there will be a greater interest in creating music in a way that will be peculiar to the gramophone record." The following year, 1931, Boris de Schloezer also expressed the opinion that one could write for the gramophone or for the wireless just as one can for the piano or the violin. Shortly after, German art theorist Rudolf Arnheim discussed the effects of microphonic recording in an essay entitled "Radio", published in 1936. In it the idea of a creative role for the recording medium was introduced and Arnheim stated that: "The rediscovery of the musicality of sound in noise and in language, and the reunification of music, noise and language in order to obtain a unity of material: that is one of the chief artistic tasks of radio".
Pierre Schaeffer and Studio d'Essai
In 1942, French composer and theoretician Pierre Schaeffer began his exploration of radiophony when he joined Jacques Copeau and his pupils in the foundation of the Studio d'Essai de la Radiodiffusion nationale. The studio originally functioned as a center for the Resistance movement in French radio, which in August 1944 was responsible for the first broadcasts in liberated Paris. It was here that Schaeffer began to experiment with creative radiophonic techniques using the sound technologies of the time. In 1948 Schaeffer began to keep a set of journals describing his attempt to create a "symphony of noises." These journals were published in 1952 as A la recherche d'une musique concrète, and according to Brian Kane, author of Sound Unseen: Acousmatic Sound in Theory and Practice, Schaeffer was driven by: "a compositional desire to construct music from concrete objects — no matter how unsatisfactory the initial results — and a theoretical desire to find a vocabulary, solfège, or method upon which to ground such music."
The development of Schaeffer's practice was informed by encounters with voice actors, and microphone usage and radiophonic art played an important part in inspiring and consolidating Schaeffer's conception of sound-based composition. Another important influence on Schaeffer's practice was cinema, and the techniques of recording and montage, which were originally associated with cinematographic practice, came to "serve as the substrate of musique concrète." Marc Battier notes that, prior to Schaeffer, Jean Epstein drew attention to the manner in which sound recording revealed what was hidden in the act of basic acoustic listening. Epstein's reference to this "phenomenon of an epiphanic being", which appears through the transduction of sound, proved influential on Schaeffer's concept of reduced listening. Schaeffer would explicitly cite Jean Epstein with reference to his use of extra-musical sound material. Epstein had already imagined that "through the transposition of natural sounds, it becomes possible to create chords and dissonances, melodies and symphonies of noise, which are a new and specifically cinematographic music".
Halim El-Dabh's tape music
Perhaps earlier than Schaeffer conducting his preliminary experiments into sound manipulation (assuming these were later than 1944, and not as early as the foundation of the Studio d'Essai in 1942) was the activity of Egyptian composer Halim El-Dabh. As a student in Cairo in the early to mid-1940s he began experimenting with "tape music" using a cumbersome wire recorder. He recorded the sounds of an ancient zaar ceremony and at the Middle East Radio studios processed the material using reverberation, echo, voltage controls, and re-recording. The resulting tape-based composition, entitled The Expression of Zaar, was presented in 1944 at an art gallery event in Cairo. El-Dabh has described his initial activities as an attempt to unlock "the inner sound" of the recordings. While his early compositional work was not widely known outside of Egypt at the time, El-Dabh would eventually gain recognition for his influential work at the Columbia-Princeton Electronic Music Center in the late 1950s.
Club d'Essai and Cinq études de bruits
Following Schaeffer's work with Studio d'Essai at Radiodiffusion Nationale during the early 1940s he was credited with originating the theory and practice of musique concrète. The Studio d'Essai was renamed Club d'Essai de la Radiodiffusion-Télévision Française in 1946 and in the same year Schaeffer discussed, in writing, the question surrounding the transformation of time perceived through recording. The essay evidenced knowledge of sound manipulation techniques he would further exploit compositionally. In 1948 Schaeffer formally initiated "research in to noises" at the Club d'Essai and on 5 October 1948 the results of his initial experimentation were premiered at a concert given in Paris. Five works for phonograph – known collectively as Cinq études de bruits (Five Studies of Noises) including Étude violette (Study in Purple) and Étude aux chemins de fer (Study with Railroads) – were presented.
Musique concrète
By 1949 Schaeffer's compositional work was known publicly as musique concrète. Schaeffer stated: "when I proposed the term 'musique concrète,' I intended … to point out an opposition with the way musical work usually goes. Instead of notating musical ideas on paper with the symbols of solfege and entrusting their realization to well-known instruments, the question was to collect concrete sounds, wherever they came from, and to abstract the musical values they were potentially containing". According to Pierre Henry, "musique concrète was not a study of timbre, it is focused on envelopes, forms. It must be presented by means of non-traditional characteristics, you see … one might say that the origin of this music is also found in the interest in 'plastifying' music, of rendering it plastic like sculpture…musique concrète, in my opinion … led to a manner of composing, indeed, a new mental framework of composing". Schaeffer had developed an aesthetic that was centred upon the use of sound as a primary compositional resource. The aesthetic also emphasised the importance of play (jeu) in the practice of sound based composition. Schaeffer's use of the word jeu, from the verb jouer, carries the same double meaning as the English verb to play: 'to enjoy oneself by interacting with one's surroundings', as well as 'to operate a musical instrument'.
Groupe de Recherche de Musique Concrète
By 1951 the work of Schaeffer, composer-percussionist Pierre Henry, and sound engineer Jacques Poullin had received official recognition and the Groupe de Recherches de Musique Concrète, Club d'Essai de la Radiodiffusion-Télévision Française was established at RTF in Paris, the ancestor of the ORTF. At RTF the GRMC established the first purpose-built electroacoustic music studio. It quickly attracted many who either were or were later to become notable composers, including Olivier Messiaen, Pierre Boulez, Jean Barraqué, Karlheinz Stockhausen, Edgard Varèse, Iannis Xenakis, Michel Philippot, and Arthur Honegger. Compositional "output from 1951 to 1953 comprised Étude I (1951) and Étude II (1951) by Boulez, Timbres-durées (1952) by Messiaen, Étude aux mille collants (1952) by Stockhausen, Le microphone bien tempéré (1952) and La voile d'Orphée (1953) by Henry, Étude I (1953) by Philippot, Étude (1953) by Barraqué, the mixed pieces Toute la lyre (1951) and Orphée 53 (1953) by Schaeffer/Henry, and the film music Masquerage (1952) by Schaeffer and Astrologie (1953) by Henry. In 1954 Varèse and Honegger visited to work on the tape parts of Déserts and La rivière endormie".
In the early and mid 1950s Schaeffer's commitments to RTF included official missions which often required extended absences from the studios. This led him to invest Philippe Arthuys with responsibility for the GRMC in his absence, with Pierre Henry operating as Director of Works. Pierre Henry's composing talent developed greatly during this period at the GRMC and he worked with experimental filmmakers such as Max de Haas, Jean Grémillon, Enrico Fulchignoni, and Jean Rouch, and with choreographers including Dick Sanders and Maurice Béjart. Schaeffer returned to run the group at the end of 1957, and immediately stated his disapproval of the direction the GRMC had taken. A proposal was then made to "renew completely the spirit, the methods and the personnel of the Group, with a view to undertake research and to offer a much needed welcome to young composers".
Groupe de Recherches Musicales
Following the emergence of differences within the GRMC Pierre Henry, Philippe Arthuys, and several of their colleagues, resigned in April 1958. Schaeffer created a new collective, called Groupe de Recherches Musicales (GRM) and set about recruiting new members including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle.
GRM was one of several theoretical and experimental groups working under the umbrella of the Schaeffer-led Service de la Recherche at ORTF (1960–74). Together with the GRM, three other groups existed: the Groupe de Recherches Image GRI, the Groupe de Recherches Technologiques GRT and the Groupe de Recherches which became the Groupe d'Etudes Critiques. Communication was the one theme that unified the various groups, all of which were devoted to production and creation. In terms of the question "who says what to whom?" Schaeffer added "how?", thereby creating a platform for research into audiovisual communication and mass media, audible phenomena and music in general (including non-Western musics). At the GRM the theoretical teaching remained based on practice and could be summed up in the catch phrase do and listen.
Schaeffer kept up a practice established with the GRMC of delegating the functions (though not the title) of Group Director to colleagues. Since 1961 GRM has had six Group Directors: Michel Philippot (1960–61), Luc Ferrari (1962–63), Bernard Baschet and François Vercken (1964–66). From the beginning of 1966, François Bayle took over the direction for the duration of thirty-one years, to 1997. He was then replaced by Daniel Teruggi.
Traité des objets musicaux
The group continued to refine Schaeffer's ideas and strengthened the concept of musique acousmatique. Schaeffer had borrowed the term acousmatic from Pythagoras and defined it as: "Acousmatic, adjective: referring to a sound that one hears without seeing the causes behind it". In 1966 Schaeffer published the book Traité des objets musicaux (Treatise on Musical Objects) which represented the culmination of some 20 years of research in the field of musique concrète. In conjunction with this publication, a set of sound recordings was produced, entitled Le solfège de l'objet sonore (Music Theory of the Acoustic Object), to provide examples of concepts dealt with in the treatise.
Technology
The development of musique concrète was facilitated by the emergence of new music technology in post-war Europe. Access to microphones, phonographs, and later magnetic tape recorders (created in 1939 and acquired by Schaeffer's Groupe de Recherche de Musique Concrète (Research Group on Concrete Music) in 1952), facilitated by an association with the French national broadcasting organization, at that time the Radiodiffusion-Télévision Française, gave Schaeffer and his colleagues an opportunity to experiment with recording technology and tape manipulation.
Initial tools of musique concrète
In 1948, a typical radio studio consisted of a series of shellac record players, a shellac record recorder, a mixing desk with rotating potentiometers, mechanical reverberation, filters, and microphones. This technology made a number of limited operations available to a composer:
Shellac record players: could read a sound normally and in reverse mode, could change speed at fixed ratios thus permitting octave transposition.
Shellac recorder: would record any result coming out of the mixing desk.
Mixing desk: would permit several sources to be mixed together with an independent control of the gain or volume of the sound. The result of the mixing was sent to the recorder and to the monitoring loudspeakers. Signals could be sent to the filters or the reverberation unit.
Mechanical reverberation: made of a metal plate or a series of springs that created the reverberation effect, indispensable to force sounds to "fuse" together.
Filters: two kinds of filters, 1/3 octave filters and high and low-pass filters. They allow the elimination or enhancement of selected frequencies.
Microphones: essential tool for capturing sound.
The application of the above technologies in the creation of musique concrète led to the development of a number of sound manipulation techniques including:
Sound transposition: reading a sound at a different speed than the one at which it was recorded.
Sound looping: composers developed a skilled technique in order to create loops at specific locations within a recording.
Sound-sample extraction: a hand-controlled method that required delicate manipulation to get a clean sample of sound. It entailed letting the stylus read a small segment of a record. Used in the Symphonie pour un homme seul.
Filtering: by eliminating most of the central frequencies of a signal, the remains would keep some trace of the original sound but without making it recognisable.
Magnetic tape
The first tape recorders started arriving at ORTF in 1949; however, they were much less reliable than the shellac players, to the point that the Symphonie pour un homme seul, composed in 1950–51, was made mainly with records even though a tape recorder was available. In 1950, when the machines finally functioned correctly, the techniques of musique concrète were expanded. A range of new sound manipulation practices were explored using improved media manipulation methods and operations such as speed variation. A completely new possibility of organising sounds appeared with tape editing, which permitted tape to be spliced and arranged with extraordinary new precision. The "axe-cut junctions" were replaced with micrometric junctions, and a whole new technique of production, less dependent on performance skills, could be developed. Tape editing brought a new technique called "micro-editing", in which very tiny fragments of sound, representing milliseconds of time, were edited together, thus creating completely new sounds or structures.
Development of novel devices
During the GRMC period from 1951 to 1958, Schaeffer and Poullin developed a number of novel sound creation tools. These included a three-track tape recorder; a machine with ten playback heads to replay tape loops in echo (the morphophone); a keyboard-controlled machine to replay tape loops at twenty-four preset speeds (the keyboard, chromatic, or Tolana phonogène); a slide-controlled machine to replay tape loops at a continuously variable range of speeds (the handle, continuous, or Sareg phonogène); and a device to distribute an encoded track across four loudspeakers, including one hanging from the centre of the ceiling (the potentiomètre d'espace).
Phonogène
Speed variation was a powerful tool for sound design applications. It had been identified that transformations brought about by varying playback speed led to modifications in the character of the sound material:
Variation in the sounds' length, in a manner directly proportional to the ratio of speed variation.
Variation in length is coupled with a variation in pitch, and is also proportional to the ratio of speed variation.
A sound's attack characteristic is altered, whereby it is either dislocated from succeeding events, or the energy of the attack is more sharply focused.
The distribution of spectral energy is altered, thereby influencing how the resulting timbre might be perceived, relative to its original unaltered state.
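The proportional relationships above can be sketched in a few lines of Python. This is an illustrative sketch only, not period GRM software; the function name is invented for the example.

```python
# Sketch: how a playback-speed ratio affects a recorded sound's duration
# and pitch, per the proportional relationships described above.

def vary_speed(duration_s, pitch_hz, ratio):
    """Play a recording at `ratio` times its original speed.

    Duration varies inversely with the ratio; pitch varies directly
    with it. Doubling the speed (ratio=2) halves the duration and
    raises the pitch by an octave.
    """
    return duration_s / ratio, pitch_hz * ratio

# A 4-second sound at 440 Hz, played at double speed:
length, pitch = vary_speed(4.0, 440.0, 2.0)
print(length, pitch)  # 2.0 880.0
```

Played at half speed (ratio 0.5), the same sound instead becomes 8 seconds long and drops an octave to 220 Hz.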
The phonogène was a machine capable of modifying sound structure significantly and it provided composers with a means to adapt sound to meet specific compositional contexts. The initial phonogènes were manufactured in 1953 by two subcontractors: the chromatic phonogène by a company called Tolana, and the sliding version by the SAREG Company. A third version was developed later at ORTF. An outline of the unique capabilities of the various phonogènes can be seen here:
Chromatic: The chromatic phonogène was controlled through a one-octave keyboard. Multiple capstans of differing diameters vary the tape speed over a single stationary magnetic tape head. A tape loop was put into the machine, and when a key was played, it would act on an individual pinch roller / capstan arrangement and cause the tape to be played at a specific speed. The machine worked with short sounds only.
Sliding: The sliding phonogène (also called continuous-variation phonogène) provided continuous variation of tape speed using a control rod. The range allowed the motor to arrive at almost a stop position, always through a continuous variation. It was basically a normal tape recorder but with the ability to control its speed, so it could modify any length of tape. One of the earliest examples of its use can be heard in Voile d'Orphée by Pierre Henry (1953), where a lengthy glissando is used to symbolise the removal of Orpheus's veil as he enters hell.
Universal: A final version called the universal phonogène was completed in 1963. The device's main ability was that it enabled the dissociation of pitch variation from time variation. This was the starting point for methods that would later become widely available using digital technology, for instance harmonising (transposing sound without modifying duration) and time stretching (modifying duration without pitch modification). This was obtained through a rotating magnetic head called the Springer temporal regulator, an ancestor of the rotating heads used in video machines.
Three-head tape recorder
This original tape recorder was one of the first machines permitting the simultaneous listening of several synchronised sources. Until 1958 musique concrète, radio and the studio machines were monophonic. The three-head tape recorder superposed three magnetic tapes that were dragged by a common motor, each tape having an independent spool. The objective was to keep the three tapes synchronised from a common starting point. Works could then be conceived polyphonically, and thus each head conveyed a part of the information and was listened to through a dedicated loudspeaker. It was an ancestor of the multi-track player (four then eight tracks) that appeared in the 1960s. Timbres Durées by Olivier Messiaen with the technical assistance of Pierre Henry was the first work composed for this tape recorder in 1952. A rapid rhythmic polyphony was distributed over the three channels.
Morphophone
This machine was conceived to build complex forms through repetition, and accumulation of events through delays, filtering and feedback. It consisted of a large rotating disk, 50 cm in diameter, on which was stuck a tape with its magnetic side facing outward. A series of twelve movable magnetic heads (one recording head, one erasing head, and ten playback heads) were positioned around the disk, in contact with the tape. A sound up to four seconds long could be recorded on the looped tape and the ten playback heads would then read the information with different delays, according to their (adjustable) positions around the disk. A separate amplifier and band-pass filter for each head could modify the spectrum of the sound, and additional feedback loops could transmit the information to the recording head. The resulting repetitions of a sound occurred at different time intervals, and could be filtered or modified through feedback. This system was also easily capable of producing artificial reverberation or continuous sounds.
Early sound spatialisation system
At the premiere of Pierre Schaeffer's Symphonie pour un homme seul in 1951, a system that was designed for the spatial control of sound was tested. It was called a "relief desk" (pupitre de relief, but also referred to as pupitre d'espace or potentiomètre d'espace) and was intended to control the dynamic level of music played from several shellac players. This created a stereophonic effect by controlling the positioning of a monophonic sound source. One of five tracks, provided by a purpose-built tape machine, was controlled by the performer and the other four tracks each supplied a single loudspeaker. This provided a mixture of live and preset sound positions. The placement of loudspeakers in the performance space included two loudspeakers at the front right and left of the audience, one placed at the rear, and in the centre of the space a loudspeaker was placed in a high position above the audience. The sounds could therefore be moved around the audience, rather than just across the front stage. On stage, the control system allowed a performer to position a sound either to the left or right, above or behind the audience, simply by moving a small, hand held transmitter coil towards or away from four somewhat larger receiver coils arranged around the performer in a manner reflecting the loudspeaker positions. A contemporary eyewitness described the potentiomètre d'espace in normal use:
One found one's self sitting in a small studio which was equipped with four loudspeakers—two in front of one—right and left; one behind one and a fourth suspended above. In the front center were four large loops and an "executant" moving a small magnetic unit through the air. The four loops controlled the four speakers, and while all four were giving off sounds all the time, the distance of the unit from the loops determined the volume of sound sent out from each. The music thus came to one at varying intensity from various parts of the room, and this "spatial projection" gave new sense to the rather abstract sequence of sound originally recorded. The central concept underlying this method was the notion that music should be controlled during public presentation in order to create a performance situation; an attitude that has stayed with acousmatic music to the present day.
Coupigny synthesiser and Studio 54 mixing desk
After the longstanding rivalry with the "electronic music" of the Cologne studio had subsided, in 1970 the GRM finally created an electronic studio using tools developed by the physicist Enrico Chiarucci, called the Studio 54, which featured the "Coupigny modular synthesiser" and a Moog synthesiser. The Coupigny synthesiser, named for its designer François Coupigny, director of the Group for Technical Research, and the Studio 54 mixing desk had a major influence on the evolution of GRM, and from their introduction onward they brought a new quality to the music. The mixing desk and synthesiser were combined in one unit and were created specifically for the creation of musique concrète.
The design of the desk was influenced by trade union rules at French National Radio that required technicians and production staff to have clearly defined duties. The solitary practice of musique concrète composition did not suit a system that involved three operators: one in charge of the machines, a second controlling the mixing desk, and third to provide guidance to the others. Because of this the synthesiser and desk were combined and organised in a manner that allowed it to be used easily by a composer. Independently of the mixing tracks (twenty-four in total), it had a coupled connection patch that permitted the organisation of the machines within the studio. It also had a number of remote controls for operating tape recorders. The system was easily adaptable to any context, particularly that of interfacing with external equipment.
Before the late 1960s the musique concrète produced at GRM had largely been based on the recording and manipulation of sounds, but synthesised sounds had featured in a number of works prior to the introduction of the Coupigny. Pierre Henry had used oscillators to produce sounds as early as 1955. But a synthesiser with envelope control was something Pierre Schaeffer was against, since it favoured the preconception of music and therefore deviated from Schaeffer's principle of making through listening. Because of Schaeffer's concerns the Coupigny synthesiser was conceived as a sound-event generator with parameters controlled globally, without a means to define values as precisely as some other synthesisers of the day.
The development of the machine was constrained by several factors. It needed to be modular, and the modules had to be easily interconnected (so that the synthesiser would have more modules than slots and an easy-to-use patch). It also needed to include all the major functions of a modular synthesiser, including oscillators, noise-generators, filters, and ring-modulators; but an intermodulation facility was viewed as the primary requirement, to enable complex synthesis processes such as frequency modulation, amplitude modulation, and modulation via an external source. No keyboard was attached to the synthesiser; instead, a specific and somewhat complex envelope generator was used to shape sound. This synthesiser was well-adapted to the production of continuous and complex sounds using intermodulation techniques such as cross-synthesis and frequency modulation but was less effective in generating precisely defined frequencies and triggering specific sounds.
The Coupigny synthesiser also served as the model for a smaller, portable unit, which has been used down to the present day.
Acousmonium
In 1966 composer and technician François Bayle was placed in charge of the Groupe de Recherches Musicales and in 1975, GRM was integrated with the new Institut national de l'audiovisuel (INA – Audiovisual National Institute) with Bayle as its head. In taking the lead on work that began in the early 1950s, with Jacques Poullin's potentiomètre d'espace, a system designed to move monophonic sound sources across four speakers, Bayle and the engineer Jean-Claude Lallemand created an orchestra of loudspeakers (un orchestre de haut-parleurs) known as the Acousmonium in 1974. An inaugural concert took place on 14 February 1974 at the Espace Pierre Cardin in Paris with a presentation of Bayle's Expérience acoustique.
The Acousmonium is a specialised sound reinforcement system consisting of between 50 and 100 loudspeakers, depending on the character of the concert, of varying shape and size. The system was designed specifically for the concert presentation of musique-concrète-based works but with the added enhancement of sound spatialisation. Loudspeakers are placed both on stage and at positions throughout the performance space and a mixing console is used to manipulate the placement of acousmatic material across the speaker array, using a performative technique known as "sound diffusion". Bayle has commented that the purpose of the Acousmonium is to "substitute a momentary classical disposition of sound making, which diffuses the sound from the circumference towards the centre of the hall, by a group of sound projectors which form an 'orchestration' of the acoustic image".
As of 2010, the Acousmonium was still performing, with 64 speakers, 35 amplifiers, and 2 consoles.
See also
Audium
Birmingham ElectroAcoustic Sound Theatre
Canadian Electroacoustic Community
Computer music
Noise music
Sound engineering
Sound design
Sound art
Sound collage
Notes
References
Bibliography
Further reading
External links
INA-GRM website
François Bayle's personal website
Michel Chion official site
Electroacoustic Music Studies Network
Bernard Parmegiani's personal website
ElectroAcoustic Resource Site at De Montfort University
INA-GRM 31st Season (2008/2009). Multiphonies program of events.
Organised Sound: An International Journal of Music and Technology.
Audium A Theatre of Sound-Sculptured Space
Pierre Schaeffer
Metric space
In mathematics, a metric space is a non-empty set together with a metric on the set. The metric is a function that defines a concept of distance between any two members of the set, which are usually called points. The metric satisfies a few simple properties:
the distance from x to y is zero if and only if x and y are the same point,
the distance between two distinct points is positive,
the distance from x to y is the same as the distance from y to x, and
the distance from x to y is less than or equal to the distance from x to y via any third point z.
A metric on a space induces topological properties like open and closed sets, which lead to the study of more abstract topological spaces.
The most familiar metric space is 3-dimensional Euclidean space. In fact, a "metric" is the generalization of the Euclidean metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance between two points as the length of the straight line segment connecting them. Other metric spaces occur, for example, in elliptic geometry and hyperbolic geometry, where distance on a sphere measured by angle is a metric, and the hyperboloid model of hyperbolic geometry is used by special relativity as a metric space of velocities. Examples of non-geometric metric spaces include spaces of finite strings (finite sequences of symbols from a predefined alphabet) equipped with, e.g., the Hamming or Levenshtein distance, the space of subsets of any metric space equipped with the Hausdorff distance, the space of real functions integrable on a unit interval with an integral metric, and spaces of probability measures on any chosen metric space equipped with the Wasserstein metric.
History
In 1906 Maurice Fréchet introduced metric spaces in his work Sur quelques points du calcul fonctionnel. However, the name is due to Felix Hausdorff.
Definition
A metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function
d : M × M → ℝ
such that for any x, y, z ∈ M, the following holds:
1. d(x, y) = 0 if and only if x = y — identity of indiscernibles
2. d(x, y) = d(y, x) — symmetry
3. d(x, z) ≤ d(x, y) + d(y, z) — subadditivity or triangle inequality
Given the above three axioms, we also have that d(x, y) ≥ 0 for any x, y ∈ M. This is deduced as follows (from the top to the bottom):
d(x, y) + d(y, x) ≥ d(x, x) — by triangle inequality
d(x, y) + d(x, y) ≥ d(x, x) — by symmetry
2 d(x, y) ≥ 0 — by identity of indiscernibles
d(x, y) ≥ 0 — we have non-negativity
The function d is also called a distance function or simply distance. Often, d is omitted and one just writes M for a metric space if it is clear from the context what metric is used.
Ignoring mathematical details, for any system of roads and terrains the distance between two locations can be defined as the length of the shortest route connecting those locations. To be a metric there shouldn't be any one-way roads. The triangle inequality expresses the fact that detours aren't shortcuts. If the distance between two points is zero, the two points are indistinguishable from one another. Many of the examples below can be seen as concrete versions of this general idea.
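The axioms can be checked numerically for a concrete distance function. The following Python sketch (illustrative only; the function names are my own) verifies them for the Euclidean distance on a handful of sample points:

```python
import itertools
import math

def euclidean(p, q):
    # Euclidean distance between two points given as tuples.
    return math.dist(p, q)

points = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.0)]

def satisfies_metric_axioms(d, pts):
    """Check identity of indiscernibles, symmetry, and the triangle
    inequality over all triples of the given sample points."""
    for x, y, z in itertools.product(pts, repeat=3):
        if (d(x, y) == 0) != (x == y):            # identity of indiscernibles
            return False
        if d(x, y) != d(y, x):                    # symmetry
            return False
        if d(x, z) > d(x, y) + d(y, z) + 1e-12:   # triangle inequality
            return False
    return True

print(satisfies_metric_axioms(euclidean, points))  # True
```

The small tolerance in the triangle-inequality check guards against floating-point rounding; with exact arithmetic it would be unnecessary.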
Examples of metric spaces
The real numbers with the distance function d(x, y) = |x − y| given by the absolute difference, and, more generally, Euclidean n-space with the Euclidean distance, are complete metric spaces. The rational numbers with the same distance function also form a metric space, but not a complete one.
The positive real numbers with distance function d(x, y) = |log(y/x)| is a complete metric space.
Any normed vector space is a metric space by defining d(x, y) = ‖y − x‖, see also metrics on vector spaces. (If such a space is complete, we call it a Banach space.) Examples:
The Manhattan norm gives rise to the Manhattan distance, where the distance between any two points, or vectors, is the sum of the differences between corresponding coordinates.
The cyclic Mannheim metric or Mannheim distance is a modulo variant of the Manhattan metric.
The maximum norm gives rise to the Chebyshev distance or chessboard distance, the minimal number of moves a chess king would take to travel from x to y.
The British Rail metric (also called the "post office metric" or the "SNCF metric") on a normed vector space is given by d(x, y) = ‖x‖ + ‖y‖ for distinct points x and y, and d(x, x) = 0. More generally ‖·‖ can be replaced with a function f taking an arbitrary set S to non-negative reals and taking the value 0 at most once: then the metric is defined on S by d(x, y) = f(x) + f(y) for distinct points x and y, and d(x, x) = 0. The name alludes to the tendency of railway journeys to proceed via London (or Paris) irrespective of their final destination.
If (M, d) is a metric space and X is a subset of M, then X becomes a metric space by restricting the domain of d to X × X.
The discrete metric, where d(x, y) = 0 if x = y and d(x, y) = 1 otherwise, is a simple but important example, and can be applied to all sets. This, in particular, shows that for any set, there is always a metric space associated to it. Using this metric, the singleton of any point is an open ball, therefore every subset is open and the space has the discrete topology.
A finite metric space is a metric space having a finite number of points. Not every finite metric space can be isometrically embedded in a Euclidean space.
The hyperbolic plane is a metric space. More generally:
If M is any connected Riemannian manifold, then we can turn M into a metric space by defining the distance of two points as the infimum of the lengths of the paths (continuously differentiable curves) connecting them.
If X is some set and M is a metric space, then the set of all bounded functions f : X → M (i.e. those functions whose image is a bounded subset of M) can be turned into a metric space by defining d(f, g) = sup_{x ∈ X} d(f(x), g(x)) for any two bounded functions f and g (where sup is supremum). This metric is called the uniform metric or supremum metric, and if M is complete, then this function space is complete as well. If X is also a topological space, then the set of all bounded continuous functions from X to M (endowed with the uniform metric) will also be a complete metric space if M is.
If G is an undirected connected graph, then the set V of vertices of G can be turned into a metric space by defining d(x, y) to be the length of the shortest path connecting the vertices x and y. In geometric group theory this is applied to the Cayley graph of a group, yielding the word metric.
Graph edit distance is a measure of dissimilarity between two graphs, defined as the minimal number of graph edit operations required to transform one graph into another.
The Levenshtein distance is a measure of the dissimilarity between two strings u and v, defined as the minimal number of character deletions, insertions, or substitutions required to transform u into v. This can be thought of as a special case of the shortest path metric in a graph and is one example of an edit distance.
Given a metric space (X, d) and an increasing concave function f : [0, ∞) → [0, ∞) such that f(x) = 0 if and only if x = 0, then f ∘ d is also a metric on X.
Given an injective function f from any set A to a metric space (X, d), d(f(x), f(y)) defines a metric on A.
Using T-theory, the tight span of a metric space is also a metric space. The tight span is useful in several types of analysis.
The set of all m by n matrices over some field is a metric space with respect to the rank distance d(A, B) = rank(B − A).
The Helly metric is used in game theory.
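The Levenshtein distance listed above lends itself to a short worked example. This Python sketch uses the standard dynamic-programming recurrence, in which each cell holds the minimal number of deletions, insertions, or substitutions needed to transform one prefix into the other (the function name is mine; this is one common implementation, not the only one):

```python
def levenshtein(u, v):
    """Edit distance between strings u and v, row by row."""
    prev = list(range(len(v) + 1))     # distances from "" to prefixes of v
    for i, cu in enumerate(u, 1):
        curr = [i]                     # distance from u[:i] to ""
        for j, cv in enumerate(v, 1):
            cost = 0 if cu == cv else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

The result satisfies the metric axioms on the set of finite strings: it is zero exactly for equal strings, symmetric, and obeys the triangle inequality.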
Open and closed sets, topology and convergence
Every metric space is a topological space in a natural manner, and therefore all definitions and theorems about general topological spaces also apply to all metric spaces.
About any point x in a metric space M we define the open ball of radius r > 0 (where r is a real number) about x as the set
B(x; r) = {y ∈ M : d(x, y) < r}.
These open balls form the base for a topology on M, making it a topological space.
Explicitly, a subset U of M is called open if for every x in U there exists an r > 0 such that B(x; r) is contained in U. The complement of an open set is called closed. A neighborhood of the point x is any subset of M that contains an open ball about x as a subset.
A topological space which can arise in this way from a metric space is called a metrizable space.
A sequence (x_n) in a metric space M is said to converge to the limit x ∈ M if and only if for every ε > 0, there exists a natural number N such that d(x_n, x) < ε for all n > N. Equivalently, one can use the general definition of convergence available in all topological spaces.
A subset A of the metric space M is closed if and only if every sequence in A that converges to a limit in M has its limit in A.
Types of metric spaces
Complete spaces
A metric space M is said to be complete if every Cauchy sequence converges in M. That is to say: if d(x_n, x_m) → 0 as both n and m independently go to infinity, then there is some y ∈ M with d(x_n, y) → 0.
Every Euclidean space is complete, as is every closed subset of a complete space. The rational numbers, using the absolute value metric d(x, y) = |x − y|, are not complete.
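The incompleteness of the rationals can be made concrete. The sketch below (my construction, using Python's exact Fraction type) runs the Babylonian iteration for the square root of 2: every term is rational and the terms form a Cauchy sequence, yet the would-be limit is irrational, so it does not exist in ℚ.

```python
from fractions import Fraction

# Babylonian iteration x -> (x + 2/x) / 2 stays inside the rationals
# and converges quadratically toward sqrt(2), which is not rational.
x = Fraction(1)
for _ in range(6):
    x = (x + Fraction(2) / x) / 2

# x*x is extremely close to 2, but x itself can never equal sqrt(2).
print(abs(x * x - 2) < Fraction(1, 10**10))  # True
```

Within the larger complete space ℝ the same sequence converges to √2, which illustrates the completion of ℚ mentioned below being ℝ.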
Every metric space has a unique (up to isometry) completion, which is a complete space that contains the given space as a dense subset. For example, the real numbers are the completion of the rationals.
If X is a complete subset of the metric space M, then X is closed in M. Indeed, a space is complete if and only if it is closed in any containing metric space.
Every complete metric space is a Baire space.
Bounded and totally bounded spaces
A metric space M is called bounded if there exists some number r such that d(x, y) ≤ r for all x, y ∈ M. The smallest possible such r is called the diameter of M. The space M is called precompact or totally bounded if for every r > 0 there exist finitely many open balls of radius r whose union covers M. Since the set of the centres of these balls is finite, it has finite diameter, from which it follows (using the triangle inequality) that every totally bounded space is bounded. The converse does not hold, since any infinite set can be given the discrete metric (one of the examples above) under which it is bounded and yet not totally bounded.
Note that in the context of intervals in the space of real numbers, and occasionally regions in a Euclidean space ℝ^n, a bounded set is referred to as "a finite interval" or "finite region". However, boundedness should not in general be confused with "finite", which refers to the number of elements, not to how far the set extends; finiteness implies boundedness, but not conversely. Also note that an unbounded subset of ℝ^n may have a finite volume.
Compact spaces
A metric space M is compact if every sequence in M has a subsequence that converges to a point in M. This is known as sequential compactness and, in metric spaces (but not in general topological spaces), is equivalent to the topological notions of countable compactness and compactness defined via open covers.
Examples of compact metric spaces include the closed interval [0, 1] with the absolute value metric, all metric spaces with finitely many points, and the Cantor set. Every closed subset of a compact space is itself compact.
A metric space is compact if and only if it is complete and totally bounded. This is known as the Heine–Borel theorem. Note that compactness depends only on the topology, while boundedness depends on the metric.
Lebesgue's number lemma states that for every open cover of a compact metric space M, there exists a "Lebesgue number" δ > 0 such that every subset of M of diameter less than δ is contained in some member of the cover.
Every compact metric space is second countable, and is a continuous image of the Cantor set. (The latter result is due to Pavel Alexandrov and Urysohn.)
Locally compact and proper spaces
A metric space is said to be locally compact if every point has a compact neighborhood. Euclidean spaces are locally compact, but infinite-dimensional Banach spaces are not.
A space is proper if every closed ball is compact. Proper spaces are locally compact, but the converse is not true in general.
Connectedness
A metric space M is connected if the only subsets that are both open and closed are the empty set and M itself.
A metric space M is path connected if for any two points x, y ∈ M there exists a continuous map f : [0, 1] → M with f(0) = x and f(1) = y.
Every path connected space is connected, but the converse is not true in general.
There are also local versions of these definitions: locally connected spaces and locally path connected spaces.
Simply connected spaces are those that, in a certain sense, do not have "holes".
Separable spaces
A metric space M is a separable space if it has a countable dense subset. Typical examples are the real numbers or any Euclidean space. For metric spaces (but not for general topological spaces) separability is equivalent to second-countability and also to the Lindelöf property.
Pointed metric spaces
If X is a metric space and x0 ∈ X, then (X, d, x0) is called a pointed metric space, and x0 is called a distinguished point. Note that a pointed metric space is just a nonempty metric space with attention drawn to its distinguished point, and that any nonempty metric space can be viewed as a pointed metric space. The distinguished point is sometimes denoted 0 due to its similar behavior to zero in certain contexts.
Types of maps between metric spaces
Suppose (M1, d1) and (M2, d2) are two metric spaces.
Continuous maps
The map f : M1 → M2 is continuous
if it has one (and therefore all) of the following equivalent properties:
General topological continuity: for every open set U in M2, the preimage f⁻¹(U) is open in M1.
This is the general definition of continuity in topology.
Sequential continuity: if (x_n) is a sequence in M1 that converges to x, then the sequence (f(x_n)) converges to f(x) in M2.
This is sequential continuity, due to Eduard Heine.
ε-δ definition: for every x ∈ M1 and every ε > 0 there exists δ > 0 such that for all y in M1 we have d1(x, y) < δ implies d2(f(x), f(y)) < ε.
This uses the (ε, δ)-definition of limit, and is due to Augustin Louis Cauchy.
Moreover, f is continuous if and only if it is continuous on every compact subset of M1.
The image of every compact set under a continuous function is compact, and the image of every connected set under a continuous function is connected.
Uniformly continuous maps
The map f : M1 → M2 is uniformly continuous if for every ε > 0 there exists δ > 0 such that d1(x, y) < δ implies d2(f(x), f(y)) < ε for all x, y ∈ M1.
Every uniformly continuous map is continuous. The converse is true if is compact (Heine–Cantor theorem).
Uniformly continuous maps turn Cauchy sequences in M1 into Cauchy sequences in M2. For continuous maps this is generally wrong; for example, a continuous map from the open interval (0, 1) onto the real line turns some Cauchy sequences into unbounded sequences.
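A standard witness for this behaviour (my choice of example; the text above does not name one) is f(x) = 1/x on (0, 1), which is continuous but not uniformly continuous there:

```python
# The sequence x_n = 1/n is Cauchy in the open interval (0, 1),
# but f(x) = 1/x maps it to f(x_n) = n, which is unbounded.

xs = [1 / n for n in range(2, 1000)]   # Cauchy sequence inside (0, 1)
fxs = [1 / x for x in xs]              # images under f(x) = 1/x

# The images keep growing without bound as n increases.
print(max(fxs) > 990)  # True
```

A uniformly continuous map could not do this, since it would have to send the Cauchy sequence (x_n) to another Cauchy (hence bounded) sequence.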
Lipschitz-continuous maps and contractions
Given a real number K > 0, the map f : M1 → M2 is K-Lipschitz continuous if d2(f(x), f(y)) ≤ K d1(x, y) for all x, y ∈ M1.
Every Lipschitz-continuous map is uniformly continuous, but the converse is not true in general.
If K < 1, then f is called a contraction. Suppose M2 = M1 and M1 is complete. If f is a contraction, then f admits a unique fixed point (Banach fixed-point theorem). If M1 is compact, the condition can be weakened a bit: f admits a unique fixed point if
d(f(x), f(y)) < d(x, y) for all x ≠ y ∈ M1.
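The Banach fixed-point theorem can be illustrated with a standard example (my choice, not taken from the text above): cosine is a contraction on [0, 1], since its derivative is bounded there by sin(1) < 1, so iterating it from any starting point converges to its unique fixed point.

```python
import math

# Iterate the contraction f(x) = cos(x) on [0, 1]; by the Banach
# fixed-point theorem the iterates converge to the unique x with cos(x) = x.
x = 0.5
for _ in range(200):
    x = math.cos(x)

print(abs(math.cos(x) - x) < 1e-12)  # True: x is (numerically) the fixed point
```

The limit, approximately 0.739085, is sometimes called the Dottie number; changing the starting point inside [0, 1] leaves the limit unchanged, as the theorem guarantees.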
Isometries
The map f : M1 → M2 is an isometry if d2(f(x), f(y)) = d1(x, y) for all x, y ∈ M1.
Isometries are always injective; the image of a compact or complete set under an isometry is compact or complete, respectively. However, if the isometry is not surjective, then the image of a closed (or open) set need not be closed (or open).
Quasi-isometries
The map f : M1 → M2 is a quasi-isometry if there exist constants A ≥ 1 and B ≥ 0 such that
(1/A) d1(x, y) − B ≤ d2(f(x), f(y)) ≤ A d1(x, y) + B for all x, y ∈ M1, and a constant C ≥ 0 such that every point in M2 has a distance at most C from some point in the image f(M1).
Note that a quasi-isometry is not required to be continuous. Quasi-isometries compare the "large-scale structure" of metric spaces; they find use in geometric group theory in relation to the word metric.
Notions of metric space equivalence
Given two metric spaces and :
They are called homeomorphic (topologically isomorphic) if there exists a homeomorphism between them (i.e., a bijection continuous in both directions).
They are called uniformic (uniformly isomorphic) if there exists a uniform isomorphism between them (i.e., a bijection uniformly continuous in both directions).
They are called isometric if there exists a bijective isometry between them. In this case, the two metric spaces are essentially identical.
They are called quasi-isometric if there exists a quasi-isometry between them.
Topological properties
Metric spaces are paracompact Hausdorff spaces and hence normal (indeed they are perfectly normal). An important consequence is that every metric space admits partitions of unity and that every continuous real-valued function defined on a closed subset of a metric space can be extended to a continuous map on the whole space (Tietze extension theorem). It is also true that every real-valued Lipschitz-continuous map defined on a subset of a metric space can be extended to a Lipschitz-continuous map on the whole space.
Metric spaces are first countable since one can use balls with rational radius as a neighborhood base.
The metric topology on a metric space M is the coarsest topology on M relative to which the metric d is a continuous map from the product of M with itself to the non-negative real numbers.
Distance between points and sets; Hausdorff distance and Gromov metric
A simple way to construct a function separating a point from a closed set (as required for a completely regular space) is to consider the distance between the point and the set. If (M, d) is a metric space, S is a subset of M and x is a point of M, we define the distance from x to S as
d(x, S) = inf{d(x, s) : s ∈ S}, where inf represents the infimum.
Then d(x, S) = 0 if and only if x belongs to the closure of S. Furthermore, we have the following generalization of the triangle inequality:
d(x, S) ≤ d(x, y) + d(y, S),
which in particular shows that the map x ↦ d(x, S) is continuous.
Given two subsets S and T of M, we define their Hausdorff distance to be
dH(S, T) = max{sup{d(s, T) : s ∈ S}, sup{d(t, S) : t ∈ T}}, where sup represents the supremum.
In general, the Hausdorff distance can be infinite. Two sets are close to each other in the Hausdorff distance if every element of either set is close to some element of the other set.
The Hausdorff distance dH turns the set K(M) of all non-empty compact subsets of M into a metric space. One can show that K(M) is complete if M is complete.
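For finite sets the infima and suprema in these definitions become plain minima and maxima, so the Hausdorff distance can be computed directly. A Python sketch for finite subsets of the Euclidean plane (an illustrative special case, not the general construction on compact sets):

```python
# Point-to-set distance d(x, S) and Hausdorff distance dH(S, T) for finite
# subsets of the plane; min/max replace the general inf/sup.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_point_set(x, S):
    """d(x, S): least distance from the point x to the finite set S."""
    return min(dist(x, s) for s in S)

def hausdorff(S, T):
    """Symmetrized supremum of point-to-set distances."""
    return max(max(d_point_set(s, T) for s in S),
               max(d_point_set(t, S) for t in T))

S = [(0.0, 0.0), (1.0, 0.0)]
T = [(0.0, 0.0), (0.0, 3.0)]
```

Note that dH(S, T) can be large even when every point of S is near T, which is why the definition takes the maximum over both directions.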
(A different notion of convergence of compact subsets is given by the Kuratowski convergence.)
One can then define the Gromov–Hausdorff distance between any two metric spaces by considering the minimal Hausdorff distance of isometrically embedded versions of the two spaces. Using this distance, the class of all (isometry classes of) compact metric spaces becomes a metric space in its own right.
Product metric spaces
If (M1, d1), …, (Mn, dn) are metric spaces, and N is the Euclidean norm on Rn, then (M1 × … × Mn, d) is a metric space, where the product metric is defined by
d((x1, …, xn), (y1, …, yn)) = N(d1(x1, y1), …, dn(xn, yn)),
and the induced topology agrees with the product topology. By the equivalence of norms in finite dimensions, an equivalent metric is obtained if N is the taxicab norm, a p-norm, the maximum norm, or any other norm which is non-decreasing as the coordinates of a positive n-tuple increase (yielding the triangle inequality).
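The finite product metric can be sketched concretely. In the hypothetical Python snippet below, two copies of R carry the absolute-value metric and are combined with the Euclidean, taxicab, and maximum norms; the particular spaces and norms are illustrative choices:

```python
# Product metric sketch: combine coordinate distances with a norm on R^n.
# All three norms below yield equivalent metrics (the same product topology).
import math

def product_metric(component_metrics, norm):
    """Build d((x1,..,xn), (y1,..,yn)) = norm(d1(x1,y1), .., dn(xn,yn))."""
    def d(x, y):
        return norm([di(xi, yi) for di, xi, yi in zip(component_metrics, x, y)])
    return d

abs_metric = lambda a, b: abs(a - b)
euclid  = product_metric([abs_metric, abs_metric], lambda v: math.hypot(*v))
taxicab = product_metric([abs_metric, abs_metric], sum)
maxnorm = product_metric([abs_metric, abs_metric], max)
```

The three metrics differ pointwise (for instance on the pair (0, 0), (3, 4)) but are within constant factors of one another, which is exactly the norm-equivalence the text invokes.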
Similarly, a countable product of metric spaces can be obtained using the following metric:
d(x, y) = Σi (1/2^i) · di(xi, yi) / (1 + di(xi, yi)).
An uncountable product of metric spaces need not be metrizable. For example, R^R is not first-countable and thus isn't metrizable.
Continuity of distance
In the case of a single space (M, d), the distance map d : M × M → [0, ∞) (from the definition) is uniformly continuous with respect to any of the above product metrics N(d, d), and in particular is continuous with respect to the product topology of M × M.
Quotient metric spaces
If M is a metric space with metric d, and ~ is an equivalence relation on M, then we can endow the quotient set M/~ with a pseudometric. Given two equivalence classes [x] and [y], we define
d'([x], [y]) = inf{d(p1, q1) + d(p2, q2) + … + d(pn, qn)},
where the infimum is taken over all finite sequences (p1, p2, …, pn) and (q1, q2, …, qn) with [p1] = [x], [qn] = [y], [qi] = [pi+1] for i = 1, 2, …, n − 1. In general this will only define a pseudometric, i.e. d'([x], [y]) = 0 does not necessarily imply that [x] = [y]. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces), d' is a metric.
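As a small illustration of the chain construction, gluing the endpoints of the interval [0, 1] (the relation 0 ~ 1) yields the circle. For this simple relation a minimizing chain either goes directly or routes once through the glued pair, a shortcut the following hypothetical Python sketch exploits (it is specific to this example, not the general construction):

```python
# Quotient pseudometric sketch: glue the endpoints of [0, 1] via 0 ~ 1.
# The result is the usual "circle" metric: go directly, or wrap around
# through the identified endpoints, whichever is shorter.

def circle_quotient_dist(x, y):
    direct = abs(x - y)                              # chain of length 1
    through_glue = min(abs(x - 0.0) + abs(1.0 - y),  # chain x .. 0 ~ 1 .. y
                       abs(x - 1.0) + abs(0.0 - y))  # chain x .. 1 ~ 0 .. y
    return min(direct, through_glue)
```

Here d'([0.1], [0.9]) = 0.2 rather than 0.8, because the chain through the glued endpoints is shorter, matching the intuition that the glued interval is a circle of circumference 1.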
The quotient metric d' is characterized by the following universal property. If f : (M, d) → (X, δ) is a metric map between metric spaces (that is, δ(f(x), f(y)) ≤ d(x, y) for all x, y) satisfying f(x) = f(y) whenever x ~ y, then the induced function f̄ : M/~ → X, given by f̄([x]) = f(x), is a metric map f̄ : (M/~, d') → (X, δ).
A topological space is sequential if and only if it is a quotient of a metric space.
Generalizations of metric spaces
Every metric space is a uniform space in a natural manner, and every uniform space is naturally a topological space. Uniform and topological spaces can therefore be regarded as generalizations of metric spaces.
Relaxing the requirement that the distance between two distinct points be non-zero leads to the concepts of a pseudometric space or a dislocated metric space. Removing the requirement of symmetry, we arrive at a quasimetric space. Replacing the triangle inequality with a weaker form leads to semimetric spaces.
If the distance function takes values in the extended real number line R ∪ {+∞}, but otherwise satisfies the conditions of a metric, then it is called an extended metric and the corresponding space is called an ∞-metric space. If the distance function takes values in some (suitable) ordered set (and the triangle inequality is adjusted accordingly), then we arrive at the notion of generalized ultrametric.
Approach spaces are a generalization of metric spaces, based on point-to-set distances, instead of point-to-point distances.
A continuity space is a generalization of metric spaces and posets, that can be used to unify the notions of metric spaces and domains.
A partial metric space is intended to be the least generalisation of the notion of a metric space, such that the distance of each point from itself is no longer necessarily zero.
Metric spaces as enriched categories
The ordered set (R, ≥) can be seen as a category by requesting exactly one morphism r → s if r ≥ s and none otherwise. By using + as the tensor product and 0 as the identity, it becomes a monoidal category R*.
Every metric space (M, d) can now be viewed as a category M* enriched over R*:
Set Ob(M*) := M
For each X, Y ∈ M set Hom(X, Y) := d(X, Y) ∈ Ob(R*)
The composition morphism Hom(Y, Z) ⊗ Hom(X, Y) → Hom(X, Z) will be the unique morphism in R* given from the triangle inequality d(y, z) + d(x, y) ≥ d(x, z)
The identity morphism 0 → Hom(X, X) will be the unique morphism given from the fact that 0 ≥ d(X, X).
Since R* is a poset, all diagrams that are required for an enriched category commute automatically.
See the paper by F.W. Lawvere listed below.
See also
Assouad-Nagata dimension
References
Further reading
Victor Bryant, Metric Spaces: Iteration and Application, Cambridge University Press, 1985.
Dmitri Burago, Yu D Burago, Sergei Ivanov, A Course in Metric Geometry, American Mathematical Society, 2001.
Athanase Papadopoulos, Metric Spaces, Convexity and Nonpositive Curvature, European Mathematical Society, first edition 2004; second edition 2014.
Mícheál Ó Searcóid, Metric Spaces, Springer Undergraduate Mathematics Series, 2006.
Lawvere, F. William, "Metric spaces, generalized logic, and closed categories", Rend. Sem. Mat. Fis. Milano 43 (1973), 135–166 (1974); (Italian summary)
This is reprinted (with author commentary) at Reprints in Theory and Applications of Categories
Also (with an author commentary) in Enriched categories in the logic of geometry and analysis. Repr. Theory Appl. Categ. No. 1 (2002), 1–37.
External links
Far and near — several examples of distance functions at cut-the-knot.
Mathematical analysis
Mathematical structures
Topology
Topological spaces |
20021 | https://en.wikipedia.org/wiki/Marine%20biology | Marine biology | Marine biology is the scientific study of the biology of marine life, organisms in the sea. Given that in biology many phyla, families and genera have some species that live in the sea and others that live on land, marine biology classifies species based on the environment rather than on taxonomy.
A large proportion of all life on Earth lives in the ocean. The exact size of this large proportion is unknown, since many ocean species are still to be discovered. The ocean is a complex three-dimensional world covering approximately 71% of the Earth's surface. The habitats studied in marine biology include everything from the tiny layers of surface water in which organisms and abiotic items may be trapped in surface tension between the ocean and atmosphere, to the depths of the oceanic trenches, sometimes 10,000 meters or more beneath the surface of the ocean. Specific habitats include estuaries, coral reefs, kelp forests, seagrass meadows, the surrounds of seamounts and thermal vents, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales). Marine ecology is the study of how marine organisms interact with each other and the environment.
Marine life is a vast resource, providing food, medicine, and raw materials, in addition to helping to support recreation and tourism all over the world. At a fundamental level, marine life helps determine the very nature of our planet. Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth's climate. Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land.
Many species are economically important to humans, including both finfish and shellfish. It is also becoming understood that the well-being of marine organisms and other organisms are linked in fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle) and of air (such as Earth's respiration, and movement of energy through ecosystems including the ocean). Large areas beneath the ocean surface still remain effectively unexplored.
Biological oceanography
Marine biology can be contrasted with biological oceanography. Marine life is a field of study both in marine biology and in biological oceanography. Biological oceanography is the study of how organisms affect and are affected by the physics, chemistry, and geology of the oceanographic system. Biological oceanography mostly focuses on the microorganisms within the ocean; looking at how they are affected by their environment and how that affects larger marine creatures and their ecosystem. Biological oceanography is similar to marine biology, but it studies ocean life from a different perspective. Biological oceanography takes a bottom up approach in terms of the food web, while marine biology studies the ocean from a top down perspective. Biological oceanography mainly focuses on the ecosystem of the ocean with an emphasis on plankton: their diversity (morphology, nutritional sources, motility, and metabolism); their productivity and how that plays a role in the global carbon cycle; and their distribution (predation and life cycle). Biological oceanography also investigates the role of microbes in food webs, and how humans impact the ecosystems in the oceans.
Marine habitats
Marine habitats can be divided into coastal and open ocean habitats. Coastal habitats are found in the area that extends from the shoreline to the edge of the continental shelf. Most marine life is found in coastal habitats, even though the shelf area occupies only seven percent of the total ocean area. Open ocean habitats are found in the deep ocean beyond the edge of the continental shelf. Alternatively, marine habitats can be divided into pelagic and demersal habitats. Pelagic habitats are found near the surface or in the open water column, away from the bottom of the ocean and affected by ocean currents, while demersal habitats are near or on the bottom. Marine habitats can be modified by their inhabitants. Some marine organisms, like corals, kelp and sea grasses, are ecosystem engineers which reshape the marine environment to the point where they create further habitat for other organisms.
Intertidal and near shore
Intertidal zones, the areas that are close to the shore, are constantly being exposed and covered by the ocean's tides. A huge array of life can be found within this zone. Shore habitats span from the upper intertidal zones to the area where land vegetation takes prominence. Depending on the tides, these habitats can be underwater anywhere from daily to only very infrequently. Many species here are scavengers, living off of sea life that is washed up on the shore. Many land animals also make much use of the shore and intertidal habitats. A subgroup of organisms in this habitat bores and grinds exposed rock through the process of bioerosion.
Estuaries
Estuaries are also near shore and influenced by the tides. An estuary is a partially enclosed coastal body of water with one or more rivers or streams flowing into it and with a free connection to the open sea. Estuaries form a transition zone between freshwater river environments and saltwater maritime environments. They are subject both to marine influences—such as tides, waves, and the influx of saline water—and to riverine influences—such as flows of fresh water and sediment. The shifting flows of both sea water and fresh water provide high levels of nutrients both in the water column and in sediment, making estuaries among the most productive natural habitats in the world.
Reefs
Reefs comprise some of the densest and most diverse habitats in the world. The best-known types of reefs are tropical coral reefs which exist in most tropical waters; however, reefs can also exist in cold water. Reefs are built up by corals and other calcium-depositing animals, usually on top of a rocky outcrop on the ocean floor. Reefs can also grow on other surfaces, which has made it possible to create artificial reefs. Coral reefs also support a huge community of life, including the corals themselves, their symbiotic zooxanthellae, tropical fish and many other organisms.
Much attention in marine biology is focused on coral reefs and the El Niño weather phenomenon. In 1998, coral reefs experienced the most severe mass bleaching events on record, when vast expanses of reefs across the world died because sea surface temperatures rose well above normal. Some reefs are recovering, but scientists say that between 50% and 70% of the world's coral reefs are now endangered and predict that global warming could exacerbate this trend.
Open ocean
The open ocean is relatively unproductive because of a lack of nutrients, yet because it is so vast, in total it produces the most primary productivity. The open ocean is separated into different zones, and the different zones each have different ecologies. Zones which vary according to their depth include the epipelagic, mesopelagic, bathypelagic, abyssopelagic, and hadopelagic zones. Zones which vary by the amount of light they receive include the photic and aphotic zones. Much of the aphotic zone's energy is supplied by the open ocean in the form of detritus.
Deep sea and trenches
The deepest recorded oceanic trench measured to date is the Mariana Trench, near the Philippines, in the Pacific Ocean. At such depths, water pressure is extreme and there is no sunlight, but some life still exists. A white flatfish, a shrimp and a jellyfish were seen by the American crew of the bathyscaphe Trieste when it dove to the bottom in 1960. In general, the deep sea is considered to start at the aphotic zone, the point where sunlight loses its power of transference through the water. Many life forms that live at these depths have the ability to create their own light, known as bioluminescence. Marine life also flourishes around seamounts that rise from the depths, where fish and other sea life congregate to spawn and feed. Hydrothermal vents along the mid-ocean ridge spreading centers act as oases, as do their opposites, cold seeps. Such places support unique biomes and many new microbes and other lifeforms have been discovered at these locations.
Marine life
In biology many phyla, families and genera have some species that live in the sea and others that live on land. Marine biology classifies species based on the environment rather than on taxonomy. For this reason marine biology encompasses not only organisms that live only in a marine environment, but also other organisms whose lives revolve around the sea.
Microscopic life
As inhabitants of the largest environment on Earth, microbial marine systems drive changes in every global system. Microbes are responsible for virtually all the photosynthesis that occurs in the ocean, as well as the cycling of carbon, nitrogen, phosphorus and other nutrients and trace elements.
Microscopic life undersea is incredibly diverse and still poorly understood. For example, the role of viruses in marine ecosystems is barely being explored even in the beginning of the 21st century.
The role of phytoplankton is better understood due to their critical position as the most numerous primary producers on Earth. Phytoplankton are categorized into cyanobacteria (also called blue-green algae/bacteria), various types of algae (red, green, brown, and yellow-green), diatoms, dinoflagellates, euglenoids, coccolithophorids, cryptomonads, chrysophytes, chlorophytes, prasinophytes, and silicoflagellates.
Zooplankton tend to be somewhat larger, and not all are microscopic. Many Protozoa are zooplankton, including dinoflagellates, zooflagellates, foraminiferans, and radiolarians. Some of these (such as dinoflagellates) are also phytoplankton; the distinction between plants and animals often breaks down in very small organisms. Other zooplankton include cnidarians, ctenophores, chaetognaths, molluscs, arthropods, urochordates, and annelids such as polychaetes. Many larger animals begin their life as zooplankton before they become large enough to take their familiar forms. Two examples are fish larvae and sea stars (also called starfish).
Plants and algae
Microscopic algae and plants provide important habitats for life, sometimes acting as hiding places for larval forms of larger fish and foraging places for invertebrates.
Algal life is widespread and very diverse under the ocean. Microscopic photosynthetic algae contribute a larger proportion of the world's photosynthetic output than all the terrestrial forests combined. Most of the niche occupied by sub plants on land is actually occupied by macroscopic algae in the ocean, such as Sargassum and kelp, which are commonly known as seaweeds that create kelp forests.
Plants that survive in the sea are often found in shallow waters, such as the seagrasses (examples of which are eelgrass, Zostera, and turtle grass, Thalassia). These plants have adapted to the high salinity of the ocean environment. The intertidal zone is also a good place to find plant life in the sea, where mangroves or cordgrass or beach grass might grow.
Invertebrates
As on land, invertebrates make up a huge portion of all life in the sea. Invertebrate sea life includes Cnidaria such as jellyfish and sea anemones; Ctenophora; sea worms including the phyla Platyhelminthes, Nemertea, Annelida, Sipuncula, Echiura, Chaetognatha, and Phoronida; Mollusca including shellfish, squid, octopus; Arthropoda including Chelicerata and Crustacea; Porifera; Bryozoa; Echinodermata including starfish; and Urochordata including sea squirts or tunicates. Invertebrates have no backbone. There are over a million species.
Fungi
Over 10,000 species of fungi are known from marine environments. These are parasitic on marine algae or animals, or are saprobes on algae, corals, protozoan cysts, sea grasses, wood and other substrata, and can also be found in sea foam. Spores of many species have special appendages which facilitate attachment to the substratum. A very diverse range of unusual secondary metabolites is produced by marine fungi.
Vertebrates
Fish
A reported 33,400 species of fish, including bony and cartilaginous fish, had been described by 2016, more than all other vertebrates combined. About 60% of fish species live in saltwater.
Reptiles
Reptiles which inhabit or frequent the sea include sea turtles, sea snakes, terrapins, the marine iguana, and the saltwater crocodile. Most extant marine reptiles, except for some sea snakes, are oviparous and need to return to land to lay their eggs. Thus most species, excepting sea turtles, spend most of their lives on or near land rather than in the ocean. Despite their marine adaptations, most sea snakes prefer shallow waters nearby land, around islands, especially waters that are somewhat sheltered, as well as near estuaries. Some extinct marine reptiles, such as ichthyosaurs, evolved to be viviparous and had no requirement to return to land.
Birds
Birds adapted to living in the marine environment are often called seabirds. Examples include albatross, penguins, gannets, and auks. Although they spend most of their lives in the ocean, species such as gulls can often be found thousands of miles inland.
Mammals
There are five main types of marine mammals, namely cetaceans (toothed whales and baleen whales); sirenians such as manatees; pinnipeds including seals and the walrus; sea otters; and the
polar bear. All are air-breathing, and while some such as the sperm whale can dive for prolonged periods, all must return to the surface to breathe.
Subfields
The marine ecosystem is large, and thus there are many sub-fields of marine biology. Most involve studying specializations of particular animal groups, such as phycology, invertebrate zoology and ichthyology. Other subfields study the physical effects of continual immersion in sea water and the ocean in general, adaptation to a salty environment, and the effects of changing various oceanic properties on marine life. A subfield of marine biology studies the relationships between oceans and ocean life, and global warming and environmental issues (such as carbon dioxide displacement). Recent marine biotechnology has focused largely on marine biomolecules, especially proteins, that may have uses in medicine or engineering. Marine environments are the home to many exotic biological materials that may inspire biomimetic materials.
Related fields
Marine biology is a branch of biology. It is closely linked to oceanography, especially biological oceanography, and may be regarded as a sub-field of marine science. It also encompasses many ideas from ecology. Fisheries science and marine conservation can be considered partial offshoots of marine biology (as well as environmental studies). Marine Chemistry, Physical oceanography and Atmospheric sciences are closely related to this field.
Distribution factors
An active research topic in marine biology is to discover and map the life cycles of various species and where they spend their time. Technologies that aid in this discovery include pop-up satellite archival tags, acoustic tags, and a variety of other data loggers. Marine biologists study how the ocean currents, tides and many other oceanic factors affect ocean life forms, including their growth, distribution and well-being. This has only recently become technically feasible with advances in GPS and newer underwater visual devices.
Most ocean life breeds in specific places, nests or not in others, spends time as juveniles in still others, and in maturity in yet others. Scientists know little about where many species spend different parts of their life cycles especially in the infant and juvenile years. For example, it is still largely unknown where juvenile sea turtles and some year-1 sharks travel. Recent advances in underwater tracking devices are illuminating what we know about marine organisms that live at great Ocean depths. The information that pop-up satellite archival tags give aids in certain time of the year fishing closures and development of a marine protected area. This data is important to both scientists and fishermen because they are discovering that by restricting commercial fishing in one small area they can have a large impact in maintaining a healthy fish population in a much larger area.
History
The study of marine biology dates back to Aristotle (384–322 BC), who made many observations of life in the sea around Lesbos, laying the foundation for many future discoveries. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. The British naturalist Edward Forbes (1815–1854) is generally regarded as the founder of the science of marine biology. The pace of oceanographic and marine biology studies quickly accelerated during the course of the 19th century.
The observations made in the first studies of marine biology fueled the age of discovery and exploration that followed. During this time, a vast amount of knowledge was gained about the life that exists in the oceans of the world. Many voyages contributed significantly to this pool of knowledge. Among the most significant were the voyages of HMS Beagle, during which Charles Darwin came up with his theories of evolution and on the formation of coral reefs. Another important expedition was undertaken by HMS Challenger, where findings were made of unexpectedly high species diversity among fauna, stimulating much theorizing by population ecologists on how such varieties of life could be maintained in what was thought to be such a hostile environment. This era was important for the history of marine biology, but naturalists were still limited in their studies because they lacked technology that would allow them to adequately examine species that lived in deep parts of the oceans.
The creation of marine laboratories was important because it allowed marine biologists to conduct research and process their specimens from expeditions. The oldest marine laboratory in the world, the marine station at Concarneau, France, was founded by the Collège de France in 1859. In the United States, the Scripps Institution of Oceanography dates back to 1903, while the prominent Woods Hole Oceanographic Institution was founded in 1930. The development of technology such as sound navigation and ranging (sonar), scuba diving gear, submersibles and remotely operated vehicles allowed marine biologists to discover and explore life in deep oceans that was once thought to not exist.
See also
Acoustic ecology
Aquaculture
Bathymetry
Biological oceanography
Freshwater biology
Modular ocean model
Oceanic basin
Oceanic climate
Phycology
World Ocean Atlas
Lists
Glossary of ecology
Index of biology articles
Large marine ecosystem
List of ecologists
List of marine biologists
List of marine ecoregions (WWF)
Outline of biology
Outline of ecology
References
Further references
Morrissey J and Sumich J (2011) Introduction to the Biology of Marine Life Jones & Bartlett Publishers. .
Mladenov, Philip V., Marine Biology: A Very Short Introduction, 2nd edn (Oxford, 2020; online edn, Very Short Introductions online, Feb. 2020), http://dx.doi.org/10.1093/actrade/9780198841715.001.0001, accessed 21 Jun. 2020.
External links
Smithsonian Ocean Portal
Marine Conservation Society
Marine Ecology - an evolutionary perspective
Free special issue: Marine Biology in Time and Space
Creatures of the deep ocean – National Geographic documentary, 2010.
Exploris
Freshwater and Marine Image Bank - From the University of Washington Library
Marine Training Portal - Portal grouping training initiatives in the field of Marine Biology
Biological oceanography
Fisheries science
Oceanographical terminology |
20023 | https://en.wikipedia.org/wiki/Microkernel | Microkernel | In computer science, a microkernel (often abbreviated as μ-kernel) is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
If the hardware provides multiple rings or CPU modes, the microkernel may be the only software executing at the most privileged level, which is generally referred to as supervisor or kernel mode. Traditional operating system functions, such as device drivers, protocol stacks and file systems, are typically removed from the microkernel itself and are instead run in user space.
In terms of the source code size, microkernels are often smaller than monolithic kernels. The MINIX 3 microkernel, for example, has only approximately 12,000 lines of code.
History
Microkernels trace their roots back to Danish computer pioneer Per Brinch Hansen and his tenure in Danish computer company Regnecentralen where he led software development efforts for the RC 4000 computer. In 1967, Regnecentralen was installing a RC 4000 prototype in a Polish fertilizer plant in Puławy. The computer used a small real-time operating system tailored for the needs of the plant. Brinch Hansen and his team became concerned with the lack of generality and reusability of the RC 4000 system. They feared that each installation would require a different operating system so they started to investigate novel and more general ways of creating software for the RC 4000.
In 1969, their effort resulted in the completion of the RC 4000 Multiprogramming System. Its nucleus provided inter-process communication based on message-passing for up to 23 unprivileged processes, out of which 8 at a time were protected from one another. It further implemented scheduling of time slices of programs executed in parallel, initiation and control of program execution at the request of other running programs, and initiation of data transfers to or from peripherals. Besides these elementary mechanisms, it had no built-in strategy for program execution and resource allocation. This strategy was to be implemented by a hierarchy of running programs in which parent processes had complete control over child processes and acted as their operating systems.
Following Brinch Hansen's work, microkernels have been developed since the 1970s. The term microkernel itself first appeared no later than 1981. Microkernels were meant as a response to changes in the computer world, and to several challenges adapting existing "mono-kernels" to these new systems. New device drivers, protocol stacks, file systems and other low-level systems were being developed all the time. This code was normally located in the monolithic kernel, and thus required considerable work and careful code management to work on. Microkernels were developed with the idea that all of these services would be implemented as user-space programs, like any other, allowing them to be worked on monolithically and started and stopped like any other program. This would not only allow these services to be more easily worked on, but also separated the kernel code to allow it to be finely tuned without worrying about unintended side effects. Moreover, it would allow entirely new operating systems to be "built up" on a common core, aiding OS research.
Microkernels were a very hot topic in the 1980s when the first usable local area networks were being introduced. The AmigaOS Exec kernel was an early example, introduced in 1986 and used in a PC with relative commercial success. The lack of memory protection, considered in other respects a flaw, allowed this kernel to have very high message-passing performance because it did not need to copy data while exchanging messages between user-space programs.
The same mechanisms that allowed the kernel to be distributed into user space also allowed the system to be distributed across network links. The first microkernels, notably Mach created by Richard Rashid, proved to have disappointing performance, but the inherent advantages appeared so great that it was a major line of research into the late 1990s. However, during this time the speed of computers grew greatly in relation to networking systems, and the disadvantages in performance came to overwhelm the advantages in development terms.
Many attempts were made to adapt the existing systems to have better performance, but the overhead was always considerable and most of these efforts required the user-space programs to be moved back into the kernel. By 2000, most large-scale Mach kernel efforts had ended, although Apple's macOS, released in 2001, still uses a hybrid kernel called XNU, which combines a heavily modified (hybrid) OSF/1's Mach kernel (OSFMK 7.3 kernel) with code from BSD UNIX, and this kernel is also used in iOS, tvOS, and watchOS. Windows NT, starting with NT 3.1 and continuing with Windows 10, uses a hybrid kernel design. The Mach-based GNU Hurd is also functional and included in testing versions of Arch Linux and Debian.
Although major work on microkernels had largely ended, experimenters continued development. It has since been shown that many of the performance problems of earlier designs were not a fundamental limitation of the concept, but instead due to the designer's desire to use single-purpose systems to implement as many of these services as possible. Using a more pragmatic approach to the problem, including assembly code and relying on the processor to enforce concepts normally supported in software led to a new series of microkernels with dramatically improved performance.
Microkernels are closely related to exokernels. They also have much in common with hypervisors, but the latter make no claim to minimality and are specialized to supporting virtual machines; the L4 microkernel frequently finds use in a hypervisor capacity.
Introduction
Early operating system kernels were rather small, partly because computer memory was limited. As the capability of computers grew, the number of devices the kernel had to control also grew. Throughout the early history of Unix, kernels were generally small, even though they contained various device drivers and file system implementations. When address spaces increased from 16 to 32 bits, kernel design was no longer constrained by the hardware architecture, and kernels began to grow larger.
The Berkeley Software Distribution (BSD) of Unix began the era of larger kernels. In addition to operating a basic system consisting of the CPU, disks and printers, BSD added a complete TCP/IP networking system and a number of "virtual" devices that allowed the existing programs to work 'invisibly' over the network. This growth continued for many years, resulting in kernels with millions of lines of source code. As a result of this growth, kernels were prone to bugs and became increasingly difficult to maintain.
The microkernel was intended to address this growth of kernels and the difficulties that resulted. In theory, the microkernel design allows for easier management of code due to its division into user-space services. This also allows for increased security and stability resulting from the reduced amount of code running in kernel mode. For example, if a networking service crashed due to a buffer overflow, only the networking service's memory would be corrupted, leaving the rest of the system still functional.
Inter-process communication
Inter-process communication (IPC) is any mechanism which allows separate processes to communicate with each other, usually by sending messages. Shared memory is, strictly defined, also an inter-process communication mechanism, but the abbreviation IPC usually refers to message passing only, and it is the latter that is particularly relevant to microkernels. IPC allows the operating system to be built from a number of smaller programs called servers, which are used by other programs on the system, invoked via IPC. Most or all support for peripheral hardware is handled in this fashion, with servers for device drivers, network protocol stacks, file systems, graphics, etc.
IPC can be synchronous or asynchronous. Asynchronous IPC is analogous to network communication: the sender dispatches a message and continues executing. The receiver checks (polls) for the availability of the message, or is alerted to it via some notification mechanism. Asynchronous IPC requires that the kernel maintains buffers and queues for messages, and deals with buffer overflows; it also requires double copying of messages (sender to kernel and kernel to receiver). In synchronous IPC, the first party (sender or receiver) blocks until the other party is ready to perform the IPC. It does not require buffering or multiple copies, but the implicit rendezvous can make programming tricky. Most programmers prefer asynchronous send and synchronous receive.
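The two styles can be contrasted with a toy Python model (illustrative only: the class and method names are invented, and a real kernel operates on threads and hardware state, not Python objects):

```python
from collections import deque

class AsyncChannel:
    """Asynchronous IPC: the kernel buffers messages, so the sender never
    blocks. Each message is copied twice (sender -> kernel -> receiver)."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.queue = deque()

    def send(self, msg):
        if len(self.queue) >= self.capacity:
            raise OverflowError("kernel buffer full")  # kernel must handle overflow
        self.queue.append(bytes(msg))                  # copy 1: sender -> kernel

    def poll(self):
        if not self.queue:
            return None                                # receiver polls, may find nothing
        return bytes(self.queue.popleft())             # copy 2: kernel -> receiver


class SyncChannel:
    """Synchronous IPC: a rendezvous. Blocking is modeled by parking the
    message in a single pending slot until the receiver arrives; no
    kernel-managed queue of arbitrary depth is needed."""
    def __init__(self):
        self.waiting = None

    def send(self, msg):
        assert self.waiting is None, "sender would block until receiver is ready"
        self.waiting = msg

    def receive(self):
        msg, self.waiting = self.waiting, None
        return msg
```

Note how the asynchronous channel needs a bounded kernel buffer and two copies per message, while the synchronous rendezvous needs only a single pending slot.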
First-generation microkernels typically supported synchronous as well as asynchronous IPC, and suffered from poor IPC performance. Jochen Liedtke assumed the design and implementation of the IPC mechanisms to be the underlying reason for this poor performance. In his L4 microkernel he pioneered methods that lowered IPC costs by an order of magnitude. These include an IPC system call that supports a send as well as a receive operation, making all IPC synchronous, and passing as much data as possible in registers. Furthermore, Liedtke introduced the concept of the direct process switch, where during an IPC execution an (incomplete) context switch is performed from the sender directly to the receiver. If, as in L4, part or all of the message is passed in registers, this transfers the in-register part of the message without any copying at all. Furthermore, the overhead of invoking the scheduler is avoided; this is especially beneficial in the common case where IPC is used in a remote procedure call (RPC) type fashion by a client invoking a server. Another optimization, called lazy scheduling, avoids traversing scheduling queues during IPC by leaving threads that block during IPC in the ready queue. Once the scheduler is invoked, it moves such threads to the appropriate waiting queue. As in many cases a thread gets unblocked before the next scheduler invocation, this approach saves significant work. Similar approaches have since been adopted by QNX and MINIX 3.
In a series of experiments, Chen and Bershad compared memory cycles per instruction (MCPI) of monolithic Ultrix with those of microkernel Mach combined with a 4.3BSD Unix server running in user space. Their results explained Mach's poorer performance by higher MCPI and demonstrated that IPC alone is not responsible for much of the system overhead, suggesting that optimizations focused exclusively on IPC will have a limited effect. Liedtke later refined Chen and Bershad's results by making an observation that the bulk of the difference between Ultrix and Mach MCPI was caused by capacity cache-misses and concluding that drastically reducing the cache working set of a microkernel will solve the problem.
In a client-server system, most communication is essentially synchronous, even if using asynchronous primitives, as the typical operation is a client invoking a server and then waiting for a reply. As it also lends itself to more efficient implementation, most microkernels generally followed L4's lead and only provided a synchronous IPC primitive. Asynchronous IPC could be implemented on top by using helper threads. However, experience has shown that the utility of synchronous IPC is dubious: synchronous IPC forces a multi-threaded design onto otherwise simple systems, with the resulting synchronization complexities. Moreover, an RPC-like server invocation sequentializes client and server, which should be avoided if they are running on separate cores. Versions of L4 deployed in commercial products have therefore found it necessary to add an asynchronous notification mechanism to better support asynchronous communication. This signal-like mechanism does not carry data and therefore does not require buffering by the kernel. By having two forms of IPC, they have nonetheless violated the principle of minimality. Other versions of L4 have switched to asynchronous IPC completely.
As synchronous IPC blocks the first party until the other is ready, unrestricted use could easily lead to deadlocks. Furthermore, a client could easily mount a denial-of-service attack on a server by sending a request and never attempting to receive the reply. Therefore, synchronous IPC must provide a means to prevent indefinite blocking. Many microkernels provide timeouts on IPC calls, which limit the blocking time. In practice, choosing sensible timeout values is difficult, and systems almost inevitably use infinite timeouts for clients and zero timeouts for servers. As a consequence, the trend is towards not providing arbitrary timeouts, but only a flag which indicates that the IPC should fail immediately if the partner is not ready. This approach effectively provides a choice of the two timeout values of zero and infinity. Recent versions of L4 and MINIX have gone down this path (older versions of L4 used timeouts). QNX avoids the problem by requiring the client to specify the reply buffer as part of the message send call. When the server replies the kernel copies the data to the client's buffer, without having to wait for the client to receive the response explicitly.
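The "zero or infinity" outcome described above can be sketched as a single flag on the send path (a hypothetical model, not any real kernel's API):

```python
class IPCWouldBlock(Exception):
    """Raised when the partner is not ready and a zero timeout was requested."""

class Endpoint:
    """Toy model of the zero-or-infinity timeout policy: instead of
    arbitrary timeout values, a single flag selects fail-immediately
    (timeout zero) or block-indefinitely (timeout infinity)."""
    def __init__(self):
        self.receiver_ready = False

    def send(self, msg, block=True):
        if not self.receiver_ready:
            if not block:
                raise IPCWouldBlock()  # timeout of zero: fail at once
            # Timeout of infinity: a real kernel would block the sending
            # thread here until a matching receive is performed.
            self.pending = msg
            return "blocked"
        return "delivered"
```

A server would typically call with `block=False` (never wait on an unresponsive client), while clients use the default blocking send.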
Servers
Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs. This allows some servers, particularly device drivers, to interact directly with hardware.
A basic set of servers for a general-purpose microkernel includes file system servers, device driver servers, networking servers, display servers, and user interface device servers. This set of servers (drawn from QNX) provides roughly the set of services offered by a Unix monolithic kernel. The necessary servers are started at system startup and provide services, such as file, network, and device access, to ordinary application programs. With such servers running in the environment of a user application, server development is similar to ordinary application development, rather than the build-and-boot process needed for kernel development.
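A server in such a system is structured like any other event-driven user program: a loop that receives requests over IPC and replies. The following toy Python loop gives the flavor (the operation names and message layout are invented for illustration):

```python
def file_server(ipc_receive, ipc_reply, files):
    """Toy file-system server: wait for IPC requests, reply to each.
    'files' stands in for the server's backing store."""
    while True:
        sender, op, *args = ipc_receive()
        if op == "read":
            ipc_reply(sender, files.get(args[0], b""))
        elif op == "write":
            files[args[0]] = args[1]
            ipc_reply(sender, b"ok")
        elif op == "shutdown":
            ipc_reply(sender, b"bye")
            return

def run(requests, files):
    """Drive the server with a scripted queue of requests; collect replies."""
    it = iter(requests)
    replies = []
    file_server(lambda: next(it), lambda s, data: replies.append((s, data)), files)
    return replies
```

Because the loop is an ordinary user program, it can be developed, debugged, and restarted like any application, which is the development-model advantage described above.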
Additionally, many "crashes" can be corrected by simply stopping and restarting the server. However, part of the system state is lost with the failing server, hence this approach requires applications to cope with failure. A good example is a server responsible for TCP/IP connections: If this server is restarted, applications will experience a "lost" connection, a normal occurrence in a networked system. For other services, failure is less expected and may require changes to application code. For QNX, restart capability is offered as the QNX High Availability Toolkit.
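The restart-on-crash policy can be sketched as a tiny supervisor (hypothetical; real facilities such as the QNX High Availability Toolkit are far more elaborate):

```python
def supervise(make_server, max_restarts=3):
    """Toy restart policy: run the server; on a crash, start a fresh
    instance. In-memory state is lost across restarts, which is why,
    as noted above, clients must tolerate e.g. a 'lost' connection."""
    restarts = 0
    while restarts <= max_restarts:
        server = make_server()
        try:
            return server()      # normal termination
        except Exception:
            restarts += 1        # crash: state lost, start over
    raise RuntimeError("giving up after repeated crashes")
```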
Device drivers
Device drivers frequently perform direct memory access (DMA), and therefore can write to arbitrary locations of physical memory, including various kernel data structures. Such drivers must therefore be trusted. It is a common misconception that this means that they must be part of the kernel. In fact, a driver is not inherently more or less trustworthy by being part of the kernel.
While running a device driver in user space does not necessarily reduce the damage a misbehaving driver can cause, in practice it is beneficial for system stability in the presence of buggy (rather than malicious) drivers: memory-access violations by the driver code itself (as opposed to the device) may still be caught by the memory-management hardware. Furthermore, many devices are not DMA-capable; their drivers can be made untrusted by running them in user space. Recently, an increasing number of computers feature IOMMUs, many of which can be used to restrict a device's access to physical memory. This also allows user-mode drivers to become untrusted.
User-mode drivers actually predate microkernels. The Michigan Terminal System (MTS), in 1967, supported user-space drivers (including its file system support), making it the first operating system to be designed with that capability. Historically, drivers were less of a problem, as the number of devices was small and trusted anyway, so having them in the kernel simplified the design and avoided potential performance problems. This led to the traditional driver-in-the-kernel style of Unix, Linux, and Windows NT. With the proliferation of various kinds of peripherals, the amount of driver code escalated, and in modern operating systems it dominates the kernel in code size.
Essential components and minimality
As a microkernel must allow building arbitrary operating system services on top, it must provide some core functionality. At a minimum, this includes:
Some mechanisms for dealing with address spaces, required for managing memory protection
Some execution abstraction to manage CPU allocation, typically threads or scheduler activations
Inter-process communication, required to invoke servers running in their own address spaces
This minimal design was pioneered by Brinch Hansen's Nucleus and the hypervisor of IBM's VM. It has since been formalised in Liedtke's minimality principle:
A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting competing implementations, would prevent the implementation of the system's required functionality.
Everything else can be done in a usermode program, although device drivers implemented as user programs may on some processor architectures require special privileges to access I/O hardware.
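Taken together, the minimal mechanism set above amounts to a very small API surface. A schematic sketch in Python (all names are illustrative; a real microkernel implements these as privileged operations on hardware state):

```python
class Microkernel:
    """Sketch of the three minimal mechanisms: address spaces, an
    execution abstraction, and IPC. Everything else (drivers, file
    systems, policy) would live in user-space servers."""
    def __init__(self):
        self.address_spaces = {}   # address-space management
        self.threads = {}          # execution abstraction (CPU allocation)
        self.endpoints = {}        # IPC endpoints

    def map_page(self, aspace, vaddr, frame):
        """Install a virtual-to-physical mapping in an address space."""
        self.address_spaces.setdefault(aspace, {})[vaddr] = frame

    def create_thread(self, tid, aspace):
        """Create a schedulable thread bound to an address space."""
        self.threads[tid] = {"aspace": aspace, "state": "ready"}

    def ipc_send(self, endpoint, msg):
        """Queue a message on an endpoint (asynchronous, for simplicity)."""
        self.endpoints.setdefault(endpoint, []).append(msg)

    def ipc_recv(self, endpoint):
        """Dequeue the next message, or None if the endpoint is empty."""
        q = self.endpoints.get(endpoint, [])
        return q.pop(0) if q else None
```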
Related to the minimality principle, and equally important for microkernel design, is the separation of mechanism and policy: it is what enables the construction of arbitrary systems on top of a minimal kernel. Any policy built into the kernel cannot be overridden at user level and therefore limits the generality of the microkernel. Policy implemented in user-level servers can be changed by replacing the servers (or letting the application choose between competing servers offering similar services).
For efficiency, most microkernels contain schedulers and manage timers, in violation of the minimality principle and the principle of policy-mechanism separation.
Start-up (booting) of a microkernel-based system requires device drivers, which are not part of the kernel. Typically this means that they are packaged with the kernel in the boot image, and the kernel supports a bootstrap protocol that defines how the drivers are located and started; this is the traditional bootstrap procedure of L4 microkernels. Some microkernels simplify this by placing some key drivers inside the kernel (in violation of the minimality principle); LynxOS and the original MINIX are examples. Some even include a file system in the kernel to simplify booting. A microkernel-based system may also boot via a multiboot-compatible boot loader. Such systems usually load statically linked servers to make an initial bootstrap or mount an OS image to continue bootstrapping.
A key component of a microkernel is a good IPC system and virtual-memory-manager design that allows implementing page-fault handling and swapping in usermode servers in a safe way. Since all services are performed by usermode programs, efficient means of communication between programs are essential, far more so than in monolithic kernels. The design of the IPC system makes or breaks a microkernel. To be effective, the IPC system must not only have low overhead, but also interact well with CPU scheduling.
Performance
On most mainstream processors, obtaining a service is inherently more expensive in a microkernel-based system than a monolithic system. In the monolithic system, the service is obtained by a single system call, which requires two mode switches (changes of the processor's ring or CPU mode). In the microkernel-based system, the service is obtained by sending an IPC message to a server, and obtaining the result in another IPC message from the server. This requires a context switch if the drivers are implemented as processes, or a function call if they are implemented as procedures. In addition, passing actual data to the server and back may incur extra copying overhead, while in a monolithic system the kernel can directly access the data in the client's buffers.
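The difference can be made concrete with a back-of-the-envelope cost model (the unit costs below are made-up illustrative numbers, not measurements of any real system):

```python
def service_cost(kind, mode_switch=1, context_switch=10, copy=2):
    """Compare the two service-invocation paths described above,
    in arbitrary cost units (illustrative, not measured)."""
    if kind == "monolithic":
        # One system call: enter and leave the kernel (two mode switches).
        return 2 * mode_switch
    if kind == "microkernel":
        # Request IPC plus reply IPC: two context switches, plus copying
        # the message to the server and the result back to the client.
        return 2 * context_switch + 2 * copy
    raise ValueError(kind)
```

The point is structural: the microkernel path pays for context switches and message copies that the monolithic path avoids, which is why IPC cost dominates microkernel performance work.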
Performance is therefore a potential issue in microkernel systems. The experience of first-generation microkernels such as Mach and ChorusOS showed that systems based on them performed very poorly. However, Jochen Liedtke showed that Mach's performance problems were the result of poor design and implementation, specifically Mach's excessive cache footprint. Liedtke demonstrated with his own L4 microkernel that through careful design and implementation, and especially by following the minimality principle, IPC costs could be reduced by more than an order of magnitude compared to Mach. L4's IPC performance is still unbeaten across a range of architectures.
While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative for second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can be built with good performance. It has been shown that a monolithic Linux server ported to L4 exhibits only a few percent overhead over native Linux. However, such a single-server system exhibits few, if any, of the advantages microkernels are supposed to provide by structuring operating system functionality into separate servers.
A number of commercial multi-server systems exist, in particular the real-time systems QNX and Integrity. No comprehensive comparison of performance relative to monolithic systems has been published for those multiserver systems. Furthermore, performance does not seem to be the overriding concern for those commercial systems, which instead emphasize reliably quick interrupt handling response times (QNX) and simplicity for the sake of robustness. An attempt to build a high-performance multiserver operating system was the IBM Sawmill Linux project. However, this project was never completed.
It has been shown in the meantime that user-level device drivers can come close to the performance of in-kernel drivers even for such high-throughput, high-interrupt devices as Gigabit Ethernet. This seems to imply that high-performance multi-server systems are possible.
Security
The security benefits of microkernels have been frequently discussed. In the context of security the minimality principle of microkernels is, some have argued, a direct consequence of the principle of least privilege, according to which all code should have only the privileges needed to provide required functionality. Minimality requires that a system's trusted computing base (TCB) should be kept minimal. As the kernel (the code that executes in the privileged mode of the hardware) has unvetted access to any data and can thus violate its integrity or confidentiality, the kernel is always part of the TCB. Minimizing it is natural in a security-driven design.
Consequently, microkernel designs have been used for systems designed for high-security applications, including KeyKOS, EROS and military systems. In fact, the Common Criteria (CC) at the highest assurance level (Evaluation Assurance Level (EAL) 7) has an explicit requirement that the target of evaluation be "simple", an acknowledgment of the practical impossibility of establishing true trustworthiness for a complex system. Again, the term "simple" is misleading and ill-defined. At least the Department of Defense Trusted Computer System Evaluation Criteria introduced somewhat more precise verbiage at the B3/A1 classes.
In 2018, a paper presented at the Asia-Pacific Systems Conference claimed that microkernels were demonstrably safer than monolithic kernels by investigating all published critical CVEs for the Linux kernel at the time. The study concluded that 40% of the issues could not occur at all in a formally verified microkernel, and only 4% of the issues would remain entirely unmitigated in such a system.
Third generation
More recent work on microkernels has been focusing on formal specifications of the kernel API, and formal proofs of the API's security properties and implementation correctness. The first example of this is a mathematical proof of the confinement mechanisms in EROS, based on a simplified model of the EROS API. More recently (in 2007) a comprehensive set of machine-checked proofs was performed of the properties of the protection model of seL4, a version of L4.
This has led to what is referred to as third-generation microkernels, characterised by a security-oriented API with resource access controlled by capabilities, virtualization as a first-class concern, novel approaches to kernel resource management, and a design goal of suitability for formal analysis, besides the usual goal of high performance. Examples are Coyotos, seL4, Nova, Redox and Fiasco.OC.
In the case of seL4, complete formal verification of the implementation has been achieved, i.e. a mathematical proof that the kernel's implementation is consistent with its formal specification. This provides a guarantee that the properties proved about the API actually hold for the real kernel, a degree of assurance which goes beyond even CC EAL7. It was followed by proofs of security-enforcement properties of the API, and a proof demonstrating that the executable binary code is a correct translation of the C implementation, taking the compiler out of the TCB. Taken together, these proofs establish an end-to-end proof of security properties of the kernel.
Examples
Some examples of microkernels are:
The L4 microkernel family
Zircon
Horizon
Nanokernel
The term nanokernel or picokernel historically referred to:
A kernel where the total amount of kernel code, i.e. code executing in the privileged mode of the hardware, is very small. The term picokernel was sometimes used to further emphasize small size. The term nanokernel was coined by Jonathan S. Shapiro in the paper The KeyKOS NanoKernel Architecture. It was a sardonic response to Mach, which claimed to be a microkernel while Shapiro considered it monolithic, essentially unstructured, and slower than the systems it sought to replace. Subsequent reuse of and response to the term, including the picokernel coinage, suggest that the point was largely missed. Both nanokernel and picokernel have subsequently come to have the same meaning expressed by the term microkernel.
A virtualization layer underneath an operating system, which is more correctly referred to as a hypervisor.
A hardware abstraction layer that forms the lowest-level part of a kernel, sometimes used to provide real-time functionality to normal operating systems, like Adeos.
There is also at least one case where the term nanokernel is used to refer not to a small kernel, but one that supports a nanosecond clock resolution.
See also
Kernel (operating system)
Exokernel
Hybrid kernel
Loadable kernel module
Monolithic kernel
Microservices
Tanenbaum–Torvalds debate
Trusted computing base
Unikernel
Multi-Environment Real-Time
References
Further reading
Scientific articles about microkernels (on CiteSeerX), including:
– the basic QNX reference.
– the basic reliable reference.
– the basic Mach reference.
An assessment of the present and future state of microkernel-based OSes as of January 1994
MicroKernel page from the Portland Pattern Repository
The Tanenbaum–Torvalds debate
The Tanenbaum-Torvalds Debate, 1992.01.29
Tanenbaum, A. S. "Can We Make Operating Systems Reliable and Secure?".
Torvalds, L. Linus Torvalds about the microkernels again, 2006.05.09
Shapiro, J. "Debunking Linus's Latest".
Tanenbaum, A. S. "Tanenbaum-Torvalds Debate: Part II".
Microkernels
|
20024 | https://en.wikipedia.org/wiki/Mach | Mach | Mach may refer to:
Computing
Mach (kernel), an operating systems kernel technology
ATI Mach, a 2D GPU chip by ATI
GNU Mach, the microkernel upon which GNU Hurd is based
mach, a computer program for building RPM packages in a chroot environment
Places
Machh or Mach, a town in Pakistan
Machynlleth or Mach, a town in Wales
Mach (crater), a lunar crater
3949 Mach, an asteroid
Other uses
Mach number, a measure of speed based on the speed of sound
Mach (surname)
"Mach" (song), a 2010 song by Rainbow
Mach (Transformers), a Multiforce character in Transformers: Victory
M.A.C.H. (video game)
Muscarinic acetylcholine receptor (mACh)
Fly Castelluccio Mach, an Italian paramotor design
Vietnamese mạch, an obsolete Vietnamese currency unit
Hayato Sakurai or Mach (born 1975), mixed martial artist
M.A.C.H., a fictional series of cyborg and robot agents in M.A.C.H. 1
See also
Mac (disambiguation)
Mach O (disambiguation)
Mach 1 (disambiguation)
Mach 2 (disambiguation)
Mach 3 (disambiguation)
Mach 4 (disambiguation)
Mach 5 (disambiguation)
Mach 6 (disambiguation)
Mach 7 (disambiguation)
Mach 8 (disambiguation)
Mach 9 (disambiguation)
Mach 10 (disambiguation)
Mache (unit), an obsolete unit of volumic radioactivity
Mack (disambiguation)
Mak (disambiguation) |
20025 | https://en.wikipedia.org/wiki/Multihull | Multihull | A multihull is a ship or boat with more than one hull, whereas a vessel with a single hull is a monohull.
Multihull ships can be classified by the number of hulls, by their arrangement and by their shapes and sizes.
Multihull history
Single-outrigger boats, double-canoes (catamarans), and double-outrigger boats (trimarans) of the Austronesian peoples are the direct antecedents of modern multihull vessels. They were developed during the Austronesian Expansion (c. 3000 to 1500 BC) which allowed Austronesians to colonize maritime Southeast Asia, Micronesia, Island Melanesia, Madagascar, and Polynesia. These Austronesian vessels are still widely used today by traditional fishermen in Austronesian regions in maritime Southeast Asia, Oceania and Madagascar; as well as areas they were introduced to by Austronesians in ancient times like in the East African coast and in South Asia.
Greek sources also describe large third-century BC catamarans, one built under the supervision of Archimedes, the Syracusia, and another reportedly built by Ptolemy IV Philopator of Egypt, the Tessarakonteres.
Modern developers
Modern pioneers of multihull design include James Wharram (UK), Derek Kelsall (UK), Tom Lack (UK), Lock Crowther (Aust), Hedly Nicol (Aust), Malcolm Tennant (NZ), Jim Brown (USA), Arthur Piver (USA), Chris White (US), Ian Farrier (NZ), LOMOcean (NZ), and Dick Newick (USA).
Multihull types
Single-outrigger ("proa")
A single-outrigger canoe is a canoe with a slender outrigger ("ama") attached by two or more struts ("akas"). This craft is normally propelled by paddles. Single-outrigger canoes that use sails are usually, if inaccurately, referred to as "proas". While single-outrigger canoes and proas both derive stability from the outrigger, the proa has the greater need of the outrigger to counter the heeling effect of the sail. The outrigger on a proa can be on either the lee or the windward side, or, in a tacking proa, interchangeable. More recently, however, proas tend to keep the outrigger either to leeward or to windward, which means that instead of tacking, a "shunt" is required, whereby the bow becomes the stern and the stern becomes the bow; see Pacific, Atlantic, Harry, and tacking proas.
Catamaran (double-hull)
A catamaran is a vessel with twin hulls. Commercial catamarans began in 17th century England. Separate attempts at steam-powered catamarans were carried out by the middle of the 20th century. However, success required better materials and more developed hydrodynamic technologies. During the second half of the 20th century catamaran designs flourished. Catamaran configurations are used for racing, sailing, tourist and fishing boats.
The hulls of a catamaran are typically connected by a bridgedeck, although some simpler cruising catamarans simply have a trampoline stretched between the crossbeams (or "akas"). Small beachable catamarans, such as the Hobie Cat, also have only a trampoline between the hulls.
Catamarans derive stability from the distance between the hulls—transverse clearance—the greater this distance, the greater the stability. Typically, catamaran hulls are slim, although they may flare above the waterline to give reserve buoyancy. The vertical clearance between the design waterplane and the bottom of the bridge deck determines the likelihood of contact with waves. Increased vertical clearance diminishes such contact and increases seaworthiness, within limits.
Trimaran (double-outrigger)
A trimaran (or double-outrigger) is a vessel with two outrigger floats attached on either side of a main hull by a crossbeam, wing, or other form of superstructure. They are derived from traditional double-outrigger vessels of maritime Southeast Asia. Despite not being traditionally Polynesian, western trimarans use traditional Polynesian terms for the hull (vaka), the floats (ama), and connectors (aka). The word "trimaran" is a portmanteau of "tri" and "(cata)maran", a term that is thought to have been coined by Victor Tchetchet, a pioneering, Ukrainian-born modern multihull designer.
Some trimaran configurations use the outlying hulls to enhance stability and allow for shallow draft, examples include the experimental ship RV Triton and the Independence class of littoral combat ships (US).
Four and five hulls
Some multihulls with four (quadrimaran) or five (pentamaran) hulls have been proposed; few have been built. A Swiss entrepreneur is attempting to raise €25 million to build a sail-driven quadrimaran that would use solar power to scoop plastic from the ocean; the project is scheduled for launch in 2020. A French manufacturer, Tera-4, produces motor quadrimarans which use aerodynamic lift between the four hulls to promote planing and reduce power consumption.
Design concepts for vessels with two pairs of outriggers have been referred to as pentamarans. The design concept comprises a narrow, long hull that cuts through waves. The outriggers then provide the stability that such a narrow hull needs. While the aft sponsons act as trimaran sponsons do, the front sponsons do not normally touch the water; only if the ship rolls to one side do they provide added buoyancy to correct the roll. BMT Group, a shipbuilding and engineering company in the UK, has proposed a fast cargo ship and a yacht using this kind of hull.
SWATH multihulls
Multihull designs may have hull beams that are slimmer at the water surface ("waterplane") than underwater. This arrangement allows good wave-piercing, while keeping a buoyant hydrodynamic hull beneath the waterplane. In a catamaran configuration this is called a small waterplane area twin hull, or SWATH. While SWATHs are stable in rough seas, they have the drawbacks, compared with other catamarans, of having a deeper draft, being more sensitive to loading, and requiring more power because of their higher underwater surface areas. Triple-hull configurations of small waterplane area craft had been studied, but not built, as of 2008.
Performance
Each hull of a multihull vessel can be narrower than that of a monohull with the same displacement. With long, narrow hulls, a multihull typically produces very small bow waves and wakes, a consequence of a favorable Froude number. Vessels with beamy hulls (typically monohulls) normally create a large bow wave and wake. Such a vessel is limited by its "hull speed", being unable to "climb over" its bow wave unless it changes from displacement mode to planing mode. Vessels with slim hulls (typically multihulls) will normally create no appreciable bow wave to limit their progress.
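The "hull speed" limit and the Froude number mentioned above are simple to compute (standard naval-architecture formulas; the 1.34 coefficient is the classic displacement-hull rule of thumb):

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed_ms, waterline_m):
    """Froude number Fr = v / sqrt(g * L). Displacement hulls hit their
    'hull speed' wall around Fr ~ 0.4, where the bow-wave length matches
    the waterline length."""
    return speed_ms / sqrt(G * waterline_m)

def hull_speed_knots(waterline_ft):
    """Rule of thumb: hull speed in knots ~ 1.34 * sqrt(LWL in feet)."""
    return 1.34 * sqrt(waterline_ft)
```

A slim multihull hull can exceed this nominal figure without planing because its narrow beam keeps the bow wave small in the first place.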
In 1978, 101 years after catamarans like Amaryllis were banned from yacht racing, they returned to the sport. This started with the victory of the trimaran Olympus Photo, skippered by Mike Birch, in the first Route du Rhum. Thereafter, no open ocean race was won by a monohull, and winning times have dropped by 70% since 1978: Olympus Photo's winning time of 23 days 6 h 58 min 35 s fell to Gitana 11's 7 days 17 h 19 min 6 s in 2006. Around 2016 the first large wind-driven foil-borne racing catamarans were built. These cats rise onto foils and T-foiled rudders only at higher speeds.
Sailing multihulls and workboats
The increasing popularity of catamarans since the 1960s is due to their added space, speed, shallow draft, and lack of heeling underway. The stability of a multihull makes sailing much less tiring for the crew, and is particularly suitable for families. Having no need for ballast for stability, multihulls are much lighter than monohull sailboats; but a multihull's fine hull sections mean that one must take care not to overload the vessel. Powerboat catamarans are increasingly used for racing, cruising, and as workboats and fishing boats. Speed, a stable working platform, safety, and added space are the prime advantages of power cats.
"The weight of a multihull, of this length, is probably not much more than half the weight of a monohull of the same length and it can be sailed with less crew effort."
Racing catamarans and trimarans are popular in France, New Zealand and Australia. Cruising cats are commonest in the Caribbean and Mediterranean (where they form the bulk of the charter business) and Australia. Multihulls are less common in the US, perhaps because their increased beam requires wider docks/slips. Smaller multihulls may be collapsible and trailerable, and thus suitable for daysailers and racers. Until the 1960s most multihull sailboats (except for beach cats) were built either by their owners or by boat builders; since then companies have been selling mass-produced boats, of which there are more than 150 models.
Small sailing catamarans are also called beach catamarans. The Malibu Outrigger (1950) was one of the first beach-launched multihull sailboats. The most recognised racing classes are the Hobie Cat 14, Formula 18 cats, A-cats, the current Olympic Nacra 17, the former Olympic multihull Tornado and New Zealand's Weta trimaran.
Mega or super catamarans are those over 60 feet in length. These often receive substantial customisation at the owner's request. Builders include Corsair Marine (mid-sized trimarans) and HanseYachts' Privilège brand (large catamarans). The largest manufacturer of large multihulls is Fountaine-Pajot in France.
Powerboats range from small single-pilot Formula 1 racing boats to large multi-engined or gas-turbine powered boats that are used in offshore racing and employ two to four pilots.
See also
Notes
References and Bibliography
Harvey, Derek, Multihulls for Cruising and Racing, Adlard Coles, London, 1990.
External links
The Multihull Offshore Cruising & Racing Association
The UK Catamaran Racing Association
The Multihull Yacht Club of Queensland (Australia)
Multihull Boatbuilding Information / Community
Articles and news on multihulls, profiles of boats, designers, yards, etc.
Multihulls designer & builder
International Sailing Federation
The multihulls reference magazine
The multihulls reference magazine (Australia)
Multics Relational Data Store

The Multics Relational Data Store, or MRDS for short, was the first commercial relational database management system. It was written in PL/I by Honeywell for the Multics operating system and first sold in June 1976. Unlike the SQL systems that emerged in the late 1970s and early 1980s, MRDS used a command language only for basic data manipulation, equivalent to the SELECT or UPDATE statements in SQL. Other operations, such as creating a new database or general file management, required the use of a separate command program.
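The split described above, a data-manipulation sublanguage on one side and separate administrative utilities on the other, can be illustrated with a modern sketch. The example below uses Python's sqlite3 module and generic SQL purely for comparison; it is an assumption-free analogy only in the sense that SELECT and UPDATE are the operations the article names, and none of this code reflects MRDS's actual command syntax, which is not shown here.

```python
import sqlite3

# Creating the database and defining its schema: in MRDS this kind of
# work was done by separate command programs, outside the query language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("alice", "dev"), ("bob", "ops")])

# Basic data manipulation: the part MRDS's command language covered,
# equivalent to SQL's UPDATE and SELECT statements.
conn.execute("UPDATE employees SET dept = 'dev' WHERE name = 'bob'")
rows = conn.execute("SELECT name, dept FROM employees").fetchall()
print(rows)  # [('alice', 'dev'), ('bob', 'dev')]
```

The point of the contrast is that an SQL system exposes both halves (CREATE TABLE and SELECT) through one language, whereas MRDS reserved its command language for the second half only.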
References
Paul McJones, "Multics Relational Data Store (MRDS)"
Multics software
Proprietary database management systems
Mike Oldfield

Michael Gordon Oldfield (born 15 May 1953) is a British musician, songwriter, and producer best known for his debut studio album Tubular Bells (1973), which became an unexpected critical and commercial success and propelled him to worldwide fame. Though primarily a guitarist, Oldfield plays a range of instruments, which includes keyboards, percussion, and vocals. He has adopted a range of musical styles throughout his career, including progressive rock, world, folk, classical, electronic, ambient, and new age music.
Oldfield took up the guitar at age ten and left school in his teens to embark on a music career. From 1967 to 1970, he and his sister Sally Oldfield were a folk duo The Sallyangie, after which he performed with Kevin Ayers. In 1971, Oldfield started work on Tubular Bells which caught the attention of Richard Branson, who agreed to release it on his new label, Virgin Records. Its opening was used in the horror film The Exorcist and the album went on to sell over 2.7 million copies in the UK. Oldfield followed it with Hergest Ridge (1974), Ommadawn (1975), and Incantations (1978), all of which feature longform and mostly instrumental pieces.
In the late 1970s, Oldfield began to tour and release more commercial and song-based music, beginning with Platinum (1979), QE2 (1980), and Five Miles Out (1982). His most successful album of this period was Crises (1983), which features the worldwide hit single "Moonlight Shadow" with vocalist Maggie Reilly. After signing with WEA in the early 1990s, Oldfield released his most significant album of the decade, Tubular Bells II (1992), and experimented with virtual reality and gaming content with his MusicVR project. In 2012, he performed at the opening ceremony for the 2012 Olympic Games held in London. Oldfield's discography includes 26 studio albums, nine of which have reached the UK top ten. His most recent album is Return to Ommadawn (2017).
Early life
Oldfield was born on 15 May 1953 in Reading, Berkshire, to Raymond Oldfield, a general practitioner, and Maureen (née Liston), an Irish woman. He has two elder siblings, sister Sally and brother Terence. When Oldfield was seven, his mother gave birth to a younger brother, David, who had Down syndrome and died in infancy. His mother was prescribed barbiturates, to which she became addicted. She suffered from mental health problems and would spend much of the rest of her life in mental institutions, dying in early 1975, shortly after Oldfield had started writing Ommadawn.
Oldfield attended (what was then called) St. Joseph's Convent School, Highlands Junior School, St. Edward's Preparatory School (still located in Tilehurst Road) and Presentation College (Bath Road), all in Reading. When he was thirteen, the family moved to Harold Wood, then in Essex, and Oldfield attended Hornchurch Grammar School where, having already displayed musical talent, he earned one GCE qualification in English.
Oldfield took up the guitar aged ten, first learning on a 6-string acoustic instrument which his father had given to him. He learned technique by copying parts from songs by folk guitarists Bert Jansch and John Renbourn that he played on a portable record player. He tried to learn musical notation but was a "very, very slow" learner, saying: "If I have to, I can write things down. But I don't like to." By the time he was 12, Oldfield played the electric guitar and performed in local folk and youth clubs and dances, earning as much as £4 per gig. During a six-month break from music that Oldfield had around this time, he took up painting. In May 1968, when Oldfield turned fifteen, his school headmaster requested that he cut his long hair. Oldfield refused and left abruptly. He then decided to pursue music on a full-time, professional basis.
Career
1968–1972: Early career
After leaving school Oldfield accepted an invitation from his sister Sally to form a folk duo The Sallyangie, taking its name from her name and Oldfield's favourite Jansch tune, "Angie". They toured England and Paris and struck a deal with Transatlantic Records, for which they recorded one album, Children of the Sun (1969). After they split in the following year Oldfield suffered a nervous breakdown. He auditioned as bassist for Family in 1969 following the departure of Ric Grech, but the group did not share Roger Chapman's enthusiasm towards Oldfield's performance. Oldfield spent much of the next year living off his father and performing in an electric rock band named Barefoot that included his brother Terry on flute, until the group disbanded in early 1970.
In February 1970, Oldfield auditioned as the bassist in The Whole World, a new backing band that former Soft Machine vocalist Kevin Ayers was putting together. He landed the position despite the bass being a new instrument for him, but he also played occasional lead guitar and later looked back on this time as providing valuable training on the bass. Oldfield went on to play on Ayers's albums Shooting at the Moon (1970) and Whatevershebringswesing (1971), and played mandolin on Edgar Broughton Band (1971). All three albums were recorded at Abbey Road Studios, where Oldfield familiarised himself with a variety of instruments, such as orchestral percussion, piano, Mellotron, and harpsichord, and started to write and put down musical ideas of his own. While doing so Oldfield took up work as a reserve guitarist in a stage production of Hair at the Shaftesbury Theatre, where he played and gigged with Alex Harvey. After ten performances Oldfield grew bored of the job and was fired after he decided to play his part for "Let the Sunshine In" in 7/8 time.
1971–1991: Virgin years
Tubular Bells
By mid-1971, Oldfield had assembled a demo tape containing sections of a longform instrumental piece initially titled "Opus One". Attempts to secure a recording deal to record it professionally came to nothing. In September 1971, Oldfield, now a session musician and bassist for the Arthur Louis Band, attended recording sessions at The Manor Studio near Kidlington, Oxfordshire, owned by businessman Richard Branson and run by engineers Tom Newman and Simon Heyworth. Branson already had several business ventures and was about to launch Virgin Records with Simon Draper. Newman and Heyworth heard some of Oldfield's demos and took them to Branson and Draper, who eventually gave Oldfield one week of recording time at The Manor, after which Oldfield had completed what became "Part One" of his composition, Tubular Bells. He recorded "Part Two" from February to April 1973. Branson agreed to release Tubular Bells as the first record on the Virgin label and secured Oldfield a six-album deal with an additional four albums as optional.
Tubular Bells was released on 25 May 1973. Oldfield played more than twenty different instruments in the multi-layered recording, and its style moved through diverse musical genres. Its 2,630,000 UK sales puts it at No. 34 on the list of the best-selling albums in the country. The title track became a top 10 hit single in the US after the opening was used in the film The Exorcist in 1973. It is today considered to be a forerunner of the new-age music movement.
Hergest Ridge to Incantations
In 1974, Oldfield played the guitar on the critically acclaimed album Rock Bottom by Robert Wyatt.
In late 1974, his follow-up LP, Hergest Ridge, was No. 1 in the UK for three weeks before being dethroned by Tubular Bells. Although Hergest Ridge was released over a year after Tubular Bells, it reached No. 1 first. Tubular Bells spent 11 weeks (10 of them consecutive) at No. 2 before its one week at the top. Like Tubular Bells, Hergest Ridge is a two-movement instrumental piece, this time evoking scenes from Oldfield's Herefordshire country retreat. It was followed in 1975 by the pioneering world music piece Ommadawn released after the death of his mother Maureen.
In 1975, Oldfield recorded a version of the Christmas piece "In Dulci Jubilo" which charted at No. 4 in the UK.
In 1975, Oldfield received a Grammy Award for Best Instrumental Composition for "Tubular Bells – Theme from The Exorcist".
In 1976, Oldfield and his sister joined his friend and band member Pekka Pohjola to play on his album Mathematician's Air Display, which was released in 1977. The album was recorded and edited at Oldfield's Througham Slad Manor in Gloucestershire by Oldfield and Paul Lindsay. Oldfield's 1976 rendition of "Portsmouth" remains his best-performing single on the UK Singles Chart, reaching No. 3.
Oldfield recorded the double album Incantations between December 1977 and September 1978. This introduced more diverse choral performances from Sally Oldfield, Maddy Prior, and the Queen's College Girls Choir. When it was released on 1 December 1978, the album went to No. 14 in the UK and reached platinum certification for 300,000 copies sold.
In June 1978, during the recording of Incantations, Oldfield and his siblings completed a three-day Exegesis seminar, a controversial self-assertiveness program based on Werner Erhard's EST training program. The experience had a significant effect on Oldfield's personality, who recalled that he underwent a "rebirth experience" by reliving past fears. "It was like opening some huge cathedral doors and facing the monster, and I saw that the monster was myself as a newborn infant, because I'd started life in a panic." Following the Exegesis seminar, the formerly reclusive Oldfield granted press interviews, posed nude for a promotional photo shoot for Incantations, and went drinking with news reporters. He had also conquered his fear of flying, gained a pilot's license, and bought his own plane.
In 1979, Oldfield supported Incantations with a European tour that spanned 21 dates between March and May 1979. The tour was documented with the live album and concert film, Exposed. Initially marketed as a limited pressing of 100,000 copies, the album sold strongly enough that Virgin soon abandoned the idea and transferred it to regular production. During the tour Oldfield released the disco-influenced non-album single "Guilty", for which he went to New York City to find the best session musicians and write a song with them in mind. He wrote a chord chart for the song and presented it to the group, who completed it in the studio. Released in April 1979, the song went to No. 22 in the UK and Oldfield performed it on the national television show Top of the Pops.
Oldfield's music was used for the score of The Space Movie (1980), a Virgin Films production that celebrated the tenth anniversary of the Apollo 11 mission. In 1979, he recorded a version of the signature tune for the BBC children's television programme Blue Peter, which was used by the show for 10 years.
Platinum to Heaven's Open
Oldfield's fifth album, Platinum, was released in November 1979 and marked the start of his transition from long compositions towards mainstream and pop music. Oldfield performed across Europe between April and December 1980 with the In Concert 1980 tour.
In 1980, Oldfield released QE2, named after the ocean liner, which features a variety of guest musicians including Phil Collins on drums. This was followed by the European Adventure Tour 1981, during which Oldfield accepted an invitation to perform at a free concert celebrating the wedding of Prince Charles and Lady Diana in Guildhall. He wrote a new track, "Royal Wedding Anthem", for the occasion.
His next album, Five Miles Out, followed in March 1982, featuring the 24-minute track "Taurus II" on side one. The Five Miles Out World Tour 1982 saw Oldfield perform from April to December of that year. Crises (1983) continued the pattern of one long composition paired with shorter songs. The first single from the album, "Moonlight Shadow", with Maggie Reilly on vocals, became Oldfield's most successful single, reaching No. 4 in the UK and No. 1 in nine other countries. The subsequent Crises Tour in 1983 concluded with a concert at Wembley Arena to commemorate the tenth anniversary of Tubular Bells. The next album, Discovery (1984), continued this trend, with "To France" as its first single, and was supported by the Discovery Tour 1984.
Oldfield later turned to film and video, writing the score for Roland Joffé's acclaimed film The Killing Fields and producing substantial video footage for his album Islands. Islands continued what Oldfield had been doing on the past couple of albums, with an instrumental piece on one side and rock/pop singles on the other. Of these, "Islands", sung by Bonnie Tyler, and "Magic Touch", with vocals by Max Bacon (in the US version) and Glasgow vocalist Jim Price (Southside Jimmy) in the rest of the world, were the major hits. In the US "Magic Touch" reached the top 10 on the Billboard album rock charts in 1988. During the 1980s, Oldfield's then-wife, Norwegian singer Anita Hegerland, contributed vocals to many songs including "Pictures in the Dark".
Released in July 1989, Earth Moving features seven vocalists across the album's nine tracks. It is Oldfield's first to consist solely of rock and pop songs, several of which were released as singles: "Innocent" and "Holy" in Europe, and "Hostage" in the US.
For his next instrumental album, Virgin insisted that Oldfield use the title Tubular Bells 2. Oldfield's rebellious response was Amarok, an hour-long work featuring rapidly changing themes, unpredictable bursts of noise and a hidden Morse code insult, stating "Fuck off RB", allegedly directed at Branson. Oldfield did everything in his power to make it impossible to make extracts and Virgin returned the favour by barely promoting the album.
In February 1991, Oldfield released his final album for Virgin, Heaven's Open, under the name "Michael Oldfield". It marks the first time he handled all lead vocals himself. In 2013, Oldfield invited Branson to the opening of St. Andrew's International School of The Bahamas, where two of Oldfield's children were pupils. This was the occasion of the debut of Tubular Bells for Schools, a piano solo adaptation of Oldfield's work.
1992–2003: Warner years
By early 1992, Oldfield had secured Clive Banks as his new manager and had several record label owners listen to his demo of Tubular Bells II at his house. Oldfield signed with Rob Dickins of WEA Warner and recorded the album with Trevor Horn as producer. Released in August 1992, the album went to No. 1 in the UK. Its live premiere followed on 4 September at Edinburgh Castle which was released on home video as Tubular Bells II Live. Oldfield supported the album with his Tubular Bells II 20th Anniversary Tour in 1992 and 1993, his first concert tour since 1984. By April 1993, the album had sold over three million copies worldwide.
Oldfield continued to embrace new musical styles, with The Songs of Distant Earth (based on Arthur C. Clarke's novel of the same name) exhibiting a softer new-age sound. In 1994, he also had an asteroid, 5656 Oldfield, named after him.
In 1995, Oldfield continued to embrace new musical styles by producing the Celtic-themed album Voyager. In 1992, Oldfield met Luar na Lubre, a Galician Celtic-folk band (from A Coruña, Spain), with the singer Rosa Cedrón. The band's popularity grew after Oldfield covered their song "O son do ar" ("The sound of the air") on his Voyager album.
In 1998, Oldfield produced the third Tubular Bells album (also premiered at a concert, this time in Horse Guards Parade, London), drawing on the dance music scene at his then new home on the island of Ibiza. This album was inspired by themes from Tubular Bells, but differed in lacking a clear two-part structure.
During 1999, Oldfield released two albums. The first, Guitars, used guitars as the source for all the sounds on the album, including percussion. The second, The Millennium Bell, consisted of pastiches of a number of styles of music that represented various historical periods over the past millennium. The work was performed live in Berlin for the city's millennium celebrations in 1999–2000.
He added to his repertoire the MusicVR project, combining his music with a virtual reality-based computer game. His first work on this project is Tr3s Lunas launched in 2002, a virtual game where the player can interact with a world full of new music. This project appeared as a double CD, one with the music, and the other with the game.
In 2002 and 2003, Oldfield re-recorded Tubular Bells using modern equipment to coincide with the 30th anniversary of the original. He had wanted to do it years before, but his contract with Virgin kept him from doing so. This new version features John Cleese as the Master of Ceremonies, because Viv Stanshall, who spoke on the original, had died in the interim. Tubular Bells 2003 was released in May 2003.
2004–present: Mercury years
On 12 April 2004 Oldfield launched his next virtual reality project, Maestro, which contains music from the Tubular Bells 2003 album and some new chillout melodies. The games have since been made available free of charge on Tubular.net.
In 2005, Oldfield signed a deal with Mercury Records UK, which acquired the rights to his back catalogue in July 2007, after they had reverted to him. Oldfield released his first album on the Mercury label, Light + Shade, in September 2005. It is a double album of music of contrasting mood: relaxed (Light) and upbeat and moody (Shade). In 2006 and 2007, Oldfield headlined the Night of the Proms tour, consisting of 21 concerts across Europe. Also in 2007, Oldfield released his autobiography, Changeling.
In March 2008 Oldfield released his first classical album, Music of the Spheres; Karl Jenkins assisted with the orchestration. In the first week of release the album topped the UK Classical chart and reached number 9 on the main UK Album Chart. A single "Spheres", featuring a demo version of pieces from the album, was released digitally. The album was nominated for a Classical Brit Award, the NS&I Best Album of 2009.
In 2008, when Oldfield's original 35-year deal with Virgin Records ended, the rights to Tubular Bells and his other Virgin releases were returned to him, and were then transferred to Mercury Records. Mercury announced that his Virgin albums would be reissued with bonus content from 2009. In 2009, Mercury released the compilation album The Mike Oldfield Collection 1974–1983, which went to No. 11 in the UK chart.
In 2008, Oldfield contributed a new track, "Song for Survival", to the charity album Songs for Survival in support of Survival International. Oldfield's daughter Molly played a large part in the project. In 2010, lyricist Don Black said that he had been working with Oldfield. In 2012, Oldfield was featured on Journey into Space, an album by his brother Terry, and on the track "Islanders" by German producer Torsten Stenzel's York project. In 2013, Oldfield and York released a remix album entitled Tubular Beats.
Oldfield performed live at the 2012 Summer Olympics opening ceremony in London. His set included renditions of Tubular Bells, "Far Above the Clouds" and "In Dulci Jubilo" during a segment about the National Health Service. This track appears on the officially released soundtrack album Isles of Wonder. Later in 2012, the compilation album Two Sides: The Very Best of Mike Oldfield, was released which reached No. 6 in the UK.
In October 2013, the BBC broadcast Tubular Bells: The Mike Oldfield Story, a documentary on Oldfield's life and career.
Oldfield's latest rock-themed album of songs, titled Man on the Rocks, was released on 3 March 2014 by Virgin EMI. The album was produced by Steve Lipson. The album marks a return of Oldfield to a Virgin branded label, through the merger of Mercury Records UK and Virgin Records after Universal Music's purchase of EMI. The track "Nuclear" was used for the E3 trailer of Metal Gear Solid V: The Phantom Pain.
In 2015, Oldfield told Steve Wright on his BBC radio show that a sequel album to Tubular Bells was in early development, which he aimed to record on analogue equipment. Later in 2015, Oldfield revealed that he had started on a sequel to Ommadawn. The album, named Return to Ommadawn, was finished in 2016 and released in January 2017. It went to No. 4 in the UK. Oldfield again hinted at a fourth Tubular Bells album when he posted photos of his new equipment, including a new Telecaster guitar.
Musicianship
Although Oldfield considers himself primarily a guitarist, he is also one of popular music's most skilled and diverse multi-instrumentalists. His 1970s recordings were characterised by a very broad variety of instrumentation predominantly played by himself, plus assorted guitar sound treatments to suggest other instrumental timbres (such as the bagpipe, mandolin, "Glorfindel" and varispeed guitars on the original Tubular Bells).
During the 1980s Oldfield became expert in the use of digital synthesizers and sequencers (notably the Fairlight CMI) which began to dominate the sound of his recordings: from the late 1990s onwards, he became a keen user of software synthesizers. He has, however, regularly returned to projects emphasising detailed, manually played and part-acoustic instrumentation (such as 1990's Amarok, 1996's Voyager and 1999's Guitars).
Oldfield has played over forty distinct and different instruments on record, including:
a wide variety of electric and acoustic six-string guitars and bass guitars (plus electric sitar and guitar synthesizer) in a variety of different styles including folk, rock, pop and flamenco and taking in techniques such as bowing
other fretted instruments (banjo, mandolin, bouzouki, ukulele, Chapman Stick)
keyboards (piano, assorted electric/electronic organs and synthesizers, spinet)
electronic instruments (Fairlight CMI plus other digital samplers and sequencers; assorted drum programs, vocoder, software synthesizers)
wind instruments (flageolet, recorder, penny and bass whistles, Northumbrian bagpipes)
free-reed instruments (accordion, melodica)
string instruments (violin, harp, psaltery)
unpitched percussion (including bodhrán, African drums, timpani, rhythm sticks, tambourine, shaker, cabasa)
tuned percussion (tubular bells, glockenspiel, marimba, gong, sleigh bells, bell tree, Rototoms, Simmons electronic drums, triangle)
plucked idiophones (kalimba, jaw harp)
occasional found instruments (such as nutcrackers)
While generally preferring the sound of guest vocalists, Oldfield has frequently sung both lead and backup parts for his songs and compositions. He has also contributed experimental vocal effects such as fake choirs and the notorious "Piltdown Man" impression on Tubular Bells.
Although recognised as a highly skilled guitarist, Oldfield is self-deprecating about his other instrumental skills, describing them as having been developed out of necessity to perform and record the music he composes. He has been particularly dismissive of his violin-playing and singing abilities.
Guitars
Over the years, Oldfield has used a range of guitars. Among the more notable of these are:
1963 Fender Stratocaster Serial no. L08044, in salmon pink (fiesta red). Used by Oldfield from 1984 (the Discovery album) until 2006 (Night of the Proms, rehearsals in Antwerp). It was subsequently sold for £30,000 at Chandler Guitars.
1989 PRS Artist Custom 24 In amber, used by Oldfield from the late 1980s to the present day.
1966 Fender Telecaster Serial no. 180728, in blonde. Previously owned by Marc Bolan, this was the only electric guitar used on Tubular Bells. The guitar was unsold at auction by Bonhams in 2007, 2008 and 2009 at estimated values of, respectively, £25,000–35,000, £10,000–15,000 and £8,000–12,000; Oldfield has since sold it and donated the £6500 received to the charity SANE.
Various Gibson Les Paul, Zemaitis and SG guitars Used extensively by Oldfield in the 1970s and 80s. The most notable Gibson guitar Oldfield favoured in this time period was a 1962 Les Paul/SG Junior model, which was his primary guitar for the recording of Ommadawn, among other works. Oldfield is also known to have owned and used an L6-S during that model's production run in the mid-1970s. On occasion, Oldfield was also seen playing a black Les Paul Custom, an early reissue model built around 1968.
Oldfield used a modified Roland GP8 effects processor in conjunction with his PRS Artist to get many of his heavily overdriven guitar sounds from the Earth Moving album onwards. Oldfield has also been using guitar synthesizers since the mid-1980s, using a 1980s Roland GR-300/G-808 type system, then a 1990s Roland GK2 equipped red PRS Custom 24 (sold in 2006) with a Roland VG8, and most recently a Line 6 Variax.
Oldfield has an unusual playing style, using fingers and long right-hand fingernails and different ways of creating vibrato: a "very fast side-to-side vibrato" and "violinist's vibrato". Oldfield has stated that his playing style originates from his musical roots playing folk music and the bass guitar.
Keyboards
Over the years, Oldfield has owned and used a vast number of synthesizers and other keyboard instruments. In the 1980s, he composed the score for the film The Killing Fields on a Fairlight CMI. Some examples of keyboard and synthesised instruments which Oldfield has made use of include Sequential Circuits Prophet-5s (notably on Platinum and The Killing Fields), Roland JV-1080/JV-2080 units (1990s), a Korg M1 (as seen in the "Innocent" video), a Clavia Nord Lead and Steinway pianos. In recent years, he has also made use of software synthesis products, such as Native Instruments.
Lead vocalists
Oldfield has occasionally sung himself on his records and live performances, sometimes using a vocoder as a resource. It is not unusual for him to collaborate with diverse singers and to hold auditions before deciding the most appropriate for a particular song or album. Featured lead vocalists who have collaborated with him include:
Amar
Jon Anderson
Kevin Ayers
Max Bacon
Rosa Cedrón
Roger Chapman
Pepsi Demacque
Cara Dillon
Anita Hegerland
Sally Oldfield
Barry Palmer
Maddy Prior
Maggie Reilly
Luke Spiller
Chris Thompson
Bonnie Tyler
Recording
Oldfield has self-recorded and produced many of his albums, and played the majority of the featured instruments, largely at his home studios. In the 1990s and 2000s he mainly used DAWs such as Apple Logic, Avid Pro Tools and Steinberg Nuendo as recording suites. For composing orchestral music Oldfield has been quoted as using the software notation program Sibelius running on Apple Macintoshes. He also used the FL Studio DAW on his 2005 double album Light + Shade. Among the mixing consoles Oldfield has owned are an AMS Neve Capricorn 33238, a Harrison Series X, and a Euphonix System 5-MC.
Personal life
Family
Oldfield has been married four times and has seven children. In 1978 he married Diana Fuller, a relative of the Exegesis group leader; the marriage lasted for three months. Oldfield recalled that he phoned Branson the day after the ceremony and said he had made a mistake. From 1979 to 1986, Oldfield was married to Sally Cooper, whom he met through Virgin. They had three children, daughter Molly and sons Dougal (1981–2015) and Luke. Shortly before Luke's birth in 1986, the relationship had broken down and they amicably split. By this time, Oldfield had entered a relationship with Norwegian singer Anita Hegerland, lasting until 1991. The pair had met backstage at one of Oldfield's gigs while touring Germany in 1984. They lived in Switzerland, France, and England. They have two children: Greta and Noah.
In the late 1990s, Oldfield posted in a lonely hearts column in a local Ibiza newspaper. It was answered by Amy Lauer and the pair dated, but the relationship was troubled by Oldfield's bouts of alcohol and substance abuse and it ended after two months. In 2001, Oldfield began counselling and psychotherapy. Between 2002 and 2013, Oldfield was married to Fanny Vandekerckhove, whom he had met while living in Ibiza. They have two sons, Jake and Eugene.
Other
Oldfield and his siblings were raised as Catholic, their mother's faith. He used drugs in his early life including LSD, which he claimed affected his mental health. In the early 1990s Oldfield set up Tonic, a foundation that sponsored people to receive counselling and therapy.
In 1980 Oldfield, a longtime fan of model aircraft, acquired his pilot license. He later became a motorcycle enthusiast and has been inspired to write songs from riding them. He has owned various models, including a BMW R1200GS, Suzuki GSX-R750, Suzuki GSX-R1000, and a Yamaha R1.
Oldfield has lived in Nassau, Bahamas, since 2009 and is a Bahamian citizen. He has also lived in Spain, Ibiza, Los Angeles and Monaco. In 2012, Oldfield stated that he had decided to leave England after feeling that the country had become a "nanny state" with too much surveillance and state control. Oldfield has remarked that while he is within close proximity to other celebrity residents in the Bahamas, he chose not to live within a wealthy gated community with staff and described his lifestyle as "austere."
In 2017, Oldfield expressed support for then US President Donald Trump and said he would have played at Trump's inauguration if he had been invited to do so. In the same interview, he also stated that he was in favour of Brexit and supports Britain's withdrawal from the EU.
Awards and nominations
{| class="wikitable sortable plainrowheaders"
|-
! scope="col" | Award
! scope="col" | Year
! scope="col" | Nominee(s)
! scope="col" | Category
! scope="col" | Result
! scope="col" class="unsortable"|
|-
! scope="row" | APRS Annual Sound Fellowships Lunch
| 2015
| Himself
| Honour Fellowship
|
|
|-
! scope="row" | British Academy Film Awards
| 1985
| The Killing Fields
| Best Original Music
|
|
|-
! scope="row" | Brit Awards
| 1977
| Tubular Bells
| British Album of the Year
|
|
|-
! scope="row" | Golden Globe Awards
| 1985
| The Killing Fields
| Best Original Score
|
|
|-
! scope="row" rowspan=2|Goldene Europa
| 1987
| rowspan=2|Himself
| rowspan=2|Best International Artist
|
| rowspan=2|
|-
| 1998
|
|-
! scope="row" rowspan=2|Grammy Awards
| 1975
| "Tubular Bells"
| Best Instrumental Composition
|
| rowspan=2|
|-
| 1998
| Voyager
| Best New Age Album
|
|-
! scope="row"|Hungarian Music Awards
| 1997
| Voyager
| Best Foreign Album
|
|
|-
! scope="row" | Ivor Novello Awards
| 1984
| "Moonlight Shadow"
| Most Performed Work
|
|
|-
! scope="row" rowspan=3| NME Awards
| 1975
| rowspan=3|Himself
| rowspan=3|Best Miscellaneous Instrumentalist
|
| rowspan=3|
|-
| 1976
|
|-
| 1977
|
|-
! scope="row" | Online Film & Television Association
| 1999
| The X-Files
| Best Music, Original Sci-Fi/Fantasy/Horror Score
|
|
|}
Honours
In 1981 he was awarded the Freedom of the City of London.
Discography
Studio albums
Tubular Bells (1973)
Hergest Ridge (1974)
Ommadawn (1975)
Incantations (1978)
Platinum (1979)
QE2 (1980)
Five Miles Out (1982)
Crises (1983)
Discovery (1984)
The Killing Fields (1984)
Islands (1987)
Earth Moving (1989)
Amarok (1990)
Heaven's Open (1991)
Tubular Bells II (1992)
The Songs of Distant Earth (1994)
Voyager (1996)
Tubular Bells III (1998)
Guitars (1999)
The Millennium Bell (1999)
Tr3s Lunas (2002)
Tubular Bells 2003 (2003)
Light + Shade (2005)
Music of the Spheres (2008)
Man on the Rocks (2014)
Return to Ommadawn (2017)
Concert tours
Tour of Europe 1979 (March–May 1979)
In Concert 1980 (April–December 1980)
European Adventure Tour '81 (March–August 1981)
Five Miles Out World Tour 1982 (April–December 1982)
Crises Tour 1983 (May–July 1983)
Discovery Tour 1984 (August–November 1984)
Tubular Bells II 20th Anniversary Tour (March–October 1993)
Live Then & Now '99 (June–July 1999)
Nokia Night of the Proms (December 2006)
Night of the Proms Spain (March 2007)
Bibliography
Campos, Héctor (2018). Mike Oldfield: La música de los Sueños. Editorial Círculo Rojo.
Capitani, Ettore; Paolucci, Stefano (2020). Mike Oldfield. In Italia. Passamonti Editore.
Musical scores
Copyright 1973. Text written by Karl Dallas. Analysis by David Bedford. The text of this book originally appeared in "Let It Rock" magazine, December 1974, under the title of "Balm for the Walking Dead".
Notes
References
Sources
External links
1953 births
Living people
Bodhrán players
British aviators
British buskers
British composers
British expatriates in Spain
British expatriates in the Bahamas
British male composers
British male guitarists
British multi-instrumentalists
British people of Irish descent
British rock guitarists
British Roman Catholics
British songwriters
Caroline Records artists
Fingerstyle guitarists
Grammy Award winners
Mercury Records artists
Minimalist composers
New-age composers
People educated at Elvian School
People educated at St Joseph's Convent School
People educated at The Highlands School, Reading
People from Reading, Berkshire
Progressive rock guitarists
Reprise Records artists
Virgin Records artists
Vocaloid musicians
Warner Records artists
Mutual recursion

In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or datatypes, are defined in terms of each other. Mutual recursion is very common in functional programming and in some problem domains, such as recursive descent parsers, where the datatypes are naturally mutually recursive.
Examples
Datatypes
The most important basic example of a datatype that can be defined by mutual recursion is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types. Further, it matches many algorithms on trees, which consist of doing one thing with the value, and another thing with the children.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about.
In Standard ML, the tree and forest datatypes can be mutually recursively defined as follows, allowing empty trees:
datatype 'a tree = Empty | Node of 'a * 'a forest
and 'a forest = Nil | Cons of 'a tree * 'a forest
Computer functions
Just as algorithms on recursive datatypes can naturally be given by recursive functions, algorithms on mutually recursive data structures can be naturally given by mutually recursive functions. Common examples include algorithms on trees, and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as using mutual recursion for multitasking. Note that tail call optimization in general (when the function called is not the same as the original function, as in tail-recursive calls) may be more difficult to implement than the special case of tail-recursive call optimization, and thus efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them.
As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope if this is supported. This is particularly useful for sharing state across a set of functions without having to pass parameters between them.
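As a rough Python sketch of this wrapper pattern (the dict-based tree encoding and the function names are illustrative assumptions, not from the article), a node counter whose nested mutually recursive helpers share state through the enclosing scope rather than through parameters:

```python
def count_nodes(root) -> int:
    # Shared state lives in the wrapper's scope, so the nested mutually
    # recursive helpers need not thread it through their parameters.
    count = 0

    def visit_tree(tree) -> None:
        nonlocal count
        count += 1                      # handle the node itself...
        visit_forest(tree["children"])  # ...then recurse into its forest

    def visit_forest(forest) -> None:
        for tree in forest:
            visit_tree(tree)

    visit_tree(root)
    return count
```

Here trees are assumed to be dicts with a "children" list; only the wrapper is visible to callers, and the helpers exist solely within its scope.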
Basic examples
A standard example of mutual recursion, which is admittedly artificial, determines whether a non-negative number is even or odd by defining two separate functions that call each other, decrementing by 1 each time. In C:
bool is_even(unsigned int n) {
if (n == 0)
return true;
else
return is_odd(n - 1);
}
bool is_odd(unsigned int n) {
if (n == 0)
return false;
else
return is_even(n - 1);
}
These functions are based on the observation that the question "is 4 even?" is equivalent to "is 3 odd?", which is in turn equivalent to "is 2 even?", and so on down to 0. This example is mutual single recursion, and could easily be replaced by iteration. In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary to execute in constant stack space. In C, this would take O(n) stack space, unless rewritten to use jumps instead of calls. This could be reduced to a single recursive function is_even. In that case, is_odd, which could be inlined, would call is_even, but is_even would only call itself.
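The reduction described above can be sketched in Python (a hedged illustration of inlining is_odd into is_even, not the article's own code):

```python
def is_even(n: int) -> bool:
    if n == 0:
        return True
    # The former is_odd(n - 1) call, inlined: odd of 0 is false,
    # otherwise odd(m) defers back to is_even(m - 1).
    if n - 1 == 0:
        return False
    return is_even(n - 2)

def is_odd(n: int) -> bool:
    # is_odd survives only as a non-recursive wrapper;
    # all the recursion now happens inside is_even.
    return not is_even(n)
```

After the inlining, is_even calls only itself, turning the mutual recursion into direct recursion.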
As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can be split up into two mutually recursive functions, one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for the tree in the forest. In Python:
def f_tree(tree) -> None:
f_value(tree.value)
f_forest(tree.children)
def f_forest(forest) -> None:
for tree in forest:
f_tree(tree)
In this case the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion.
Using the Standard ML datatype above, the size of a tree (number of nodes) can be computed via the following mutually recursive functions:
fun size_tree Empty = 0
| size_tree (Node (_, f)) = 1 + size_forest f
and size_forest Nil = 0
| size_forest (Cons (t, f')) = size_tree t + size_forest f'
A more detailed example in Scheme, counting the leaves of a tree:
(define (count-leaves tree)
(if (leaf? tree)
1
(count-leaves-in-forest (children tree))))
(define (count-leaves-in-forest forest)
(if (null? forest)
0
(+ (count-leaves (car forest))
(count-leaves-in-forest (cdr forest)))))
These examples reduce easily to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions.
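A minimal Python sketch of this inlining, counting leaves with a single directly recursive function (the dict-based tree encoding is an assumption for illustration):

```python
def count_leaves(tree) -> int:
    # The forest function's loop has been inlined into the tree function,
    # so one function both tests for a leaf and recurses on the children.
    if not tree["children"]:  # no children: this node is a leaf
        return 1
    return sum(count_leaves(child) for child in tree["children"])
```

Compare the earlier Scheme version, where the loop over the forest lived in a second, mutually recursive function.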
Advanced examples
A more complicated example is given by recursive descent parsers, which can be naturally implemented by having one function for each production rule of a grammar, which then mutually recurse; this will in general be multiple recursion, as production rules generally combine multiple parts. This can also be done without mutual recursion, for example by still having separate functions for each production rule, but having them called by a single controller function, or by putting all the grammar in a single function.
Mutual recursion can also implement a finite-state machine, with one function for each state, and single recursion in changing state; this requires tail call optimization if the number of state changes is large or unbounded. This can be used as a simple form of cooperative multitasking. A similar approach to multitasking is to instead use coroutines which call each other, where rather than terminating by calling another routine, one coroutine yields to another but does not terminate, and then resumes execution when it is yielded back to. This allows individual coroutines to hold state, without it needing to be passed by parameters or stored in shared variables.
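A small Python sketch of a finite-state machine built from mutually recursive functions, one per state, here deciding whether a binary string contains an even number of '1' characters (the state names and accepting convention are illustrative assumptions; note that Python lacks tail call optimization, so, as the paragraph's caveat warns, this form only suits short inputs):

```python
def state_even(s: str) -> bool:
    # Accepting state: an even number of 1s seen so far.
    if not s:
        return True
    # A '1' flips parity (a state change); any other symbol stays put.
    return state_odd(s[1:]) if s[0] == "1" else state_even(s[1:])

def state_odd(s: str) -> bool:
    # Rejecting state: an odd number of 1s seen so far.
    if not s:
        return False
    return state_even(s[1:]) if s[0] == "1" else state_odd(s[1:])
```

Each call is a tail call into the function for the next state, which is exactly the pattern that tail call optimization would run in constant stack space.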
There are also some algorithms which naturally have two phases, such as minimax (min and max), which can be implemented by having each phase in a separate function with mutual recursion, though they can also be combined into a single function with direct recursion.
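For instance, the two phases of minimax can be sketched as a pair of mutually recursive Python functions over a toy game tree encoded as nested lists with numeric leaf scores (the encoding is an assumption for illustration):

```python
def max_value(node):
    # Leaves are plain numbers: the score of a finished position.
    if not isinstance(node, list):
        return node
    # The maximizing phase defers to the minimizing phase for each child.
    return max(min_value(child) for child in node)

def min_value(node):
    if not isinstance(node, list):
        return node
    # ...and the minimizing phase defers back to the maximizing phase.
    return min(max_value(child) for child in node)

game = [[3, 5], [2, 9]]  # maximizer picks a branch; minimizer picks within it
print(max_value(game))   # 3: min(3, 5) = 3 beats min(2, 9) = 2
```

Combining both phases into one function with a "maximizing" flag gives the equivalent directly recursive form.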
Mathematical functions
In mathematics, the Hofstadter Female and Male sequences are an example of a pair of integer sequences defined in a mutually recursive manner.
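These sequences are defined by F(0) = 1, M(0) = 0, F(n) = n − M(F(n − 1)), and M(n) = n − F(M(n − 1)); a direct Python transcription (memoized here with lru_cache purely to keep repeated calls cheap):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n: int) -> int:
    # Female sequence: F(0) = 1, F(n) = n - M(F(n - 1))
    return 1 if n == 0 else n - M(F(n - 1))

@lru_cache(maxsize=None)
def M(n: int) -> int:
    # Male sequence: M(0) = 0, M(n) = n - F(M(n - 1))
    return 0 if n == 0 else n - F(M(n - 1))

print([F(n) for n in range(10)])  # [1, 1, 2, 2, 3, 3, 4, 5, 5, 6]
print([M(n) for n in range(10)])  # [0, 0, 1, 2, 2, 3, 4, 4, 5, 6]
```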
Fractals can be computed (up to a given resolution) by recursive functions. This can sometimes be done more elegantly via mutually recursive functions; the Sierpiński curve is a good example.
Prevalence
Mutual recursion is very common in functional programming, and is often used for programs written in LISP, Scheme, ML, and similar programming languages. For example, Abelson and Sussman describe how a meta-circular evaluator can be used to implement LISP with an eval-apply cycle. In languages such as Prolog, mutual recursion is almost unavoidable.
Some programming styles discourage mutual recursion, claiming that it can be confusing to distinguish the conditions which will return an answer from the conditions that would allow the code to run forever without producing an answer. Peter Norvig points to a design pattern which discourages the use entirely, stating:
Terminology
Mutual recursion is also known as indirect recursion, by contrast with direct recursion, where a single function calls itself directly. This is simply a difference of emphasis, not a different notion: "indirect recursion" emphasises an individual function, while "mutual recursion" emphasises the set of functions, and does not single out an individual function. For example, if f calls itself, that is direct recursion. If instead f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.
Conversion to direct recursion
Mathematically, a set of mutually recursive functions are primitive recursive, which can be proven by course-of-values recursion, building a single function F that lists the values of the individual recursive functions in order, such as f(0), g(0), f(1), g(1), ..., and rewriting the mutual recursion as a primitive recursion.
Any mutual recursion between two procedures can be converted to direct recursion by inlining the code of one procedure into the other. If there is only one site where one procedure calls the other, this is straightforward, though if there are several it can involve code duplication. In terms of the call stack, two mutually recursive procedures yield a stack ABABAB..., and inlining B into A yields the direct recursion (AB)(AB)(AB)...
Alternately, any number of procedures can be merged into a single procedure that takes as argument a variant record (or algebraic data type) representing the selection of a procedure and its arguments; the merged procedure then dispatches on its argument to execute the corresponding code and uses direct recursion to call self as appropriate. This can be seen as a limited application of defunctionalization. This translation may be useful when any of the mutually recursive procedures can be called by outside code, so there is no obvious case for inlining one procedure into the other. Such code then needs to be modified so that procedure calls are performed by bundling arguments into a variant record as described; alternately, wrapper procedures may be used for this task.
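A rough Python stand-in for this variant-record merge, applied to the earlier even/odd pair (a string tag plays the role of the variant record's discriminant; the names are illustrative):

```python
def parity(tag: str, n: int) -> bool:
    # One merged procedure: dispatch on the tag, then use direct
    # recursion on itself where the original pair called each other.
    if tag == "even":
        return True if n == 0 else parity("odd", n - 1)
    else:  # tag == "odd"
        return False if n == 0 else parity("even", n - 1)

# Wrapper procedures bundle the arguments for outside callers,
# as the paragraph suggests.
def is_even(n: int) -> bool:
    return parity("even", n)

def is_odd(n: int) -> bool:
    return parity("odd", n)
```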
See also
Cycle detection (graph theory)
Recursion (computer science)
Circular dependency
References
External links
Mutual recursion at Rosetta Code
"Example demonstrating good use of mutual recursion", "Are there any example of Mutual recursion?", Stack Overflow
Theory of computation
Recursion
Metasyntactic variable

A metasyntactic variable is a specific word or set of words identified as a placeholder in computer science and specifically computer programming. These words are commonly found in source code and are intended to be modified or substituted before real-world usage. The words foo and bar are good examples as they are used in over 330 Internet Engineering Task Force Requests for Comments, the documents which define foundational internet technologies like HTTP (web), TCP/IP, and email protocols.
By mathematical analogy, a metasyntactic variable is a word that is a variable for other words, just as in algebra letters are used as variables for numbers.
Metasyntactic variables are used to name entities such as variables, functions, and commands whose exact identity is unimportant and serve only to demonstrate a concept, which is useful for teaching programming.
Common metasyntactic variables
Due to English being the foundation-language, or lingua franca, of most computer programming languages, these variables are commonly seen even in programs and examples of programs written for other spoken-language audiences.
The typical names may depend however on the subculture that has developed around a given programming language.
General usage
Metasyntactic variables used commonly across all programming languages include foobar, foo, bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy, and thud; several of these words are references to the game Colossal Cave Adventure.
A complete reference can be found in an MIT Press book titled The Hacker's Dictionary.
Japanese
In Japanese, the words hoge (ほげ) and piyo (ぴよ) are commonly used, with other common words and variants being fuga (ふが), hogera (ほげら), and hogehoge (ほげほげ). Note that -ra is a pluralizing ending in Japanese, and reduplication is also used for pluralizing. The origin of hoge as a metasyntactic variable is not known, but it is believed to date to the early 1980s.
French
In France, the word toto is widely used, with variants tata, titi, tutu as related placeholders. One commonly-raised source for the use of toto is a reference to the stock character used to tell jokes with Tête à Toto.
Usage examples
C
In the following example the function name foo and the variable name bar are both metasyntactic variables. Lines beginning with // are comments.
// The function named foo
int foo(void)
{
// Declare the variable bar and set the value to 1
int bar = 1;
return bar;
}
C++
Function prototypes with examples of different argument passing mechanisms:
void Foo(Fruit bar);
void Foo(Fruit* bar);
void Foo(const Fruit& bar);
Example showing the function overloading capabilities of the C++ language
void Foo(int bar);
void Foo(int bar, int baz);
void Foo(int bar, int baz, int qux);
Python
Spam, ham, and eggs are the principal metasyntactic variables used in the Python programming language. This is a reference to the famous comedy sketch, "Spam", by Monty Python, the eponym of the language.
In the following example spam, ham, and eggs are metasyntactic variables and lines beginning with # are comments.
# Define a function named spam
def spam():
# Define the variable ham
ham = "Hello World!"
# Define the variable eggs
eggs = 1
return
IETF Requests for Comments
Both the IETF RFCs and computer programming languages are rendered in plain text, making it necessary to distinguish metasyntactic variables by a naming convention, since it would not be obvious from context.
Here is an example from the official IETF document explaining the e-mail protocols (from RFC 772 - cited in RFC 3092):
All is well; now the recipients can be specified.
S: MRCP TO:<Foo@Y> <CRLF>
R: 200 OK
S: MRCP TO:<Raboof@Y> <CRLF>
R: 553 No such user here
S: MRCP TO:<bar@Y> <CRLF>
R: 200 OK
S: MRCP TO:<@Y,@X,fubar@Z> <CRLF>
R: 200 OK
Note that the failure of "Raboof" has no effect on the storage of
mail for "Foo", "bar" or the mail to be forwarded to "fubar@Z"
through host "X".
(The documentation for texinfo emphasizes the distinction between metavariables and mere variables used in a programming language being documented in some texinfo file as: "Use the @var command to indicate metasyntactic variables. A metasyntactic variable is something that stands for another piece of text. For example, you should use a metasyntactic variable in the documentation of a function to describe the arguments that are passed to that function. Do not use @var for the names of particular variables in programming languages. These are specific names from a program, so @code is correct for them.")
Another point reflected in the above example is the convention that a metavariable is to be uniformly substituted with the same instance in all its appearances in a given schema. This is in contrast with nonterminal symbols in formal grammars where the nonterminals on the right of a production can be substituted by different instances.
Example data
SQL
It is common to use the name ACME in example SQL Databases and as placeholder company-name for the purpose of teaching. The term 'ACME Database' is commonly used to mean a training or example-only set of database data used solely for training or testing.
ACME is also commonly used in documentation which shows SQL usage examples, a common practice in many educational texts as well as technical documentation from companies such as Microsoft and Oracle.
See also
Metavariable (logic)
xyzzy
Alice and Bob
John Doe
Fnord
Free variables and bound variables
Gadget
Lorem ipsum
Nonce word
Placeholder name
Widget
Smurf
References
External links
Definition of metasyntactic variable, with examples.
Examples of metasyntactic variables used in Commonwealth Hackish, such as wombat.
Variable "foo" and Other Programming Oddities
Placeholder names
Metalogic
Variable (computer science)
Syntax (logic)
Mondegreen

A mondegreen is a mishearing or misinterpretation of a phrase in a way that gives it a new meaning. Mondegreens are most often created by a person listening to a poem or a song; the listener, being unable to hear a lyric clearly, substitutes words that sound similar and make some kind of sense. American writer Sylvia Wright coined the term in 1954, writing that as a girl, when her mother read to her from Thomas Percy's 1765 book Reliques of Ancient English Poetry, she had misheard the lyric "layd him on the green" as "Lady Mondegreen" in the fourth line of the Scottish ballad "The Bonny Earl of Murray".
"Mondegreen" was included in the 2000 edition of the Random House Webster's College Dictionary, and in the Oxford English Dictionary in 2002. Merriam-Webster's Collegiate Dictionary added the word in 2008.
Etymology
In a 1954 essay in Harper's Magazine, Wright described how, as a young girl, she misheard the last line of the first stanza from the seventeenth-century ballad The Bonnie Earl O' Moray. She wrote:
The correct fourth line is, "And laid him on the green". Wright explained the need for a new term:
Psychology
People are more likely to notice what they expect than things not part of their everyday experiences; this is known as confirmation bias. Similarly, one may mistake an unfamiliar stimulus for a familiar and more plausible version. For example, to consider a well-known mondegreen in the song "Purple Haze", one would be more likely to hear Jimi Hendrix singing that he is about to kiss this guy than that he is about to kiss the sky. Similarly, if a lyric uses words or phrases that the listener is unfamiliar with, they may be misheard as using more familiar terms.
The creation of mondegreens may be driven in part by cognitive dissonance, as the listener finds it psychologically uncomfortable to listen to a song and not make out the words. Steven Connor suggests that mondegreens are the result of the brain's constant attempts to make sense of the world by making assumptions to fill in the gaps when it cannot clearly determine what it is hearing. Connor sees mondegreens as the "wrenchings of nonsense into sense". This dissonance will be most acute when the lyrics are in a language in which the listener is fluent.
On the other hand, Steven Pinker has observed that mondegreen mishearings tend to be less plausible than the original lyrics, and that once a listener has "locked in" to a particular misheard interpretation of a song's lyrics, it can remain unquestioned, even when that plausibility becomes strained (see mumpsimus). Pinker gives the example of a student "stubbornly" mishearing the chorus to "Venus" ("I'm your Venus") as "I'm your penis", and being surprised that the song was allowed on the radio. The phenomenon may, in some cases, be triggered by people hearing "what they want to hear", as in the case of the song "Louie Louie": parents heard obscenities in the Kingsmen recording where none existed.
James Gleick claims that the mondegreen is a distinctly modern phenomenon. Without the improved communication and language standardization brought about by radio, he believes there would have been no way to recognize and discuss this shared experience. Just as mondegreens transform songs based on experience, a folk song learned by repetition often is transformed over time when sung by people in a region where some of the song's references have become obscure. A classic example is "The Golden Vanity", which contains the line "As she sailed upon the lowland sea". British immigrants carried the song to Appalachia, where singers, not knowing what the term lowland sea refers to, transformed it over generations from "lowland" to "lonesome".
Notable examples
Notable collections
The classicist and linguist Steve Reece has collected examples of English mondegreens in song lyrics, religious creeds and liturgies, commercials and advertisements, and jokes and riddles. He has used this collection to shed light on the process of "junctural metanalysis" during the oral transmission of the ancient Greek epics, the Iliad and Odyssey.
In songs
The national anthem of the United States is highly susceptible to the creation of mondegreens, with two in the first line alone. Francis Scott Key's "Star-Spangled Banner" begins with the line "O say can you see, by the dawn's early light". This has been accidentally and deliberately misinterpreted as "José, can you see", another example of the Hobson-Jobson effect, countless times. The second half of the line has been misheard as well, as "by the donzerly light", or other variants. This has led to many people believing that "donzerly" is an actual word.
Religious songs, learned by ear (and often by children), are another common source of mondegreens. The most-cited example is "Gladly, the cross-eyed bear" (from the line in the hymn "Keep Thou My Way" by Fanny Crosby and Theodore E. Perkins, "Kept by Thy tender care, gladly the cross I'll bear"). Jon Carroll and many others quote it as "Gladly the cross I'd bear"; also, here, hearers are confused by the sentence with the unusual object-subject-verb (OSV) word order.
Mondegreens expanded as a phenomenon with radio, and, especially, the growth of rock and roll (and even more so with rap). Amongst the most-reported examples are:
"There's a bathroom on the right" (the line at the end of each verse of "Bad Moon Rising" by Creedence Clearwater Revival: "There's a bad moon on the rise").
"Scuse me while I kiss this guy" (from a lyric in the song "Purple Haze" by The Jimi Hendrix Experience: "'Scuse me while I kiss the sky").
"The girl with colitis goes by" (from a lyric in the Beatles song "Lucy in the Sky with Diamonds": "The girl with kaleidoscope eyes")
Both Creedence's John Fogerty and Hendrix eventually acknowledged these mishearings by deliberately singing the "mondegreen" versions of their songs in concert.
"Blinded by the Light", a cover of a Bruce Springsteen song by Manfred Mann's Earth Band, contains what has been called "probably the most misheard lyric of all time". The phrase "revved up like a deuce", altered from Springsteen's original "cut loose like a deuce", both lyrics referring to the hot rodders slang deuce (short for deuce coupé) for a 1932 Ford coupé, is frequently misheard as "wrapped up like a douche". Springsteen himself has joked about the phenomenon, claiming that it was not until Manfred Mann rewrote the song to be about a "feminine hygiene product" that the song became popular.
Another commonly-cited example of a song susceptible to mondegreens is Nirvana's "Smells Like Teen Spirit", with the line "here we are now, entertain us" variously being misinterpreted as "here we are now, in containers", and "here we are now, hot potatoes", amongst other renditions.
Rap and hip hop lyrics may be particularly susceptible to being misheard because they do not necessarily follow standard pronunciations. The delivery of rap lyrics relies heavily upon an often regional pronunciation or non-traditional accenting of words and their phonemes to adhere to the artist's stylizations and the lyrics' written structure. This issue is exemplified in controversies over alleged transcription errors in Yale University Press's 2010 Anthology of Rap.
Standardized and recorded mondegreens
Sometimes, the modified version of a lyric becomes standard, as is the case with "The Twelve Days of Christmas". The original has "four colly birds" (colly means black; cf. A Midsummer Night's Dream: "Brief as the lightning in the collied night".); by the turn of the twentieth century, these became calling birds, which is the lyric used in the 1909 Frederic Austin version.
A number of misheard lyrics have been recorded, turning a mondegreen into a real title. The song "Sea Lion Woman", recorded in 1939 by Christine and Katherine Shipp, was performed by Nina Simone under the title, "See Line Woman". According to the liner notes from the compilation A Treasury of Library of Congress Field Recordings, the correct title of this playground song might also be "See [the] Lyin' Woman" or "C-Line Woman". Jack Lawrence's misinterpretation of the French phrase "pauvre Jean" ("poor John") as the identically pronounced "pauvres gens" ("poor people") led to the translation of La Goualante du pauvre Jean ("The Ballad of Poor John") as "The Poor People of Paris", a hit song in 1956.
In literature
A Monk Swimming by author Malachy McCourt is so titled because of a childhood mishearing of a phrase from the Catholic rosary prayer, Hail Mary. "Amongst women" became "a monk swimmin'".
The title and plot of the short science fiction story "Come You Nigh: Kay Shuns" ("Com-mu-ni-ca-tions") by Lawrence A. Perkins, in Analog Science Fiction and Fact magazine (April 1970), deals with securing interplanetary radio communications by encoding them with mondegreens.
Olive, the Other Reindeer is a 1997 children's book by Vivian Walsh, which borrows its title from a mondegreen of the line, "all of the other reindeer" in the song "Rudolph the Red-Nosed Reindeer". The book was adapted into an animated Christmas special in 1999.
The travel guide book series Lonely Planet is named after the misheard phrase "lovely planet" sung by Joe Cocker in Matthew Moore's song "Space Captain".
In film
A monologue of mondegreens appears in the 1971 film Carnal Knowledge. The camera focuses on actress Candice Bergen laughing as she recounts various phrases that fooled her as a child, including "Round John Virgin" (instead of “‘Round yon virgin...”) and "Gladly, the cross-eyed bear" (instead of “Gladly the cross I’d bear”). The title of the 2013 film Ain't Them Bodies Saints is a misheard lyric from a folk song; director David Lowery decided to use it because it evoked the "classical, regional" feel of 1970s rural Texas.
In the 1994 film The Santa Clause, a child identifies a ladder that Santa uses to get to the roof from its label: The Rose Suchak Ladder Company. He states that this is "just like the poem", misinterpreting "out on the lawn there arose such a clatter" from A Visit from St. Nicholas as "Out on the lawn, there's a Rose Suchak ladder".
In television
Mondegreens have been used in many television advertising campaigns, including:
An advertisement for the 2012 Volkswagen Passat touting the car's audio system shows a number of people singing incorrect versions of the line "Burning out his fuse up here alone" from the Elton John/Bernie Taupin song "Rocket Man", until a woman listening to the song in a Passat realizes the correct words.
A 2002 advertisement for T-Mobile shows spokeswoman Catherine Zeta-Jones helping to correct a man who has misunderstood the chorus of Def Leppard's "Pour Some Sugar On Me" as "pour some shook up ramen".
A series of advertisements for Maxell audio cassette tapes, produced by Howell Henry Chaldecott Lury, shown in 1989 and 1990, featured misheard versions of "Israelites" (e.g., "Me ears are alight") by Desmond Dekker and "Into the Valley" by The Skids as heard by users of other brands of tape.
A 1987 series of advertisements for Kellogg's Nut 'n Honey Crunch featured a joke in which one person asks "What's for breakfast?" and is told "Nut 'N' Honey", which is misheard as "Nothing, honey".
Other notable examples
The traditional game Chinese whispers ("Telephone" or "Gossip" in North America) involves mishearing a whispered sentence to produce successive mondegreens that gradually distort the original sentence as it is repeated by successive listeners.
Among schoolchildren in the US, daily rote recitation of the Pledge of Allegiance has long provided opportunities for the genesis of mondegreens.
Speech-to-text functionality in modern smartphone messaging apps and search or assist functions may be hampered by faulty speech recognition. It has been noted that in text messaging, users often leave uncorrected mondegreens as a joke or puzzle for the recipient to solve. This wealth of mondegreens has proven to be a fertile ground for study by speech scientists and psychologists.
Reverse mondegreen
A reverse mondegreen is the intentional production, in speech or writing, of words or phrases that seem to be gibberish but disguise meaning. A prominent example is Mairzy Doats, a 1943 novelty song by Milton Drake, Al Hoffman, and Jerry Livingston. The lyrics are a reverse mondegreen, made up of same-sounding words or phrases (sometimes also referred to as "oronyms"), so pronounced (and written) as to challenge the listener (or reader) to interpret them:
Mairzy doats and dozy doats and liddle lamzy divey
A kiddley divey too, wouldn't you?
The clue to the meaning is contained in the bridge of the song:
If the words sound queer and funny to your ear, a little bit jumbled and jivey,
Sing "Mares eat oats and does eat oats and little lambs eat ivy."
This makes it clear that the last line is "A kid'll eat ivy, too; wouldn't you?"
Deliberate mondegreen
Two authors have written books of supposed foreign-language poetry that are actually mondegreens of nursery rhymes in English. Luis van Rooten's pseudo-French Mots D'Heures: Gousses, Rames includes critical, historical, and interpretive apparatus, as does John Hulme's Mörder Guss Reims, attributed to a fictitious German poet. Both titles sound like the phrase "Mother Goose Rhymes". Both works can also be considered soramimi, which produces different meanings when interpreted in another language. The genre of animutation is based on deliberate mondegreen.
Wolfgang Amadeus Mozart produced a similar effect in his canon "Difficile Lectu" (Difficult to Read), which, though ostensibly in Latin, is actually an opportunity for scatological humor in both German and Italian.
Some performers and writers have used deliberate mondegreens to create double entendres. The phrase "if you see Kay" (F-U-C-K) has been employed many times, notably as a line from James Joyce's 1922 novel Ulysses.
"Mondegreen" is a song by Yeasayer on their 2010 album, Odd Blood. The lyrics are intentionally obscure (for instance, "Everybody sugar in my bed" and "Perhaps the pollen in the air turns us into a stapler") and spoken hastily to encourage the mondegreen effect.
Anguish Languish is an ersatz language created by Howard L. Chace. A play on the words "English Language," it is based on homophonic transformations of English words and consists entirely of deliberate mondegreens that seem nonsensical in print but are readily understood when spoken aloud. A notable example is the story "Ladle Rat Rotten Hut" ("Little Red Riding Hood"), which appears in his collection of stories and poems, Anguish Languish (Prentice-Hall, 1956).
Related linguistic phenomena
Closely related categories are Hobson-Jobson, where a word from a foreign language is homophonically translated into one's own language, e.g. "cockroach" from Spanish cucaracha, and soramimi, a Japanese term for deliberate homophonic misinterpretation of words for humor.
An unintentionally incorrect use of similar-sounding words or phrases, resulting in a changed meaning, is a malapropism. If there is a connection in meaning, it may be called an eggcorn. If a person stubbornly continues to mispronounce a word or phrase after being corrected, that person has committed a mumpsimus.
Earworm
Eggcorn
Holorime
Homophonic translation
Hypercorrection
Phono-semantic matching
Spoonerism
Syntactic ambiguity
Non-English languages
Croatian
Queen's song "Another one bites the dust" has a long-standing history as a mondegreen in Croatian, misheard as Radovan baca daske which means "Radovan (personal name) throws planks". This might also be a soramimi.
Dutch
In Dutch, mondegreens are popularly referred to as Mama appelsap ("Mommy applejuice"), from the Michael Jackson song Wanna Be Startin' Somethin' which features the lyrics Mama-se mama-sa ma-ma-coo-sa, and was once misheard as Mama say mama sa mam[a]appelsap. The Dutch radio station 3FM had a show Superrradio (originally Timur Open Radio) run by Timur Perlin and Ramon with an item in which listeners were encouraged to send in mondegreens under the name "Mama appelsap". The segment was popular for years.
French
In French, the phenomenon is also known as hallucination auditive, especially when referring to pop songs.
The title of the film La Vie en rose ("Life in pink") depicting the life of Édith Piaf can be mistaken for L'Avion rose ("The pink airplane").
The title of the 1983 French novel Le Thé au harem d'Archi Ahmed ("Tea in the Harem of Archi Ahmed") by Mehdi Charef (and the 1985 movie of the same name) is based on the main character mishearing le théorème d'Archimède ("the theorem of Archimedes") in his mathematics class.
A classic example in French is similar to the "Lady Mondegreen" anecdote: in his 1962 collection of children's quotes La Foire aux cancres, the humorist Jean-Charles refers to a misunderstood lyric of "La Marseillaise" (the French national anthem): Entendez-vous ... mugir ces féroces soldats ("Do you hear those savage soldiers roar?") is misheard as ...Séféro, ce soldat ("that soldier Séféro").
German
Mondegreens are a well-known phenomenon in German, especially where non-German songs are concerned. They are sometimes called, after a well-known example, Agathe Bauer-songs ("I got the power", a song by Snap!, misinterpreted as a German female name). Journalist Axel Hacke published a series of books about them, beginning with Der weiße Neger Wumbaba ("The White Negro Wumbaba", a mishearing of the line der weiße Nebel wunderbar from "Der Mond ist aufgegangen").
According to urban legend, children's paintings of nativity scenes occasionally include, next to the Child, Mary, Joseph, and so on, an additional laughing creature known as the Owi. The reason is to be found in the line Gottes Sohn! O wie lacht / Lieb' aus Deinem göttlichen Mund ("God's Son! Oh, how does love laugh out of Thy divine mouth!") from the song "Silent Night". The subject is Lieb, a poetic contraction of die Liebe, leaving off the final -e and the definite article, so that the phrase might be misunderstood as being about a person named Owi laughing "in a loveable manner". Owi lacht has been used as the title of at least one book about Christmas and Christmas songs.
Hebrew
Ghil'ad Zuckermann mentions the example mukhrakhím liyót saméakh (which means "we must be happy", with a grammatical error) as a mondegreen of the original úru 'akhím belév saméakh (which means "wake up, brothers, with a happy heart"). Although this line is taken from the extremely well-known song "Háva Nagíla" ("Let's be happy"), given the Hebrew high register of úru ("wake up!"), Israelis often mishear it.
An Israeli site dedicated to Hebrew mondegreens has coined the term avatiach (Hebrew for "watermelon") for "mondegreen", named for a common mishearing of Shlomo Artzi's award-winning 1970 song "Ahavtia" ("I loved her", using a form uncommon in spoken Hebrew).
Polish
A paper in phonology cites memoirs of the poet Antoni Słonimski, who confessed that in the recited poem Konrad Wallenrod he used to hear zwierz Alpuhary ("a beast of Alpujarras") rather than z wież Alpuhary ("from the towers of Alpujarras").
Russian
In 1875 Fyodor Dostoyevsky cited a line from Fyodor Glinka's song "Troika" (1825), колокольчик, дар Валдая ("the bell, gift of Valday"), stating that it is usually understood as колокольчик, дарвалдая ("the bell darvaldaying" — supposedly an onomatopoeia of ringing sounds).
See also
Am I Right – website with a large collection of misheard lyrics
Ambiguity
Bushism
Folk etymology
Mad Gab
McGurk effect
Pareidolia
Parody music
Subverted rhyme
Yanny or Laurel
Notes and references
Notes
Citations
Further reading
Connor, Steven. Earslips: Of Mishearings and Mondegreens, 2009.
Edwards, Gavin. Scuse Me While I Kiss This Guy, 1995.
Edwards, Gavin. When a Man Loves a Walnut, 1997.
Edwards, Gavin. He's Got the Whole World in His Pants, 1996.
Edwards, Gavin. Deck The Halls With Buddy Holly, 1998.
Gwynne, Fred. Chocolate Moose for Dinner, 1988.
Norman, Philip. Your Walrus Hurt the One You Love: malapropisms, mispronunciations, and linguistic cock-ups, 1988.
External links
Snopes.com: "The Lady and the Mondegreen" (misheard Christmas songs).
Pamela Licalzi O'Connell: "Sweet Slips Of the Ear: Mondegreens", New York Times, 9 April 1998.
Merge sort

In computer science, merge sort (also commonly spelled as mergesort) is an efficient, general-purpose, and comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. Merge sort is a divide-and-conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine and von Neumann as early as 1948.
Algorithm
Conceptually, a merge sort works as follows:
Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
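The two conceptual steps above can be sketched in a few lines of Python (a minimal illustration only, not the article's index-based C-like version shown below):

```python
def merge_sort(lst):
    """Sort lst by recursive halving and merging (stable)."""
    if len(lst) <= 1:
        return lst          # a list of one element is considered sorted
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # Merge the two sorted halves, preferring the left on ties (stability).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

This version allocates fresh lists at every level for clarity; the implementations below avoid that overhead by working with indices into shared buffers.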
Top-down implementation
Example C-like code using indices for the top-down merge sort algorithm that recursively splits the list (called runs in this example) into sublists until sublist size is 1, then merges those sublists to produce a sorted list. The copy-back step is avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy, which can be avoided too). To help understand this, consider an array with two elements. The elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom of the recursion level is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion.
// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
CopyArray(A, 0, n, B); // one time copy of A[] to B[]
TopDownSplitMerge(B, 0, n, A); // sort data from B[] into A[]
}
// Split A[] into 2 runs, sort both runs into B[], merge both runs from B[] to A[]
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
if (iEnd - iBegin <= 1) // if run size == 1
return; // consider it sorted
// split the run longer than 1 item into halves
iMiddle = (iEnd + iBegin) / 2; // iMiddle = mid point
// recursively sort both runs from array A[] into B[]
TopDownSplitMerge(A, iBegin, iMiddle, B); // sort the left run
TopDownSplitMerge(A, iMiddle, iEnd, B); // sort the right run
// merge the resulting runs from array B[] into A[]
TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}
// Left source half is A[ iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1 ].
// Result is B[ iBegin:iEnd-1 ].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
i = iBegin, j = iMiddle;
// While there are elements in the left or right runs...
for (k = iBegin; k < iEnd; k++) {
// If left run head exists and is <= existing right run head.
if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
B[k] = A[i];
i = i + 1;
} else {
B[k] = A[j];
j = j + 1;
}
}
}
void CopyArray(A[], iBegin, iEnd, B[])
{
for (k = iBegin; k < iEnd; k++)
B[k] = A[k];
}
Sorting the entire array is accomplished by TopDownMergeSort(A, B, length(A)).
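A direct Python transliteration of the index-based pseudocode above (with hypothetical helper names mirroring the originals) behaves identically, including the alternating-buffer trick:

```python
def top_down_merge_sort(A):
    """Sort A in place using the alternating-buffer scheme."""
    B = A[:]                        # one-time copy of A[] to B[]
    _split_merge(B, 0, len(A), A)   # sort data from B[] into A[]

def _split_merge(B, lo, hi, A):
    # Split the run into halves, sort both into B, merge back to A.
    if hi - lo <= 1:
        return                      # run of size 1 is sorted
    mid = (lo + hi) // 2
    _split_merge(A, lo, mid, B)     # note the swapped roles of A and B
    _split_merge(A, mid, hi, B)
    _merge(B, lo, mid, hi, A)       # merge both runs from B into A

def _merge(A, lo, mid, hi, B):
    i, j = lo, mid
    for k in range(lo, hi):
        # Take from the left run while it exists and its head is <= the right head.
        if i < mid and (j >= hi or A[i] <= A[j]):
            B[k] = A[i]; i += 1
        else:
            B[k] = A[j]; j += 1
```
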
Bottom-up implementation
Example C-like code using indices for bottom-up merge sort algorithm which treats the list as an array of n sublists (called runs in this example) of size 1, and iteratively merges sub-lists back and forth between two buffers:
// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
// Each 1-element run in A is already "sorted".
// Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
for (width = 1; width < n; width = 2 * width)
{
// Array A is full of runs of length width.
for (i = 0; i < n; i = i + 2 * width)
{
// Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
// or copy A[i:n-1] to B[] ( if (i+width >= n) )
BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
}
// Now work array B is full of runs of length 2*width.
// Copy array B to array A for the next iteration.
// A more efficient implementation would swap the roles of A and B.
CopyArray(B, A, n);
// Now array A is full of runs of length 2*width.
}
}
// Left run is A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1 ].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
i = iLeft, j = iRight;
// While there are elements in the left or right runs...
for (k = iLeft; k < iEnd; k++) {
// If left run head exists and is <= existing right run head.
if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
B[k] = A[i];
i = i + 1;
} else {
B[k] = A[j];
j = j + 1;
}
}
}
void CopyArray(B[], A[], n)
{
for (i = 0; i < n; i++)
A[i] = B[i];
}
Top-down implementation using lists
Pseudocode for top-down merge sort algorithm which recursively divides the input list into smaller sublists until the sublists are trivially sorted, and then merges the sublists while returning up the call chain.
function merge_sort(list m) is
// Base case. A list of zero or one elements is sorted, by definition.
if length of m ≤ 1 then
return m
// Recursive case. First, divide the list into equal-sized sublists
// consisting of the first half and second half of the list.
// This assumes lists start at index 0.
var left := empty list
var right := empty list
for each x with index i in m do
if i < (length of m)/2 then
add x to left
else
add x to right
// Recursively sort both sublists.
left := merge_sort(left)
right := merge_sort(right)
// Then merge the now-sorted sublists.
return merge(left, right)
In this example, the function merges the left and right sublists.
function merge(left, right) is
var result := empty list
while left is not empty and right is not empty do
if first(left) ≤ first(right) then
append first(left) to result
left := rest(left)
else
append first(right) to result
right := rest(right)
// Either left or right may have elements left; consume them.
// (Only one of the following loops will actually be entered.)
while left is not empty do
append first(left) to result
left := rest(left)
while right is not empty do
append first(right) to result
right := rest(right)
return result
Bottom-up implementation using lists
Pseudocode for bottom-up merge sort algorithm which uses a small fixed size array of references to nodes, where array[i] is either a reference to a list of size 2i or nil. node is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example: it merges two already sorted lists and handles empty lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
// return if empty list
if head = nil then
return nil
var node array[32]; initially all nil
var node result
var node next
var int i
result := head
// merge nodes into array
while result ≠ nil do
next := result.next;
result.next := nil
for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
result := merge(array[i], result)
array[i] := nil
// do not go past end of array
if i = 32 then
i -= 1
array[i] := result
result := next
// merge array into single list
result := nil
for (i = 0; i < 32; i += 1) do
result := merge(array[i], result)
return result
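The slot-array scheme above works like binary addition: inserting an element "carries" through occupied slots, merging runs of equal size as it goes. A sketch using Python lists in place of linked nodes (helper names are illustrative):

```python
def _merge(a, b):
    """Merge two sorted lists (handles empty lists)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def bottom_up_list_merge_sort(items):
    """slots[i] holds a sorted run of size 2**i, or None."""
    slots = [None] * 32
    for x in items:
        run = [x]                    # a one-element "node"
        i = 0
        # Carry through occupied slots, like incrementing a binary counter.
        while i < 32 and slots[i] is not None:
            run = _merge(slots[i], run)
            slots[i] = None
            i += 1
        if i == 32:
            i -= 1                   # do not go past end of array
        slots[i] = run
    # Merge the remaining slots into a single sorted list.
    result = []
    for s in slots:
        if s is not None:
            result = _merge(s, result)
    return result
```
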
Natural merge sort
A natural merge sort is similar to a bottom-up merge sort except that any naturally occurring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (alternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being convenient data structures (used as FIFO queues or LIFO stacks). In the bottom-up merge sort, the starting point assumes each run is one item long. In practice, random input data will have many short runs that just happen to be sorted. In the typical case, the natural merge sort may not need as many passes because there are fewer runs to merge. In the best case, the input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component of Timsort. Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)
Formally, the natural merge sort is said to be Runs-optimal, where Runs(L) is the number of runs in the input L, minus one.
Tournament replacement selection sorts are used to gather the initial runs for external sorting algorithms.
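The run-selection and merge passes traced in the example above can be sketched as follows (monotonic non-decreasing runs only, for brevity):

```python
def natural_merge_sort(a):
    """Exploit naturally occurring ascending runs in the input."""
    if not a:
        return []
    # Select runs: split the input into maximal non-decreasing runs.
    runs, start = [], 0
    for i in range(1, len(a)):
        if a[i] < a[i - 1]:
            runs.append(a[start:i]); start = i
    runs.append(a[start:])
    # Repeatedly merge adjacent runs until only one remains.
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs) - 1, 2):
            merged.append(_merge_runs(runs[i], runs[i + 1]))
        if len(runs) % 2:
            merged.append(runs[-1])  # odd run out waits for the next pass
        runs = merged
    return runs[0]

def _merge_runs(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```

On the article's example input the run-selection step finds exactly the runs (3 4)(2)(1 7)(5 8 9)(0 6), and an already-sorted input is detected as a single run, so only one pass over the data is needed.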
Analysis
In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence relation T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form follows from the master theorem for divide-and-conquer recurrences.
The number of comparisons made by merge sort in the worst case is given by the sorting numbers. These numbers are equal to or slightly smaller than (n ⌈lg n⌉ − 2⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). Merge sort's best case takes about half as many iterations as its worst case.
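As a quick sanity check of the worst-case bound, one can instrument the merge loop to count comparisons and verify the count never exceeds the closed form n⌈lg n⌉ − 2^⌈lg n⌉ + 1 (a sketch; one comparison is counted per merge-loop iteration):

```python
import math

def merge_sort_count(a):
    """Return (sorted list, number of element comparisons used)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    out, i, j, c = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c += 1                               # one comparison per iteration
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:], cl + cr + c

def worst_case_bound(n):
    """n*ceil(lg n) - 2**ceil(lg n) + 1, the sorting numbers."""
    if n <= 1:
        return 0
    k = math.ceil(math.log2(n))
    return n * k - 2 ** k + 1
```
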
For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.
In the worst case, merge sort uses approximately 39% fewer comparisons than quicksort does in its average case, and in terms of moves, merge sort's worst case complexity is O(n log n) - the same complexity as quicksort's best case.
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place; therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for variations that need only n/2 extra spaces).
Variants
Variants of merge sort are primarily concerned with reducing the space complexity and the cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as a combined structure, copy only the left part of m into temporary space, and to direct the merge routine to place the merged output into m. With this version it is better to allocate the temporary space outside the merge routine, so that only one allocation is needed. The excessive copying mentioned previously is also mitigated, since the last pair of lines before the return result statement (function merge in the pseudo code above) become superfluous.
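The half-space variant described above can be sketched as follows; for brevity the scratch slice is allocated per merge, although (as noted above) a practical version would allocate one n/2-element buffer outside the recursion:

```python
def half_space_merge_sort(a, lo=0, hi=None):
    """In-array merge sort that copies only the left run to scratch space,
    so at most n/2 extra elements are ever held."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    half_space_merge_sort(a, lo, mid)
    half_space_merge_sort(a, mid, hi)
    tmp = a[lo:mid]                      # copy only the left run
    i, j, k = 0, mid, lo
    while i < len(tmp) and j < hi:
        # k never catches up with j, so the right run is never overwritten.
        if tmp[i] <= a[j]:
            a[k] = tmp[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    a[k:k + len(tmp) - i] = tmp[i:]      # right-run leftovers are already in place
```

Because any leftover elements of the right run are already in their final positions, the two trailing copy loops of the list-based merge become unnecessary, which is the copy saving mentioned above.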
One drawback of merge sort, when implemented on arrays, is its working memory requirement. Several in-place variants have been suggested:
Katajainen et al. present an algorithm that requires a constant amount of working memory: enough storage space to hold one element of the input array, and additional space to hold pointers into the input array. They achieve an time bound with small constants, but their algorithm is not stable.
Several attempts have been made at producing an in-place merge algorithm that can be combined with a standard (top-down or bottom-up) merge sort to produce an in-place merge sort. In this case, the notion of "in-place" can be relaxed to mean "taking logarithmic stack space", because standard merge sort requires that amount of space for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is possible in time using a constant amount of scratch space, but their algorithm is complicated and has high constant factors: merging arrays of length and can take moves. This complicated algorithm with its high constant factors was later made simpler and easier to understand. Building on the work of Kronrod and others, Bing-Chao Huang and Michael A. Langston presented a straightforward, practical in-place merge algorithm that merges sorted lists in linear time using only a fixed amount of additional space. Their algorithm takes somewhat more time on average than standard merge sort algorithms, which are free to exploit O(n) temporary extra memory cells, but by less than a factor of two. Although much faster in practice, the algorithm is unstable for some lists; using similar concepts, however, they were able to solve this problem. Other in-place algorithms include SymMerge, which takes time in total and is stable. Plugging such an algorithm into merge sort increases its complexity to the non-linearithmic, but still quasilinear, .
A modern stable linear and in-place merging is block merge sort.
An alternative to reduce the copying into multiple lists is to associate a new field of information with each key (the elements in m are called keys). This field will be used to link the keys and any associated information together in a sorted list (a key and its related information is called a record). Then the merging of the sorted lists proceeds by changing the link values; no records need to be moved at all. A field which contains only a link will generally be smaller than an entire record so less space will also be used. This is a standard sorting technique, not restricted to merge sort.
Use with tape drives
An external merge sort is practical to run using disk or tape drives when the data to be sorted is too large to fit into memory. External sorting explains how merge sort is implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). A minimal implementation can get by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only two record buffers, the algorithm is similar to the bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows:
Merge pairs of records from A; writing two-record sublists alternately to C and D.
Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D
Repeat until you have one list containing all the data, sorted—in log2(n) passes.
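The four-tape passes above can be simulated in Python by modeling each tape as a list of runs; each pass merges runs pairwise from one pair of "tapes" and writes the results alternately onto the other pair (an illustrative sketch only, ignoring rewinds and record buffering):

```python
import itertools

def _merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def tape_merge_sort(records):
    """Simulate the four-tape balanced merge; each 'tape' holds a list of runs."""
    if len(records) <= 1:
        return list(records)
    # Initial distribution: single records go alternately to tapes C and D.
    C, D = [], []
    for idx, rec in enumerate(records):
        (C if idx % 2 == 0 else D).append([rec])
    src = (C, D)
    while True:
        a, b = [], []                       # the other pair of tapes
        for n, (r1, r2) in enumerate(itertools.zip_longest(src[0], src[1])):
            merged = _merge(r1 or [], r2 or [])
            (a if n % 2 == 0 else b).append(merged)   # write runs alternately
        if len(a) == 1 and not b:
            return a[0]                     # one run containing all the data
        src = (a, b)
```

Each pass halves the number of runs, so the loop makes the log2(n) passes described above.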
Instead of starting with very short runs, usually a hybrid algorithm is used, where the initial pass will read many records into memory, do an internal sort to create a long run, and then distribute those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. Because of this benefit, the internal sort is usually made as large as memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory. One of them, Knuth's 'snowplow' (based on a binary min-heap), generates runs twice as long, on average, as the size of the memory used.
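The snowplow idea can be sketched with a min-heap of `memory` records (replacement selection): each emitted record is replaced by the next input record if it can still extend the current run, and is otherwise "frozen" for the next run. Function and variable names here are illustrative:

```python
import heapq

def replacement_selection_runs(items, memory=4):
    """Generate sorted initial runs using a min-heap of `memory` records."""
    items = iter(items)
    heap = []
    for x in items:                      # fill working memory
        heap.append(x)
        if len(heap) == memory:
            break
    heapq.heapify(heap)
    runs, run, frozen = [], [], []
    for x in items:
        smallest = heapq.heappop(heap)
        run.append(smallest)
        if x >= smallest:
            heapq.heappush(heap, x)      # still fits in the current run
        else:
            frozen.append(x)             # must wait for the next run
        if not heap:                     # current run exhausted
            runs.append(run); run = []
            heap = frozen
            heapq.heapify(heap)
            frozen = []
    # Drain: finish the current run, then emit frozen records as a final run.
    while heap:
        run.append(heapq.heappop(heap))
    if run:
        runs.append(run)
    if frozen:
        frozen.sort()
        runs.append(frozen)
    return runs
```

On random input the runs come out about twice the memory size on average, matching Knuth's snowplow analysis.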
With some overhead, the above algorithm can be modified to use three tapes. O(n log n) running time can also be achieved using two queues, or a stack and a queue, or three stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge.
A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase merge sort.
Optimizing merge sort
On modern computers, locality of reference can be of paramount importance in software optimization, because multilevel memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such as insertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization.
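The cache-aware scheme above (stop partitioning at size S, finish small subarrays with insertion sort) can be sketched as a hybrid; the threshold value here is an arbitrary stand-in for the cache-derived S:

```python
INSERTION_THRESHOLD = 32          # stand-in for S, the cache-sized cutoff

def hybrid_merge_sort(a):
    """Merge sort that switches to insertion sort below a size threshold."""
    if len(a) <= INSERTION_THRESHOLD:
        return _insertion_sort(a)
    mid = len(a) // 2
    return _merge2(hybrid_merge_sort(a[:mid]), hybrid_merge_sort(a[mid:]))

def _insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):    # in-place sort of the small subarray
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]; j -= 1
        a[j + 1] = x
    return a

def _merge2(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```
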
suggested an alternative version of merge sort that uses constant additional space. This algorithm was later refined.
Also, many applications of external sorting use a form of merge sorting where the input gets split up into a higher number of sublists, ideally to a number for which merging them still makes the currently processed set of pages fit into main memory.
Parallel merge sort
Merge sort parallelizes well due to the use of the divide-and-conquer method. Several different parallel variants of the algorithm have been developed over the years. Some parallel merge sort algorithms are strongly related to the sequential top-down merge algorithm while others have a different general structure and use the K-way merge method.
Merge sort with parallel recursion
The sequential merge sort procedure can be described in two phases, the divide phase and the merge phase. The first consists of many recursive calls that repeatedly perform the same division process until the subsequences are trivially sorted (containing one or no element). An intuitive approach is the parallelization of those recursive calls. Following pseudocode describes the merge sort with parallel recursion using the fork and join keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
if lo+1 < hi then // Two or more elements.
mid := ⌊(lo + hi) / 2⌋
fork mergesort(A, lo, mid)
mergesort(A, mid, hi)
join
merge(A, lo, mid, hi)
This algorithm is the trivial modification of the sequential version and does not parallelize well. Therefore, its speedup is not very impressive. It has a span of Θ(n), which is only an improvement of Θ(log n) compared to the sequential version (see Introduction to Algorithms). This is mainly due to the sequential merge method, as it is the bottleneck of the parallel executions.
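The fork/join structure can be illustrated with Python threads, forking the left recursive call down to a fixed depth (illustrative only: CPython threads do not speed up CPU-bound work, so this shows the structure rather than an actual speedup; helper names are ours):

```python
import threading

def parallel_merge_sort(a, lo=0, hi=None, depth=2):
    """Sort a[lo:hi], forking the left recursive call as a thread up to `depth`."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    if depth > 0:
        t = threading.Thread(target=parallel_merge_sort,
                             args=(a, lo, mid, depth - 1))
        t.start()                                   # fork
        parallel_merge_sort(a, mid, hi, depth - 1)
        t.join()                                    # join
    else:
        parallel_merge_sort(a, lo, mid, 0)
        parallel_merge_sort(a, mid, hi, 0)
    _merge_in_place(a, lo, mid, hi)                 # sequential merge: the bottleneck

def _merge_in_place(a, lo, mid, hi):
    # Simple sequential merge through a temporary buffer.
    tmp, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            tmp.append(a[i]); i += 1
        else:
            tmp.append(a[j]); j += 1
    tmp += a[i:mid] + a[j:hi]
    a[lo:hi] = tmp
```

The two forked calls operate on disjoint halves of the array, so no locking is needed; only the final merge at each level is serial, which is exactly the bottleneck discussed above.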
Merge sort with parallel merging
Better parallelism can be achieved by using a parallel merge algorithm. Cormen et al. present a binary variant that merges two sorted sub-sequences into one sorted output sequence.
In one of the sequences (the longer one if unequal length), the element of the middle index is selected. Its position in the other sequence is determined in such a way that this sequence would remain sorted if this element were inserted at this position. Thus, one knows how many other elements from both sequences are smaller and the position of the selected element in the output sequence can be calculated. For the partial sequences of the smaller and larger elements created in this way, the merge algorithm is again executed in parallel until the base case of the recursion is reached.
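The splitting step described above can be sketched sequentially: pick the middle element of the longer run, binary-search its insertion point in the other run, place it at its final rank, and recurse on the two pairs of partial sequences (the two recursive calls are what a parallel implementation would fork):

```python
import bisect

def parallel_merge(a, b, out, off=0):
    """Divide-and-conquer merge of sorted lists a and b into out[off:]."""
    if len(a) < len(b):
        a, b = b, a                       # make a the longer sequence
    if not a:
        return
    mid = len(a) // 2
    # Position in b at which a[mid] could be inserted, keeping b sorted.
    pos = bisect.bisect_left(b, a[mid])
    out[off + mid + pos] = a[mid]         # mid + pos smaller elements precede it
    # Recurse on the smaller and larger partial sequences (forked in parallel
    # in a real implementation).
    parallel_merge(a[:mid], b[:pos], out, off)
    parallel_merge(a[mid + 1:], b[pos:], out, off + mid + pos + 1)
```
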
The following pseudocode shows the modified parallel merge sort method using the parallel merge algorithm (adopted from Cormen et al.).
/**
* A: Input array
* B: Output array
* lo: lower bound
* hi: upper bound
* off: offset
*/
algorithm parallelMergesort(A, lo, hi, B, off) is
len := hi - lo + 1
if len == 1 then
B[off] := A[lo]
else let T[1..len] be a new array
mid := ⌊(lo + hi) / 2⌋
mid' := mid - lo + 1
fork parallelMergesort(A, lo, mid, T, 1)
parallelMergesort(A, mid + 1, hi, T, mid' + 1)
join
parallelMerge(T, 1, mid', mid' + 1, len, B, off)
In order to analyze a recurrence relation for the worst case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining

 T_inf^sort(n) = T_inf^sort(n/2) + T_inf^merge(n) = T_inf^sort(n/2) + Θ((log n)^2).

For detailed information about the complexity of the parallel merge procedure, see Merge algorithm.
The solution of this recurrence is given by

 T_inf^sort(n) = Θ((log n)^3).
This parallel merge algorithm reaches a parallelism of Θ(n / (log n)^2), which is much higher than the parallelism of the previous algorithm. Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.
Parallel multiway merge sort
It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there are usually p > 2 processors available. A better approach may be to use a K-way merge method, a generalization of binary merge, in which sorted sequences are merged. This merge variant is well suited to describe a sorting algorithm on a PRAM.
Basic Idea
Given an unsorted sequence of elements, the goal is to sort the sequence with available processors. These elements are distributed equally among all processors and sorted locally using a sequential Sorting algorithm. Hence, the sequence consists of sorted sequences of length . For simplification let be a multiple of , so that for .
These sequences will be used to perform a multisequence selection/splitter selection. For , the algorithm determines splitter elements with global rank . Then the corresponding positions of in each sequence are determined with binary search and thus the are further partitioned into subsequences with .
Furthermore, the elements of are assigned to processor , meaning all elements between rank and rank , which are distributed over all . Thus, each processor receives a sequence of sorted sequences. The fact that the rank of the splitter elements was chosen globally provides two important properties: On the one hand, was chosen so that each processor can still operate on elements after assignment. The algorithm is perfectly load-balanced. On the other hand, all elements on processor are less than or equal to all elements on processor . Hence, each processor performs the p-way merge locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way-merge has to be performed, the results only have to be put together in the order of the processor number.
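A sequential Python sketch of the scheme: local sorts, splitter-based partitioning, then one p-way merge per "processor". For clarity the rank-(i·n/p) splitters are taken from a full sort; the parallel algorithm finds them with multisequence selection instead (see below):

```python
import heapq
from bisect import bisect_left

def multiway_merge_sort(d, p=4):
    """Sequential simulation of parallel multiway merge sort with p 'processors'."""
    n = len(d)
    if n == 0:
        return []
    chunk = -(-n // p)                                        # ceil(n/p)
    S = [sorted(d[i*chunk:(i+1)*chunk]) for i in range(p)]    # local sorts
    full = sorted(d)                                          # stand-in for msSelect
    v = [full[min(i*chunk, n) - 1] for i in range(1, p)]      # rank-i*n/p splitters
    result, lo = [], [0] * p
    for j in range(p):                            # work of "processor" j
        # Upper partition boundary of each local sequence for this bucket.
        hi = [bisect_left(S[i], v[j]) if j < p - 1 else len(S[i])
              for i in range(p)]
        pieces = [S[i][lo[i]:hi[i]] for i in range(p)]
        result.extend(heapq.merge(*pieces))       # local p-way merge
        lo = hi
    return result
```

Because every element in bucket j is less than every splitter at or above v[j], the per-processor results only need to be concatenated in processor order, as described above.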
Multi-sequence selection
In its simplest form, given sorted sequences distributed evenly on processors and a rank , the task is to find an element with a global rank in the union of the sequences. Hence, this can be used to divide each in two parts at a splitter index , where the lower part contains only elements which are smaller than , while the elements bigger than are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence, e.g. the indices in sequences such that has a global rank less than and .
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
for i = 1 to p do
(l_i, r_i) = (0, |S_i|-1)
while there exists i: l_i < r_i do
// pick pivot element in S_j[l_j], .., S_j[r_j], choosing j uniformly at random
v := pickPivot(S, l, r)
for i = 1 to p do
m_i = binarySearch(v, S_i[l_i, r_i]) // sequentially
if m_1 + ... + m_p >= k then // m_1+ ... + m_p is the global rank of v
r := m // vector assignment
else
l := m
return l
For the complexity analysis the PRAM model is chosen. If the data is evenly distributed over all sequences, the p-fold execution of the binarySearch method has a running time of O(p log(n/p)). The expected recursion depth is O(log n), as in the ordinary Quickselect. Thus the overall expected running time is O(p log(n/p) log n).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rank for are found simultaneously. These splitter elements can then be used to partition each sequence in parts, with the same total running time of .
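A Python version of the msSelect pseudocode above, extended to handle duplicate keys by splitting ties on the pivot value (binary searches play the role of the sequential binarySearch calls):

```python
import random
from bisect import bisect_left, bisect_right

def ms_select(S, k):
    """Return split indices l for the sorted sequences S such that sum(l) == k
    and every excluded element is >= every included one."""
    p = len(S)
    l = [0] * p
    r = [len(s) for s in S]
    while True:
        live = [i for i in range(p) if l[i] < r[i]]
        if not live:
            return l                         # l == r everywhere: sum(l) == k
        j = random.choice(live)              # pick pivot from a live sequence
        v = S[j][random.randrange(l[j], r[j])]
        a = [bisect_left(S[i], v, l[i], r[i]) for i in range(p)]
        b = [bisect_right(S[i], v, l[i], r[i]) for i in range(p)]
        if sum(a) >= k:
            r = a                            # global rank of v is >= k
        elif sum(b) <= k:
            l = b                            # even all copies of v are too few
        else:
            need = k - sum(a)                # take exactly `need` copies of v
            for i in range(p):
                take = min(b[i] - a[i], need)
                a[i] += take
                need -= take
            return a
```

Each iteration strictly shrinks the live search ranges, giving the expected Quickselect-like recursion depth discussed above.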
Pseudocode
Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We assume that there is a barrier synchronization before and after the multisequence selection such that every processor can determine the splitting elements and the sequence partition properly.
/**
* d: Unsorted Array of Elements
* n: Number of Elements
* p: Number of Processors
* return Sorted Array
*/
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n] // the output array
    for i = 1 to p do in parallel // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p] // sequence of length n/p
        sort(S_i) // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p) // element with global rank i * n/p
        synch
        (S_i,1, ..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p) // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i) // merge and assign to output array
    return o
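The pseudocode can be simulated sequentially in Python. In this sketch (naming is my own), the msSelect step is stood in by reading the splitters off a sorted copy, which keeps the example short but of course forgoes the parallel selection, and heapq.merge plays the role of the sequential p-way merge:

```python
import bisect
import heapq

def parallel_multiway_mergesort(d, p):
    """Sequential simulation of the parallel multiway merge sort sketch:
    each 'processor' i sorts one slice, the runs are cut at the global
    ranks i*n/p, and bucket i is p-way merged into the output."""
    n = len(d)
    bounds = [i * n // p for i in range(p + 1)]
    # Phase 1: each processor sorts its slice of about n/p elements locally.
    runs = [sorted(d[bounds[i]:bounds[i + 1]]) for i in range(p)]
    # Phase 2: splitter values of global rank i*n/p (stand-in for msSelect,
    # here taken from a sorted copy instead of multisequence selection).
    ranked = sorted(d)
    splitters = [ranked[bounds[i]] for i in range(1, p)]
    # Phase 3: cut every run at the splitters; bucket j collects the pieces
    # S_1,j .. S_p,j, all of whose elements precede those of bucket j+1.
    cuts = [[0] + [bisect.bisect_left(run, s) for s in splitters] + [len(run)]
            for run in runs]
    out = []
    for j in range(p):
        pieces = [run[cuts[i][j]:cuts[i][j + 1]] for i, run in enumerate(runs)]
        out.extend(heapq.merge(*pieces))  # sequential p-way merge
    return out
```

Because every element of bucket j is at most every element of bucket j+1, concatenating the merged buckets in processor order yields the sorted result, exactly as the analysis above describes.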
Analysis
Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time O(p log(n/p) log n). Finally, each group of p splits has to be merged in parallel by each processor with a running time of O((n/p) log p) using a sequential p-way merge algorithm. Thus, the overall running time is given by

O((n/p) log(n/p) + p log(n/p) log n + (n/p) log p).
Practical adaption and application
The multiway merge sort algorithm is very scalable through its high parallelization capability, which allows the use of many processors. This makes the algorithm a viable candidate for sorting large amounts of data, such as those processed in computer clusters. Also, since in such systems memory is usually not a limiting resource, the disadvantage of space complexity of merge sort is negligible. However, other factors become important in such systems, which are not taken into account when modelling on a PRAM. Here, the following aspects need to be considered: the memory hierarchy, when the data does not fit into the processors' caches, and the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory.
Sanders et al. have presented in their paper a bulk synchronous parallel algorithm for multilevel multiway mergesort, which divides the p processors into r groups of equal size. All processors sort locally first. Unlike single-level multiway mergesort, these sequences are then partitioned into r parts and assigned to the appropriate processor groups. These steps are repeated recursively in those groups. This reduces communication and especially avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g. racks, clusters, ...).
Further variants
Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure O(1) merge. Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW parallel random-access machine (PRAM) with n processors by performing partitioning implicitly. Powers further shows that a pipelined version of Batcher's Bitonic Mergesort at O((log n)^2) time on a butterfly sorting network is in practice actually faster than his O(log n) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.
Comparison with other sort algorithms
Although heapsort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort implementations generally outperform merge sort for sorting RAM-based arrays. On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible.
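As an illustration of the linked-list case described above, here is a Python sketch of a top-down merge sort that sorts a singly linked list by relinking nodes rather than copying them (the Node class and names are my own). Beyond the O(log n) recursion stack it needs only constant extra space, and using <= in the merge keeps the sort stable:

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def merge_sort_list(head):
    """Sort a singly linked list by splitting with the slow/fast-pointer
    trick and merging by relinking nodes instead of copying them."""
    if head is None or head.next is None:
        return head
    # Find the midpoint: 'fast' advances two nodes per step of 'slow'.
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None  # cut the list into two halves
    left = merge_sort_list(head)
    right = merge_sort_list(mid)
    # Merge the two sorted halves; '<=' preserves stability.
    dummy = tail = Node(None)
    while left is not None and right is not None:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left if left is not None else right
    return dummy.next
```

A bottom-up variant over run lengths 1, 2, 4, ... avoids even the recursion stack, which is what the Θ(1)-extra-space claim for linked lists refers to.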
As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl). In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes and for implementation efficiency switch to insertion sort when fewer than seven array elements are being sorted. The Linux kernel uses merge sort for its linked lists. Python uses Timsort, another tuned hybrid of merge sort and insertion sort, that has become the standard sort algorithm in Java SE 7 (for arrays of non-primitive types), on the Android platform, and in GNU Octave.
Notes
References
External links
Open Data Structures - Section 11.1.1 - Merge Sort, Pat Morin
Sorting algorithms
Comparison sorts
Stable sorts
Articles with example pseudocode
Divide-and-conquer algorithms |
20040 | https://en.wikipedia.org/wiki/Maule%20Air | Maule Air | Maule Air, Inc. is a manufacturer of light, single-engined, short take-off and landing (STOL) aircraft, based in Moultrie, Georgia, USA. The company delivered 2,500 aircraft in its first 50 years of business.
History
Belford D. Maule (1911–1995) designed his first aircraft, the M-1, starting at age 19. He founded the company Mechanical Products Co. in Napoleon, Michigan to market his own starter design. In 1941 the B.D. Maule Co. was founded, and Maule produced tailwheels and fabric testers. In 1953 he began design work, and started aircraft production with the "Bee-Dee" M-4 in 1957.
The company is a family-owned enterprise. Its owner, June Maule, widow of B. D. Maule, remained directly involved with factory production until her death in 2009 at the age of 92.
Products
The aircraft produced by Maule Air are tube-and-fabric designs and are popular with bush pilots, thanks to their very low stall speed, their tundra tires and oleo strut landing gear. Most Maules are built with tailwheel or amphibious configurations, although the newer MXT models have tricycle gear.
Aircraft models
Gallery
References
External links
Maule Air, Inc. website
Maule aircraft models
Aircraft manufacturers of the United States
Companies based in Colquitt County, Georgia
Vehicle manufacturing companies established in 1941
1941 establishments in Georgia (U.S. state) |
20041 | https://en.wikipedia.org/wiki/Shoma%20Morita | Shoma Morita | Morita Masatake, also read as Shōma Morita, was a contemporary of Sigmund Freud and the founder of Morita therapy, a branch of clinical psychology strongly influenced by Zen Buddhism. In his capacity as the head of psychiatry for a large Tokyo hospital, Morita began developing his methods while working with sufferers of shinkeishitsu, or anxiety disorders with a hypochondriac base.
Theory and methods
According to Morita, how a person feels is important as a sensation and as an indicator for the present moment, but is uncontrollable: we don't create feelings, feelings happen to us. Since feelings do not cause our behavior, we can coexist with unpleasant feelings while still taking constructive action.
The essence of Morita's method may be summarized in three rules: Accept all your feelings, know your purpose(s), and do what needs to be done. When once asked what shy people should do, Morita replied, "Sweat."
Accept your feelings - Accepting feelings is not ignoring them or avoiding them, but welcoming them; Vietnamese poet and writer Thich Nhat Hanh recommends we say, "Hello Loneliness, how are you today? Come, sit by me and I will take care of you." Morita's advice: "In feelings, it is best to be wealthy and generous" - that is, have many and let them fly as they wish.
Know your purpose - Implicit in Morita's method, and the traditional Buddhist psychological principles which he adapted, is an independence of thought and action, something a little alien to the Western ideal to "follow our whims and moods". Morita held that we can no more control our thoughts than we can control the weather, as both are phenomena of most amazingly complex natural systems. And if we have no hope of controlling our emotions, we can hardly be held responsible any more than we can be held responsible for feeling hot or cold. We do, however, have complete dominion over our behavior, and for Morita, that is a sacred responsibility. "What needs doing now?" is like a mantra in his methods.
Do what needs doing - One can feel crushed and alone or hurt and homicidal while pulling up the weeds in your garden, but one would not be doing it at all if one had not intended to raise flowers. Morita's way of treatment is very different from the Western diagnosis/disease model. Morita's methods lead his "students" through experiments, and in each assignment, the lesson is not explained by a master, but learned first hand, through the "doing" or taiken, that knowledge gained by direct experience.
Influence
David K. Reynolds, an American author, synthesized parts of Morita therapy along with the practice of Naikan into Constructive Living, an educational method intended for English-speaking Westerners. Constructive Living has since become extremely popular in Japan.
Fritz Perls spent a week in a Morita Hospital in Japan.
References
1874 births
1938 deaths
Japanese psychologists
Zen |
20042 | https://en.wikipedia.org/wiki/Montezuma | Montezuma | Montezuma or Moctezuma may refer to:
People
Moctezuma I (1398–1469), the second Aztec emperor and fifth king of Tenochtitlan
Moctezuma II (c. 1460–1520), ninth Aztec emperor
Pedro Moctezuma, a son of Montezuma II
Isabel Moctezuma (1509/1510–1550/1551), a daughter of Montezuma II
Leonor Cortés Moctezuma (c. 1528–?), daughter of Hernán Cortés and Isabel Montezuma
Isabel de Tolosa Cortés de Moctezuma (1568–1619/1620), Mexican heiress, great-granddaughter of Montezuma II
Duke of Moctezuma de Tultengo, a Spanish hereditary title held by descendants of Moctezuma II
Carlos Montezuma (c. 1860–1923), Yavapai/Apache Native American activist
Carlos López Moctezuma (1909–1980), Mexican film actor
Eduardo Matos Moctezuma (born 1940), Mexican archaeologist
Esteban Moctezuma (born 1954), Mexican politician
Julio Rodolfo Moctezuma (1927–2000), Mexican lawyer, politician and banker
Leonidas de Montezuma (1869–1937), English cricketer
Moctesuma Esparza (born 1949), American film director
Moctezuma Serrato (born 1976), Mexican football player
Montezuma Fuller (1858–1926), American architect
Places
Mexico
Moctezuma, Sonora, a municipality
Moctezuma, San Luis Potosí, a municipality
Moctezuma River
Moctezuma River (Sonora)
Moctezuma metro station, a station on the Mexico City Metro
Moctezuma (Mexico City Metrobús, Line 4), a BRT station in Mexico City
Moctezuma (Mexico City Metrobús, Line 5), a BRT station in Mexico City
United States
Inhabited places
Montezuma, Arizona, an unincorporated community
Montezuma, California, a ghost town
Montezuma Hills, California
Montezuma, Colorado, a Statutory Town
Montezuma County, Colorado
Montezuma, Georgia, a city
Montezuma Township, Pike County, Illinois
Montezuma, Indiana, a town
Montezuma, Iowa, a city
Montezuma Township, Gray County, Kansas
Montezuma, Kansas, a city
Montezuma, New Mexico, an unincorporated community
Montezuma, New York, a town
Montezuma, North Carolina, an unincorporated community
Montezuma, Ohio, a village
Montezuma, Virginia, an unincorporated community
Buildings
Montezuma (Norwood, Virginia), a home on the National Register of Historic Places
Montezuma Castle (hotel), Las Vegas, New Mexico
Natural formations
Montezuma Creek (Utah), a creek in Utah
Montezuma Marsh, Cayuga Lake, New York
Montezuma National Forest, Colorado
Montezuma National Wildlife Refuge, New York
Montezuma Range, Nevada, a mountain range
Montezuma Well, a natural limestone sinkhole near Rimrock, Arizona
Other countries
Montezuma, Minas Gerais, Brazil
Montezuma, Costa Rica
Montezuma Falls, Tasmania, Australia
Music
Montezuma, hero of a 1695 semi-opera The Indian Queen by Henry Purcell
Motezuma, a 1733 opera by Antonio Vivaldi (until recently known under the title Montezuma)
Montezuma (Graun), a 1755 opera by Carl Heinrich Graun
Motezuma, a 1765 opera by Gian Francesco de Majo
Motezuma (Mysliveček), a 1771 opera by Josef Mysliveček
Montezuma, a 1775 opera by Antonio Sacchini
Montezuma, a 1780 opera by Giacomo Insanguine
Montesuma, a 1781 opera by Niccolò Antonio Zingarelli
Montezuma, by Ignaz von Seyfried (1804)
Montezuma, an 1884 opera by Frederick Grant Gleason
Montezuma (Sessions opera), a 1963 opera by Roger Sessions
Montezuma, or La Conquista, a 2005 opera by Lorenzo Ferrero
Montezuma, a 1980 film score by Hans Werner Henze
"Montezuma", a song from the 1994 album Apurimac II by Cusco
"Montezuma", a song from the 2011 album Helplessness Blues by Fleet Foxes
Ships
USS Montezuma, three ships of the United States Navy
Montezuma (1804 ship), later Moctezuma of the Chilean navy
Montezuma, launched 1899, later RFA Abadol and RFA Oakleaf
Other uses
Montezuma (TV programme), a 2009 British documentary
Montezuma (mythology), in the mythology of certain Amerindian tribes of the Southwest United States
U.D. Moctezuma de Orizaba, a defunct Mexican football team
Montezuma, a brand of tequila by Barton Brands
See also
Montezuma Affair, an 1835 naval battle between Mexico and the US
Montezuma's revenge (disambiguation)
Halls of Montezuma (disambiguation)
Montezuma leopard frog, a species of frog
Montezuma oropendola, a species of bird
Montezuma, a synonym of the plant genus Thespesia
Montezuma pine, a species of conifer |
20048 | https://en.wikipedia.org/wiki/Mooney | Mooney | Mooney is a family name, which is probably predominantly derived from the Irish Ó Maonaigh. It can also be spelled Moony, Moonie, Mainey, Mauney, Meaney and Meeney depending on the dialectal pronunciation that was Anglicised.
Origins
The origin of the Moony or Mooney families is lost in antiquity. The name is derived from maoin, a Gaelic word meaning wealth or treasure; hence when O'Maonaigh was anglicised to Mooney it meant the descendant of the wealthy one.
According to Irish lore, the Mooney family comes from one of the largest and most noble Irish lines. They are said to be descendants of the ancient Irish King Heremon, who, along with his brother Heber, conquered Ireland. Heremon slew his brother shortly after their invasion, took the throne for himself, and fathered a line of kings of Ireland that include Malachi II, and King Niall of the Nine Hostages.
Baptismal records, parish records, ancient land grants, the Annals of the Four Masters, and books by O'Hart, McLysaght, and O'Brien were all used in researching the history of the Mooney family name. These varied and often ancient records indicate that distant septs of the name arose in several places throughout Ireland. The most known and most numerous sept came from the county of Offaly. The members of this sept were from Chieftain Monach, son of Ailill Mor, Lord of Ulster, who was descended from the Kings of Connacht. These family members gave their name to town lands called Ballymooney both in that county and in the neighbouring county of Leix.
People with the surname
Albert Mooney, aircraft designer and founder of Mooney Airplane Company
Alex X. Mooney, Member of Congress from West Virginia
Bel Mooney, English journalist and broadcaster
Brian Mooney, professional football player
Cameron Mooney, Australian rules footballer
Carol Ann Mooney, President of Saint Mary's College in Notre Dame, Indiana
Charles ("Chuck") W. Mooney Jr., American, the Charles A. Heimbold, Jr. Professor of Law, and former interim Dean, at the University of Pennsylvania Law School
Chris Mooney (basketball) (born 1972), American basketball coach
Darnell Mooney (born 1997), American football player
Dave Mooney, professional football player
Debra Mooney, American actress
Edward Aloysius Mooney, Roman Catholic Cardinal Archbishop of Detroit, former Bishop of Rochester
Edward F. Mooney, noted Kierkegaard scholar and Professor of Religion at Syracuse University
Edward Mooney (footballer)
Francie Mooney, musician; fiddler
Grahame Mooney, RAF Chinook Pilot and Squadron Leader
Hercules Mooney, American Revolutionary War Colonel
James Mooney, anthropologist whose major works were about Native American Indians
Jason Mooney (disambiguation), multiple people
John Mooney (disambiguation), multiple people
Kathi Mooney, American scientist
Kevin Mooney, Irish musician
Kyle Mooney, American comic actor, Saturday Night Live
Malcolm Mooney, original lead singer of rock group Can
Matt Mooney (born 1995), American basketball player
Melvin Mooney, American physicist, developed the Mooney Viscometer and other testing equipment used in the rubber industry
Paschal Mooney, Irish politician
Peter Mooney (conductor), Scottish educationalist and conductor
Paul Mooney (writer), son of James Mooney
Paul Mooney (comedian) (1941–2021), American comedian, writer, and actor
Ralph Mooney, well known Bakersfield sound steel-guitar player who backed Buck Owens, Merle Haggard and others
Robert Mooney, Canadian politician
Sean Mooney, sports reporter and former WWF announcer
Shay Mooney, singer of country duo Dan + Shay
Shona Mooney, Scottish fiddler
Ted Mooney, American author
Thomas Mooney, American labour leader from San Francisco, California
Tim Mooney (1958–2012), American musician
Tom Mooney, Australian rugby league footballer
Tommy Mooney, professional football player
Tony Mooney, Australian politician
Walter E. Mooney, pilot and model aircraft designer
See also
Mooney (disambiguation)
References
Surnames of Irish origin |
20050 | https://en.wikipedia.org/wiki/Minnesota%20Twins | Minnesota Twins | The Minnesota Twins are an American professional baseball team based in Minneapolis. The Twins compete in Major League Baseball (MLB) as a member club of the American League (AL) Central Division. The team is named after the Twin Cities area which includes the two adjoining cities of Minneapolis and St. Paul.
The franchise was founded in Washington, D.C., in 1901 as the Washington Senators. The team moved to Minnesota and was renamed the Minnesota Twins for the start of the 1961 season. The Twins played in Metropolitan Stadium from 1961 to 1981 and in the Hubert H. Humphrey Metrodome from 1982 to 2009. The team played its inaugural game at Target Field on April 12, 2010. The franchise won the World Series in 1924 as the Senators, and in 1987 and 1991 as the Twins.
From 1901 to 2021, the Senators/Twins franchise's overall regular-season win–loss–tie record is 9,012–9,716–109; as the Twins (through 2021), it is 4,789–4,852–8.
Team history
Washington Nationals/Senators: 1901–1960
The team was founded in Washington, D.C., in 1901 as one of the eight original teams of the American League. It was named the Washington Senators from 1901 to 1904, the Washington Nationals from 1905 to 1955, and the Senators again from 1956 to 1960. But the team was commonly referred to as the Senators throughout its history (and unofficially as the "Grifs" during Clark Griffith's tenure as manager from 1912 to 1920). The name "Nationals" appeared on uniforms for only two seasons, and then was replaced with the "W" logo. The media often shortened the nickname to "Nats" — even for the 1961 expansion team. The names "Nationals" and "Nats" were revived in 2005, when the Montreal Expos moved to Washington to become the Nationals.
The Washington Senators spent the first decade of their existence finishing near the bottom of the American League standings. The team's long bouts of mediocrity were immortalized in the 1955 Broadway musical Damn Yankees. Their fortunes began to improve with the arrival of 19-year-old pitcher, Walter Johnson, in 1907. Johnson blossomed in 1911 with 25 victories, although the team still finished the season in seventh place. In 1912, the Senators improved dramatically, as their pitching staff led the league in team earned run average and in strikeouts. Johnson won 33 games while teammate Bob Groom added another 24 wins to help the Senators finish the season in second place. Griffith joined the team in 1912 and became the team's owner in 1920. (The franchise remained under Griffith family ownership until 1984.) The Senators continued to perform respectably in 1913 with Johnson posting a career-high 35 victories, as the team once again finished in second place. The Senators then fell into another decline for the next decade.
The team had a period of prolonged success in the 1920s and 1930s, led by Walter Johnson, as well as fellow Hall-of-Famers Bucky Harris, Goose Goslin, Sam Rice, Heinie Manush, and Joe Cronin. In particular, a rejuvenated Johnson rebounded in 1924 to win 23 games with the help of his catcher, Muddy Ruel, as the Senators won the American League pennant for the first time in its history. The Senators then faced John McGraw's heavily favored New York Giants in the 1924 World Series. The two teams traded wins back and forth with three games of the first six being decided by one run. In the deciding 7th game, the Senators were trailing the Giants 3–1 in the 8th inning when Bucky Harris hit a routine ground ball to third that hit a pebble and took a bad hop over Giants third baseman Freddie Lindstrom. Two runners scored on the play, tying the score at three. An aging Walter Johnson came in to pitch the ninth inning and held the Giants scoreless into extra innings. In the bottom of the twelfth inning, Ruel hit a high, foul ball directly over home plate. The Giants' catcher, Hank Gowdy, dropped his protective mask to field the ball but, failing to toss the mask aside, stumbled over it and dropped the ball, thus giving Ruel another chance to bat. On the next pitch, Ruel hit a double; he proceeded to score the winning run when Earl McNeely hit a ground ball that took another bad hop over Lindstrom's head. This would mark the only World Series triumph for the franchise during their 60-year tenure in Washington.
The following season they repeated as American League champions but ultimately lost the 1925 World Series to the Pittsburgh Pirates. After Walter Johnson retired in 1927, he was hired as manager of the Senators. After enduring a few losing seasons, the team returned to contention in 1930. In 1933, Senators owner Griffith returned to the formula that worked for him nine years earlier: 26-year-old shortstop Joe Cronin became player-manager. The Senators posted a 99–53 record and cruised to the pennant seven games ahead of the New York Yankees, but in the 1933 World Series the Giants exacted their revenge, winning in five games. Following the loss, the Senators sank all the way to seventh place in 1934 and attendance began to fall. Despite the return of Harris as manager from 1935 to 1942 and again from 1950 to 1954, Washington was mostly a losing ball club for the next 25 years contending for the pennant only during World War II. Washington came to be known as "first in war, first in peace, and last in the American League"; their hard luck drove the plot of the musical and film Damn Yankees. Cecil Travis, Buddy Myer (1935 A.L. batting champion), Roy Sievers, Mickey Vernon (batting champion in 1946 and 1953), and Eddie Yost were notable Senators players whose careers were spent in obscurity on losing teams. In 1954, the Senators signed future Hall of Fame member Harmon Killebrew. By 1959, he was the Senators’ regular third baseman and led the league with 42 home runs, earning him a starting spot on the American League All-Star team.
After Griffith's death in 1955, his nephew and adopted son Calvin took over the team presidency. Calvin sold Griffith Stadium to the city of Washington and leased it back. This led to speculation that the team was planning to move, as the Boston Braves, St. Louis Browns, and Philadelphia Athletics had done in recent years. By 1957, after an early flirtation with San Francisco (where the New York Giants would move after the season), Griffith began courting Minneapolis–St. Paul, a prolonged process that resulted in his rejecting the Twin Cities' first offer before agreeing to move. Home attendance in Washington, D.C., steadily increased from 425,238 in 1955 to 475,288 in 1958, and then jumped to 615,372 in 1959. However, part of the Minnesota deal guaranteed a million fans a year for three years, plus the potential to double TV and radio money.
The American League opposed the move at first, but in 1960 a deal was reached. Major League Baseball agreed to let Griffith move his team to the Minneapolis-St. Paul region and allowed a new Senators team to be formed in Washington for the 1961 season.
Asked nearly two decades later why he moved the team, Griffith replied, "I’ll tell you why we came to Minnesota, it was when I found out you only had 15,000 blacks here. Black people don’t go to ball games, but they’ll fill up a rassling ring and put up such a chant it’ll scare you to death. It’s unbelievable. We came here because you’ve got good, hard-working, white people here."
Minnesota Twins: 1961–present
Renamed the Minnesota Twins, the team set up shop in Metropolitan Stadium. Success came quickly to the team in Minnesota. Sluggers Harmon Killebrew and Bob Allison, who had been stars in Washington, were joined by Tony Oliva and Zoilo Versalles, and later second baseman Rod Carew and pitchers Jim Kaat and Jim Perry, winning the American League pennant in 1965. A second wave of success came in the late 1980s and early 1990s under manager Tom Kelly, led by Kent Hrbek, Bert Blyleven, Frank Viola, and Kirby Puckett, winning the franchise's second and third World Series (and first and second in Minnesota).
The name "Twins" was derived from "Twin Cities", a popular nickname for the Minneapolis-St. Paul region. The NBA's Minneapolis Lakers had moved to Los Angeles in 1960 due to poor attendance, blamed in part on a perceived reluctance of fans in St. Paul to support the team. Griffith was determined not to alienate fans in either city by naming the team after one city or the other. He proposed to name the team the "Twin Cities Twins", but MLB objected and Griffith therefore named the team the Minnesota Twins. The team was allowed to keep its original "TC" (for Twin Cities) insignia for its caps. The team's logo shows two men, one in a Minneapolis Millers uniform and one in a St. Paul Saints uniform, shaking hands across the Mississippi River within an outline of the state of Minnesota. The "TC" remained on the Twins' caps until 1987, when they adopted new uniforms. By this time, the team felt it was established enough to put an "M" on its cap without having St. Paul fans think it stood for Minneapolis. The "TC" logo was moved to a sleeve on the jerseys, occasionally appeared as an alternate cap design, and then was reinstated as the main cap logo in 2010. Both the "TC" and "Minnie & Paul" logos remain the team's primary insignia.
1960s
The Twins were eagerly greeted in Minnesota when they arrived in 1961. They brought a nucleus of talented players: Harmon Killebrew, Bob Allison, Camilo Pascual, Zoilo Versalles, Jim Kaat, Earl Battey, and Lenny Green. Tony Oliva, who would go on to win American League batting championships in 1964, 1965 and 1971, made his major league debut in 1962. That year, the Twins won 91 games, the most by the franchise since 1933. Behind Mudcat Grant's 21 victories, Versalles' A.L. MVP season and Oliva's batting title, the Twins won 102 games and the American League Pennant in 1965, but they were defeated in the World Series by the Los Angeles Dodgers in seven games (behind the Series MVP, Sandy Koufax, who compiled a 2–1 record, including winning the seventh game).
In 1962, the Minnesota State Commission on Discrimination filed a complaint against the Twins, which was the only MLB team still segregating players during spring training and when traveling in the southern United States.
Heading into the final weekend of the 1967 season, when Rod Carew was named the A.L. Rookie of the Year, the Twins, Boston Red Sox, Chicago White Sox, and Detroit Tigers all had a shot at clinching the American League championship. The Twins and the Red Sox started the weekend tied for 1st place and played against each other in Boston for the final three games of the season. The Red Sox won two out of the three games, seizing their first pennant since 1946 with a 92–70 record. The Twins and Tigers both finished one game back, with 91–71 records, while the White Sox finished three games back, at 89–73. In 1969, the new manager of the Twins, Billy Martin, pushed aggressive base running all-around, with Carew stealing home seven times in the season (1 short of Ty Cobb's Major League Record) in addition to winning the first of seven A.L. batting championships. With Killebrew slugging 49 homers and winning the AL MVP Award, these 1969 Twins won the very first American League Western Division Championship, but they lost three straight games to the Baltimore Orioles, winners of 109 games, in the first American League Championship Series. The Orioles would go on to be upset by the New York Mets in the World Series. Martin was fired after the season, in part due to an August fight in Detroit with 20-game winner Dave Boswell and outfielder Bob Allison, in an alley outside the Lindell A.C. bar. Bill Rigney led the Twins to a repeat division title in 1970, behind the star pitching of Jim Perry (24–12), the A.L. Cy Young Award winner, while the Orioles again won the Eastern Division Championship behind the star pitching of Jim Palmer. Once again, the Orioles won the A.L. Championship Series in a three-game sweep, and this time they would win the World Series.
1970s
After winning the division again in 1970, the team entered an eight-year dry spell, finishing around the .500 mark. Killebrew departed after 1974. Owner Calvin Griffith faced financial difficulty with the start of free agency, costing the Twins the services of Lyman Bostock and Larry Hisle, who left as free agents after the 1977 season, and Carew, who was traded after the 1978 season. In 1975, Carew won his fourth consecutive AL batting title, having already joined Ty Cobb as the only players to lead the major leagues in batting average for three consecutive seasons. In 1977, Carew batted .388, which was the highest in baseball since Boston's Ted Williams hit .406 in 1941; he won the 1977 AL MVP Award. He won another batting title in 1978, hitting .333.
1980s–90s
In 1982, the Twins moved into the Hubert H. Humphrey Metrodome, which they shared with the Minnesota Vikings. After a 16–54 start, the Twins were on the verge of becoming the worst team in MLB history. They turned the season around somewhat, but still lost 102 games, finishing with what is currently the second-worst record in Twins history (beaten only by the 2016 team, which lost 103 games), despite the .301 average, 23 homers and 92 RBI from rookie Kent Hrbek. In 1984, Griffith sold the Twins to multi-billionaire banker/financier Carl Pohlad. Pohlad beat a larger offer by New York businessman Donald Trump by promising to keep the club in Minnesota. The Metrodome hosted the 1985 Major League Baseball All-Star Game. After several losing seasons, the 1987 team, led by Hrbek, Gary Gaetti, Frank Viola (A.L. Cy Young winner in 1988), Bert Blyleven, Jeff Reardon, Tom Brunansky, Dan Gladden, and rising star Kirby Puckett, returned to the World Series after defeating the favored Detroit Tigers in the ALCS, 4 games to 1. Tom Kelly managed the Twins to World Series victories over the St. Louis Cardinals in 1987 and the Atlanta Braves in 1991. The 1988 Twins were the first team in American League history to draw more than 3 million fans. On July 17, 1990, the Twins became the only team in major league history to pull off two triple plays in the same game. Twins' pitcher and Minnesota native Jack Morris was the star of the series in 1991, going 2–0 in his three starts with a 1.17 ERA. 1991 also marked the first time that any team that finished in last place in their division would advance to the World Series the following season; both the Twins and the Braves did this in 1991.
The World Series in 1991 is regarded by many as one of the classics of all time. In this Series, four games were won during the teams' final at-bat, and three of these were in extra innings. The Atlanta Braves won all three of their games in Atlanta, and the Twins won all four of their games in Minnesota. The sixth game was a legendary one for Puckett, who tripled in a run, made a sensational leaping catch against the wall, and finally in the 11th inning hit the game-winning home run. The seventh game was tied 0–0 after the regulation nine innings, and marked only the second time that the seventh game of the World Series had ever gone into extra innings. The Twins won on a walk-off RBI single by Gene Larkin in the bottom of the 10th inning, after Morris had pitched ten shutout innings against the Braves. The seventh game of the 1991 World Series is widely regarded as one of the greatest games in the history of professional baseball.
After a winning season in 1992, in which they fell short of Oakland in the division, the Twins fell into a years-long stretch of mediocrity, posting a losing record in each of the next eight seasons: 71–91 in 1993, 50–63 in 1994, 56–88 in 1995, 78–84 in 1996, 68–94 in 1997, 70–92 in 1998, 63–97 in 1999 and 69–93 in 2000. From 1994 to 1997, a long sequence of retirements and injuries hurt the team badly, and Tom Kelly spent the remainder of his managerial career attempting to rebuild the Twins. In 1997, owner Carl Pohlad almost sold the Twins to North Carolina businessman Don Beaver, who would have moved the team to the Piedmont Triad area.
Puckett was forced to retire at age 35 due to loss of vision in one eye from a central retinal vein occlusion. The 1989 A.L. batting champion, he retired as the Twins' all-time leader in career hits, runs, doubles, and total bases. At the time of his retirement, his .318 career batting average was the highest by any right-handed American League batter since Joe DiMaggio. Puckett was the fourth baseball player during the 20th century to record 1,000 hits in his first five full calendar years in Major League Baseball, and was the second to record 2,000 hits during his first 10 full calendar years. He was elected to the Baseball Hall of Fame in 2001, his first year of eligibility.
2000s
The Twins dominated the Central Division in the first decade of the new century, winning the division in six of those ten years ('02, '03, '04, '06, '09 and '10), and nearly winning it in '08 as well. From 2001 to 2006, the Twins compiled the longest streak of consecutive winning seasons since moving to Minnesota.
Threatened with closure by league contraction, the 2002 team battled back to reach the American League Championship Series before being eliminated 4–1 by that year's World Series champion Anaheim Angels. The Twins have not won a playoff series since the 2002 ALDS against Oakland, despite the team winning several division championships in the decade.
In 2006, the Twins won the division on the last day of the regular season (the only day all season they held sole possession of first place) but lost to the Oakland Athletics in the ALDS. Ozzie Guillén coined a nickname for this squad, calling the Twins "little piranhas". The Twins players embraced the label, and in response the Twins front office started a "Piranha Night", with piranha finger puppets given out to the first 10,000 fans. Scoreboard operators sometimes played an animated sequence of piranhas munching, under that caption, when the Twins were scoring runs by playing "small ball", and stadium vendors sold T-shirts and hats advertising "The Little Piranhas". The Twins also had the AL MVP in Justin Morneau, the AL batting champion in Joe Mauer, and the AL Cy Young Award winner in Johan Santana.
In 2008, the Twins finished the regular season tied with the White Sox atop the AL Central, forcing a one-game playoff in Chicago to determine the division champion. The Twins lost that game and missed the playoffs. The game's location had been determined by a coin flip conducted in mid-September. This rule was changed for the start of the 2009 season, so that the site of any tiebreaker game would be determined by the winner of the regular-season head-to-head record between the teams involved.
In 2009, after playing .500 baseball for most of the season, the Twins won 17 of their last 21 games to tie the Detroit Tigers for the lead in the Central Division. The Twins were able to use the play-in game rule to their advantage, winning the AL Central at the end of the regular season by way of a 6–5 tiebreaker game that concluded with a 12th-inning walk-off hit to right field by Alexi Casilla that scored Carlos Gómez. However, they failed to advance to the American League Championship Series, losing the American League Division Series in three straight games to the eventual World Series champion New York Yankees. That year, Joe Mauer became only the second catcher in 33 years to win the AL MVP award: Iván Rodríguez won for the Texas Rangers in 1999, and before that, the last catcher to win an AL MVP was the New York Yankees' Thurman Munson in 1976.
2010 marked Minnesota's inaugural season at Target Field, where the Twins finished the regular season with a record of 94–68, clinching the AL Central Division title for the sixth time in nine years under manager Ron Gardenhire. New regular players included rookie Danny Valencia at third base, designated hitter Jim Thome, closer Matt Capps, infielder J. J. Hardy, and infielder Orlando Hudson. Late additions Brian Fuentes and Randy Flores filled relief pitching roles. On July 7, the team suffered a major blow when Justin Morneau sustained a concussion that knocked him out for the rest of the season. In the divisional series, the Twins lost to the Yankees in a three-game sweep for the second consecutive year. Following the season, Ron Gardenhire received AL Manager of the Year honors after finishing as runner-up in several prior years.
2017–present
In 2017, the Twins went 85–77, finishing second in the AL Central. Brian Dozier hit 34 home runs, Miguel Sanó, Byron Buxton, and Eddie Rosario all had breakout years, and Joe Mauer hit .305. The Twins made the playoffs, becoming the first team ever to lose 100 games one season and reach the playoffs the next. They lost to the Yankees in the wild card round.
The 2018 season did not go as well. The Twins went 78–84 and did not return to the postseason. Sanó and Buxton were injured for most of the year and were eventually both sent down to the minors, while long-time Twin Brian Dozier was traded at the deadline. One bright spot came at the end of the season, when hometown hero Joe Mauer returned to catcher (his original position) for his final game, ending his career with a signature double and a standing ovation. Another highlight was the team's two-game series against the Cleveland Indians in San Juan, Puerto Rico. After the season, manager Paul Molitor was fired, and free-agent signing Logan Morrison and long-time veteran Ervin Santana became free agents.
In 2019, the Twins clinched the AL Central Division for the first time since 2010, finishing the season with the second-most wins in franchise history at 101, one short of the 1965 season. The team combined for a total of 307 home runs, the most in MLB history for a single season, and its slugging prowess earned the team the nickname "the Bomba Squad". In the 2019 ALDS, the Twins' opponents were the New York Yankees, who finished one home run behind at 306, the only other team to break the 300-home-run mark. The Twins were swept again, extending their postseason losing streak to 16 games, dating back to the 2004 ALDS. On September 17, 2019, Miguel Sanó hit a 482-foot home run to make the Twins the first team in major league history to have five players with at least 30 home runs in a season.
Threats to move or disband the team
The quirks of the Hubert H. Humphrey Metrodome, including the turf floor and the white roof, gave the Twins a home-field advantage that helped them win the World Series in 1987 and 1991, at least in the opinion of their opponents. The Twins went 12–1 in postseason home games during those two seasons, with the 1987 and 1991 squads becoming the first two teams to sweep all four home games in a World Series. (The feat was repeated by the Arizona Diamondbacks in 2001.) Nevertheless, the Twins argued that the Metrodome was obsolete. Furthermore, they said sharing a stadium with the NFL's Minnesota Vikings, as they had been doing since their 1961 move to Minnesota, limited the team's revenue and made it difficult to sustain a top-notch, competitive team. The team was rumored to contemplate moving to New Jersey, Las Vegas, Portland, Oregon, the Greensboro/Winston-Salem, North Carolina area, and elsewhere in search of a more financially competitive market. In 2002, the team was nearly disbanded when Major League Baseball selected the Twins and the Montreal Expos (now the Washington Nationals franchise) for elimination due to their financial weakness. The impetus for league contraction diminished after a court decision forced the Twins to play out their lease on the Metrodome. However, Twins owner Carl Pohlad continued his efforts to move, pursuing litigation against the Metropolitan Stadium Commission and obtaining a state court ruling that his team was not obligated to play in the Metrodome after the 2006 season. This cleared the way for the Twins to move or disband before the 2007 season if a new deal was not reached.
Target Field
In response to the threatened loss of the Twins, the Minnesota private and public sectors negotiated and approved a financing package for a replacement stadium: a baseball-only, outdoor, natural-turf ballpark in the Warehouse District of downtown Minneapolis, owned by a new entity known as the Minnesota Ballpark Authority. Target Field was constructed at a cost of $544.4 million (including site acquisition and infrastructure), using the proceeds of a $392 million public bond offering based on a 0.15% sales tax in Hennepin County and private financing of $185 million provided by the Pohlad family. As part of the deal, the Twins also signed a 30-year lease of the new stadium, effectively guaranteeing the continuation of the team in Minnesota for a long time to come. Construction of the new field began in 2007 and was completed in December 2009, in time for the 2010 season. Commissioner Bud Selig, who earlier had threatened to disband the team, observed that without the new stadium the Twins could not have committed to signing their star player, catcher Joe Mauer, to an eight-year, $184 million contract extension. The first regular-season game at Target Field was played against the Boston Red Sox on April 12, 2010, with Mauer driving in two runs and going 3-for-5 to help the Twins defeat the Red Sox, 5–2.
On May 18, 2011, Target Field was named "The Best Place To Shop" by Street and Smith's SportsBusiness Journal at the magazine's 2011 Sports Business Awards Ceremony in New York City. It was also named "The Best Sports Stadium in North America" by ESPN The Magazine in a ranking that included over 120 different stadiums, ballparks and arenas from around North America.
In July 2014, Target Field hosted the 85th Major League Baseball All-Star Game and the Home Run Derby.
In June 2020, following protests over the murder of George Floyd, a statue of former owner Calvin Griffith was removed from Target Plaza outside of the stadium because of his history of racist comments.
Uniforms
Current
The Twins' white home uniform, first used in 2015, features the current "Twins" script (with an underline below "win") in navy outlined in red with Kasota gold drop shadows. Letters and numerals also take on the same color as the "Twins" script. The modern "Minnie and Paul" alternate logo (with the state of Minnesota in navy outlined in Kasota gold) appears on the left sleeve. Caps are in all-navy with the interlocking "TC" outlined in Kasota gold.
The Twins' red alternate home uniform, first used in 2016, features the "TC" insignia outlined in Kasota gold on the left chest. Letters and numerals are in navy outlined in white with Kasota gold drop shadows. The "Minnie and Paul" alternate logo appears on the left sleeve. The uniform is paired with a navy-brimmed red cap with the "TC" outlined in Kasota gold.
The Twins' navy alternate home uniform, first used in 2019, features the classic "Twins" script (with a tail underline accent after the letter "s") in red outlined in navy and Kasota gold. Letters and numerals also take on the same color as the "Twins" script. As with the home white uniforms, it is paired with the all-navy Kasota gold "TC" cap. The gold-trimmed "TC" insignia also appears on the left sleeve.
The Twins' powder blue alternate uniform, first used in 2020, is a modern buttoned version of the road uniform the team used from 1973 to 1986. The set contains the classic "Twins" script in red outlined in navy, along with red letters on the back and red numerals (both on the chest and on the back) outlined in navy. The "Minnie and Paul" alternate logo appears on the left sleeve. The uniform is paired with the primary all-navy "TC" cap minus the Kasota gold accents, which is also used on the helmets regardless of uniform.
The Twins' grey road uniform, first used in 2010, features the current "Minnesota" script (with an underline below "innesot") in red trimmed in navy. Letters are in navy while numerals (both on the chest and on the back) are in red trimmed in navy. The team's primary logo appears on the left sleeve. The uniform is paired with either the all-navy or the red-brimmed navy "TC" cap.
The Twins' navy alternate road uniform, first used in 2011, shares the same look as the regular road uniforms, but with a few differences. The "Minnesota" script is in red outlined in white, letters and chest numerals are in white outlined in red, and back numerals are in red outlined in white. Red piping is also added. The uniform is paired with either the all-navy or the red-brimmed navy "TC" cap.
Past uniforms
From 1961 to 1971 the Twins sported uniforms bearing the classic "Twins" script and numerals in navy outlined in red. They wore navy caps with an interlocking "TC" on the front; this was adopted because Griffith was well aware of the bitter rivalry between St. Paul and Minneapolis and didn't want to alienate fans in either city. The original "Minnie and Paul" alternate logo appears on the left sleeve of both the pinstriped white home uniform and grey road uniform.
For the 1972 season the Twins updated their uniforms. The color scheme on the "Twins" script and numerals were reversed, pinstripes were removed from the home uniform, and an updated "Minnie and Paul" roundel patch replaced the originals on the left sleeve.
In 1973 the Twins switched to polyester pullover uniforms, which included a powder blue road uniform. Chest numerals were added while a navy-brimmed red cap was used with the home uniform. The original "Minnie and Paul" logo returned to the left sleeve. Player names in red were added to the road uniform in 1977.
In 1987 the Twins updated their look. Home white uniforms brought back the pinstripes along with the modern-day "Twins" script. By this time, the franchise felt it was established enough in the area that it could put a stylized "M" on its cap without having fans in St. Paul think it stood for Minneapolis. The "TC" insignia adorned the left sleeve, later replaced by the modern "Minnie and Paul" alternate in 2002. Road grey uniforms, which also featured pinstripes, were emblazoned with "Minnesota" in red block letters outlined in navy, while the updated primary logo adorned the left sleeve. Both uniforms kept the red numerals trimmed in navy, but the color on the player names was changed to navy. In 1997, player names were added to the home uniform. Initially, both uniforms were paired with an all-navy cap featuring the underlined "M" in front, but in 2002, the "TC" cap was brought back as a home cap while the "M" cap was used on the road. The "M" cap was retired following the 2010 season, though the team continued to wear them as a throwback on special occasions.
For a few games during the 1997 season, the Twins wore red alternate uniforms, which featured navy piping and letters in white trimmed in navy. In that same year, the Twins also released a road navy alternate uniform, featuring red piping, "Minnesota" and player names in white block letters outlined in red, and red numerals outlined in white. The following season, the Twins replaced the red uniforms with a home navy alternate, which features the "Twins" script and back numerals in red outlined in white, and player names and chest numerals in white outlined in red. Both uniforms contained the "TC" (later modern "Minnie and Paul") and primary logo sleeve patches respectively. The Twins also brought back the navy-brimmed red cap for a few games with the home navy alternates. The road navy alternates remained in use until 2009, with the home navy version worn for the last time in the 2013 season.
The Twins also wore three other alternate uniforms in the past. In 2006, the Twins wore a sleeveless variation of their regular home uniforms with navy undershirts, which they wore until 2010. They also wore a buttoned version of their 1973–86 home uniforms in 2009, before giving way to the throwback off-white version of their 1961–71 home uniforms from 2010 to 2018.
Roster
Minnesota Twins all-time roster: A complete list of players who played in at least one game for the Twins franchise.
Minor league affiliates
The Minnesota Twins farm system consists of six minor league affiliates. With the invitation of the St. Paul Saints to join the Twins' farm system, they will have the closest MiLB affiliate of any team in baseball.
Achievements
Baseball Hall of Fame members
Molitor, Morris, and Winfield were all St. Paul natives who joined the Twins late in their careers and were warmly received as "hometown heroes", but were elected to the hall primarily on the basis of their tenures with other teams. Both Molitor and Winfield had their 3,000th hit with Minnesota, while Morris pitched a complete-game shutout for the Twins in game seven of the 1991 World Series. Molitor was the first player in history to hit a triple for his 3,000th hit.
Cronin, Goslin, Griffith, Harris, Johnson, Killebrew and Wynn are listed on the Washington Hall of Stars display at Nationals Park (previously they were listed at Robert F. Kennedy Stadium). So are Ossie Bluege, George Case, Joe Judge, George Selkirk, Roy Sievers, Cecil Travis, Mickey Vernon and Eddie Yost.
Ford C. Frick Award recipients
Team captains
3 Harmon Killebrew 1961–74
Twins Hall of Fame
Retired numbers
The Metrodome's upper deck in center and right fields was partly covered by a curtain containing banners of various titles won, and retired numbers. There was no acknowledgment of the Twins' prior championships in Washington and several Senator Hall of Famers, such as Walter Johnson, played in the days prior to numbers being used on uniforms. However, Killebrew played seven seasons as a Senator, including two full seasons as a regular prior to the move to Minnesota in 1961.
Prior to the addition of the banners, the Twins acknowledged their retired numbers on the Metrodome's outfield fence. Harmon Killebrew's #3 was the first to be displayed, as it was the only one the team had retired when they moved in. It was joined by Rod Carew's #29 in 1987, Tony Oliva's #6 in 1991, Kent Hrbek's #14 in 1995, and Kirby Puckett's #34 in 1997 before the Twins began hanging the banners to reduce capacity. The championships, meanwhile, were marked on the "Baggie" in right field.
In the Metrodome, the numbers ran in that order from left to right. In Target Field, they run from right to left, presumably to allow space for additional numbers in the future. The retired numbers also serve as entry points at Target Field: the center field gate is Gate No. 3, honoring Killebrew; the left-field gate is Gate No. 6, honoring Oliva; the home plate gate is Gate No. 14, for Hrbek; the right field gate is Gate No. 29, in tribute to Carew; and the plaza gate is Gate No. 34, honoring Puckett.
The numbers that have been retired hang within Target Field in front of the tower that serves as the Twins' executive offices in left field foul territory. The championships banners have been replaced by small pennants that fly on masts at the back of the left-field upper deck. Those pennants, along with the flags flying in the plaza behind right field, serve as a visual cue for the players, suggesting the wind direction and speed.
Jackie Robinson's number, 42, was retired by Major League Baseball on April 15, 1997, and formally honored by the Twins on May 23, 1997. Robinson's number was positioned to the left of the Twins numbers in both venues.
Awards
Team records
Team seasons
Radio and television
In 2007, the Twins took the broadcast rights in-house and created the Twins Radio Network (TRN). With that new network in place, the Twins secured a new metro flagship radio station in KSTP (AM 1500), replacing WCCO (AM 830), which had held broadcast rights for the Twins since the team moved to Minneapolis in 1961. For 2013, the Twins moved to FM radio on KTWN-FM 96.3 K-Twin, which is owned by the Pohlad family. The original radio voices of the Twins in 1961 were Ray Scott, Halsey Hall and Bob Wolff. After the first season, Herb Carneal replaced Wolff. Twins TV and radio broadcasts were originally sponsored by the Hamm's Brewing Company. In 2009, Treasure Island Resort & Casino became the first-ever naming-rights partner for the Twins Radio Network, making the commercial name of TRN the Treasure Island Baseball Network. In 2017, it was announced that WCCO would again become the Twins' flagship station starting in 2018, returning the team to its original station after 11 years.
Cory Provus is the current radio play-by-play announcer, having taken over in 2012 for longtime Twins voice John Gordon, who retired following the 2011 season. Former Twins outfielder Dan Gladden serves as color commentator.
TRN broadcasts are originated from the studios at Minnesota News Network and Minnesota Farm Networks. Kris Atteberry hosts the pre-game show, the "Lineup Card" and the "Post-game Download" from those studios except when filling in for Provus or Gladden when they are on vacation.
On April 1, 2007, Herb Carneal, the radio voice of the Twins for all but one year of their existence, died at his home in Minnetonka after a long battle with several illnesses. Carneal is in the broadcasters' wing of the Baseball Hall of Fame.
The television rights are held by Bally Sports North with Dick Bremer as the play-by-play announcer and former Twin, 2011 National Baseball Hall of Fame inductee, Bert Blyleven as color analyst. They are sometimes joined by Roy Smalley, Justin Morneau and Jack Morris.
Bob Casey was the Twins' first public-address announcer, serving from 1961 until his death in 2005. He was well known for his unique delivery and his signature announcements of "No smoking in the Metrodome, either go outside or quit!" (or "go back to Boston", etc.) and "Batting 3rd, the center-fielder, No. 34, Kirby Puckett!!!", and for asking fans not to "throw anything or anybody" onto the field.
Community activities
Minnesota Twins Community Fund – Play Ball! Minnesota
Team and franchise traditions
Fans wave a Homer Hanky to rally the team during play-offs and other crucial games. The Homer Hanky was created by Terrie Robbins of the Star Tribune newspaper in the Twin Cities in 1987. It was her idea to originally give away 60,000 inaugural Homer Hankies. That year, over 2.3 million Homer Hankies were distributed.
The party atmosphere of the Twins clubhouse after a win is well known, with the team's players unwinding with loud rock music (usually the choice of the winning pitcher) and video games.
The club has several hazing rituals, such as requiring the most junior relief pitcher on the team to carry water and snacks to the bullpen in a brightly colored small child's backpack (Barbie in 2005, SpongeBob SquarePants in 2006, Hello Kitty in 2007, Disney Princess and Tinkerbell in 2009, Chewbacca and Darth Vader in 2010), and many of its players, both past and present, are notorious pranksters. For example, Bert Blyleven earned the nickname "The Frying Dutchman" for his ability to pull the "hotfoot" – which entails crawling under the bench in the dugout and lighting a teammate's shoelaces on fire.
Minnesota Twins in popular culture
In Little Big League, the Minnesota Twins are inherited by a 12-year-old boy, who goes on to manage the team.
In Terminator 2: Judgment Day, the son of Miles Dyson wears a Minnesota Twins cap. The movie was released in 1991, the year the Twins won the World Series.
In Major League: Back to the Minors, the character Roger Dorn, from previous Major League movies, is owner of the Minnesota Twins.
In the 1997 film McHale's Navy, Lt. Commander Quinton McHale wears a Minnesota Twins cap.
Notes
References
Further reading
External links
Major League Baseball teams
Grapefruit League
Sports in Minneapolis
Professional baseball teams in Minnesota
1901 establishments in Washington, D.C.
Mach number

Mach number (M or Ma) is a dimensionless quantity in fluid dynamics representing the ratio of flow velocity past a boundary to the local speed of sound.
M = u/c

where:
M is the local Mach number,
u is the local flow velocity with respect to the boundaries (either internal, such as an object immersed in the flow, or external, like a channel), and
c is the speed of sound in the medium, which in air varies with the square root of the thermodynamic temperature.
By definition, at Mach 1, the local flow velocity u is equal to the speed of sound. At Mach 0.65, u is 65% of the speed of sound (subsonic), and, at Mach 1.35, u is 35% faster than the speed of sound (supersonic). Pilots of high-altitude aerospace vehicles use flight Mach number to express a vehicle's true airspeed, but the flow field around a vehicle varies in three dimensions, with corresponding variations in local Mach number.
The local speed of sound, and hence the Mach number, depends on the temperature of the surrounding gas. The Mach number is primarily used to determine the approximation with which a flow can be treated as an incompressible flow. The medium can be a gas or a liquid. The boundary can be traveling in the medium, or it can be stationary while the medium flows along it, or they can both be moving, with different velocities: what matters is their relative velocity with respect to each other. The boundary can be the boundary of an object immersed in the medium, or of a channel such as a nozzle, diffuser or wind tunnel channeling the medium. As the Mach number is defined as the ratio of two speeds, it is a dimensionless number. If M < 0.2–0.3 and the flow is quasi-steady and isothermal, compressibility effects will be small and simplified incompressible flow equations can be used.
Etymology
The Mach number is named after the Moravian physicist and philosopher Ernst Mach, and is a designation proposed by aeronautical engineer Jakob Ackeret in 1929. As the Mach number is a dimensionless quantity rather than a unit of measure, the number comes after the unit; the second Mach number is Mach 2 instead of 2 Mach (or Machs). This is somewhat reminiscent of the early modern ocean sounding unit mark (a synonym for fathom), which was also unit-first, and may have influenced the use of the term Mach. In the decade preceding faster-than-sound human flight, aeronautical engineers referred to the speed of sound as Mach's number, never Mach 1.
Overview
Mach number is a measure of the compressibility characteristics of fluid flow: the fluid (air) behaves under the influence of compressibility in a similar manner at a given Mach number, regardless of other variables. As modeled in the International Standard Atmosphere, in dry air at mean sea level and a standard temperature of 15 °C (59 °F), the speed of sound is 340.3 m/s (761.2 mph). The speed of sound is not a constant; in a gas, it increases proportionally to the square root of the absolute temperature, and since atmospheric temperature generally decreases with increasing altitude between sea level and 11,000 m (36,089 ft), the speed of sound also decreases. For example, the standard atmosphere model lapses temperature to −56.5 °C (−69.7 °F) at 11,000 m altitude, with a corresponding speed of sound (Mach 1) of 295.0 m/s (659.9 mph), 86.7% of the sea level value.
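These values follow directly from the ideal-gas relation c = sqrt(γRT). A minimal Python sketch, using the standard γ and specific gas constant for dry air and the ISA temperatures quoted above (the function and variable names are our own):

```python
import math

GAMMA = 1.4      # ratio of specific heats for dry air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(T):
    """Speed of sound in dry air (m/s) at absolute temperature T (kelvin)."""
    return math.sqrt(GAMMA * R_AIR * T)

c_sea_level = speed_of_sound(288.15)   # ISA sea level, 15 deg C
c_tropopause = speed_of_sound(216.65)  # ISA at 11,000 m, -56.5 deg C
ratio = c_tropopause / c_sea_level     # matches the 86.7% figure above
```

The square-root dependence is why a roughly 25% drop in absolute temperature costs only about 13% of the speed of sound.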
Classification of Mach regimes
While the terms subsonic and supersonic, in the purest sense, refer to speeds below and above the local speed of sound respectively, aerodynamicists often use the same terms to talk about particular ranges of Mach values. This occurs because of the presence of a transonic regime around flight (free stream) M = 1 where approximations of the Navier-Stokes equations used for subsonic design no longer apply; the simplest explanation is that the flow around an airframe locally begins to exceed M = 1 even though the free stream Mach number is below this value.
Meanwhile, the supersonic regime is usually used to talk about the set of Mach numbers for which linearised theory may be used, where for example the (air) flow is not chemically reacting, and where heat-transfer between air and vehicle may be reasonably neglected in calculations.
In the following table, the regimes or ranges of Mach values are referred to, and not the pure meanings of the words subsonic and supersonic.
Generally, NASA defines high hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Aircraft operating in this regime include the Space Shuttle and various space planes in development.
High-speed flow around objects
Flight can be roughly classified in six categories:
For comparison: the required speed for low Earth orbit is approximately 7.5 km/s = Mach 25.4 in air at high altitudes.
At transonic speeds, the flow field around the object includes both sub- and supersonic parts. The transonic period begins when first zones of M > 1 flow appear around the object. In case of an airfoil (such as an aircraft's wing), this typically happens above the wing. Supersonic flow can decelerate back to subsonic only in a normal shock; this typically happens before the trailing edge. (Fig.1a)
As the speed increases, the zone of M > 1 flow increases towards both leading and trailing edges. As M = 1 is reached and passed, the normal shock reaches the trailing edge and becomes a weak oblique shock: the flow decelerates over the shock, but remains supersonic. A normal shock is created ahead of the object, and the only subsonic zone in the flow field is a small area around the object's leading edge. (Fig.1b)
Fig. 1. Mach number in transonic airflow around an airfoil; M < 1 (a) and M > 1 (b).
When an aircraft exceeds Mach 1 (i.e. the sound barrier), a large pressure difference is created just in front of the aircraft. This abrupt pressure difference, called a shock wave, spreads backward and outward from the aircraft in a cone shape (a so-called Mach cone). It is this shock wave that causes the sonic boom heard as a fast moving aircraft travels overhead. A person inside the aircraft will not hear this. The higher the speed, the more narrow the cone; at just over M = 1 it is hardly a cone at all, but closer to a slightly concave plane.
At fully supersonic speed, the shock wave starts to take its cone shape and flow is either completely supersonic, or (in case of a blunt object), only a very small subsonic flow area remains between the object's nose and the shock wave it creates ahead of itself. (In the case of a sharp object, there is no air between the nose and the shock wave: the shock wave starts from the nose.)
As the Mach number increases, so does the strength of the shock wave and the Mach cone becomes increasingly narrow. As the fluid flow crosses the shock wave, its speed is reduced and temperature, pressure, and density increase. The stronger the shock, the greater the changes. At high enough Mach numbers the temperature increases so much over the shock that ionization and dissociation of gas molecules behind the shock wave begin. Such flows are called hypersonic.
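The property jumps described above can be quantified with the standard normal-shock relations for a calorically perfect gas. The sketch below is an illustration under that assumption (γ = 1.4 for air), returning the downstream Mach number and the static pressure ratio across the shock:

```python
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and static pressure ratio p2/p1 across a
    normal shock for a calorically perfect gas (valid for M1 > 1)."""
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2)
                   / (gamma * M1**2 - 0.5 * (gamma - 1)))
    p2_p1 = 1 + (2 * gamma / (gamma + 1)) * (M1**2 - 1)
    return M2, p2_p1
```

At M1 = 2 in air, for example, these relations give a subsonic downstream Mach number of about 0.577 and a pressure jump of 4.5, illustrating how the flow decelerates and compresses through the shock, and how the jumps grow with Mach number.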
It is clear that any object traveling at hypersonic speeds will likewise be exposed to the same extreme temperatures as the gas behind the nose shock wave, and hence choice of heat-resistant materials becomes important.
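The property jumps across a normal shock described above can be illustrated with the standard normal-shock (Rankine–Hugoniot) relations for a calorically perfect gas. This is a sketch for illustration, not part of the original article, assuming γ = 1.4 for air:

```python
GAMMA = 1.4  # ratio of specific heats for air (assumption for this sketch)

def normal_shock(m1: float):
    """Return (M2, p2/p1, T2/T1, rho2/rho1) downstream of a normal shock
    with upstream Mach number m1, using the standard perfect-gas relations."""
    if m1 <= 1.0:
        raise ValueError("a normal shock requires supersonic upstream flow (M1 > 1)")
    # Downstream Mach number is always subsonic:
    m2_sq = (1 + 0.5 * (GAMMA - 1) * m1**2) / (GAMMA * m1**2 - 0.5 * (GAMMA - 1))
    # Static pressure and density rise across the shock:
    p_ratio = 1 + 2 * GAMMA / (GAMMA + 1) * (m1**2 - 1)
    rho_ratio = ((GAMMA + 1) * m1**2) / ((GAMMA - 1) * m1**2 + 2)
    # Ideal gas: T2/T1 = (p2/p1) / (rho2/rho1)
    t_ratio = p_ratio / rho_ratio
    return m2_sq ** 0.5, p_ratio, t_ratio, rho_ratio

# Example: for M1 = 2 the flow drops to M2 ≈ 0.577 while pressure rises 4.5x
# and temperature about 1.69x, matching standard shock tables.
print(normal_shock(2.0))
```

The stronger the shock (larger M1), the larger all three ratios become, which is why the heating problem noted above dominates at hypersonic speeds.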
High-speed flow in a channel
As a flow in a channel becomes supersonic, one significant change takes place. The conservation of mass flow rate leads one to expect that contracting the flow channel would increase the flow speed (i.e. making the channel narrower results in faster air flow) and at subsonic speeds this holds true. However, once the flow becomes supersonic, the relationship of flow area and speed is reversed: expanding the channel actually increases the speed.
The obvious result is that in order to accelerate a flow to supersonic, one needs a convergent-divergent nozzle, where the converging section accelerates the flow to sonic speeds, and the diverging section continues the acceleration. Such nozzles are called de Laval nozzles and in extreme cases they are able to reach hypersonic speeds ( at 20 °C).
An aircraft Machmeter or electronic flight information system (EFIS) can display Mach number derived from stagnation pressure (pitot tube) and static pressure.
Calculation
When the speed of sound is known, the Mach number at which an aircraft is flying can be calculated by

M = u / c
where:
M is the Mach number
u is velocity of the moving aircraft and
c is the speed of sound at the given altitude (more precisely, at the given temperature)
and the speed of sound varies with the thermodynamic temperature as:

c = sqrt(γ · R · T)
where:
γ is the ratio of the specific heat of a gas at constant pressure to the specific heat at constant volume (1.4 for air)
R is the specific gas constant for air.
T is the static air temperature.
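The two relations above can be combined into a short sketch. The values of γ and R are the standard ones for dry air; the test temperature and airspeed are assumptions chosen for illustration:

```python
import math

GAMMA = 1.4       # ratio of specific heats for air
R_AIR = 287.058   # specific gas constant for air, J/(kg·K)

def speed_of_sound(temp_kelvin: float) -> float:
    """Speed of sound c = sqrt(gamma * R * T) for an ideal gas."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

def mach_number(true_airspeed: float, temp_kelvin: float) -> float:
    """Mach number M = u / c at the given static air temperature."""
    return true_airspeed / speed_of_sound(temp_kelvin)

# At 15 °C (288.15 K) the speed of sound is about 340.3 m/s,
# so an aircraft flying at 250 m/s true airspeed is at roughly M 0.73.
print(speed_of_sound(288.15))        # ≈ 340.3
print(mach_number(250.0, 288.15))    # ≈ 0.73
```

Note that the Mach number depends on temperature only, not directly on altitude; altitude matters because temperature varies with it.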
If the speed of sound is not known, Mach number may be determined by measuring the various air pressures (static and dynamic) and using the following formula that is derived from Bernoulli's equation for Mach numbers less than 1.0. Assuming air to be an ideal gas, the formula to compute Mach number in a subsonic compressible flow is:

M = sqrt( (2 / (γ − 1)) · [ (qc/p + 1)^((γ − 1)/γ) − 1 ] )
where:
qc is impact pressure (dynamic pressure) and
p is static pressure
γ is the ratio of the specific heat of a gas at constant pressure to the specific heat at constant volume (1.4 for air)
R is the specific gas constant for air.
The formula to compute Mach number in a supersonic compressible flow is derived from the Rayleigh supersonic pitot equation:

qc/p = [ ((γ + 1)² · M²) / (4γM² − 2(γ − 1)) ]^(γ/(γ − 1)) · (1 − γ + 2γM²) / (γ + 1) − 1
Calculating Mach number from pitot tube pressure
Mach number is a function of temperature and true airspeed.
Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature.
Assuming air to be an ideal gas, the formula to compute Mach number in a subsonic compressible flow is found from Bernoulli's equation for M < 1 (above):

M = sqrt( 5 · [ (qc/p + 1)^(2/7) − 1 ] )
The formula to compute Mach number in a supersonic compressible flow can be found from the Rayleigh supersonic pitot equation (above) using parameters for air:

M ≈ 0.88128485 · sqrt( (qc/p + 1) · (1 − 1/(7M²))^2.5 )
where:
qc is the dynamic pressure measured behind a normal shock.
As can be seen, M appears on both sides of the equation, and for practical purposes a root-finding algorithm must be used for a numerical solution (the equation's solution is a root of a 7th-order polynomial in M2 and, though some of these may be solved explicitly, the Abel–Ruffini theorem guarantees that there exists no general form for the roots of these polynomials). It is first determined whether M is indeed greater than 1.0 by calculating M from the subsonic equation. If M is greater than 1.0 at that point, then the value of M from the subsonic equation is used as the initial condition for fixed point iteration of the supersonic equation, which usually converges very rapidly. Alternatively, Newton's method can also be used.
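The procedure described above can be sketched as follows. This is a minimal illustration assuming γ = 1.4 for air: the subsonic formula gives a first estimate, and if it exceeds 1.0 the supersonic (Rayleigh) form is iterated to a fixed point:

```python
import math

def mach_subsonic(qc: float, p: float) -> float:
    """Subsonic compressible flow (air): M = sqrt(5 * ((qc/p + 1)**(2/7) - 1))."""
    return math.sqrt(5.0 * ((qc / p + 1.0) ** (2.0 / 7.0) - 1.0))

def mach_from_pitot(qc: float, p: float, tol: float = 1e-10) -> float:
    """Mach number from impact pressure qc and static pressure p (air, gamma = 1.4)."""
    m = mach_subsonic(qc, p)
    if m <= 1.0:
        return m  # subsonic formula is valid as-is
    # Fixed-point iteration of the supersonic (Rayleigh pitot) form:
    #   M = 0.88128485 * sqrt((qc/p + 1) * (1 - 1/(7 M^2))**2.5)
    # using the subsonic estimate as the initial condition.
    while True:
        m_next = 0.88128485 * math.sqrt(
            (qc / p + 1.0) * (1.0 - 1.0 / (7.0 * m * m)) ** 2.5
        )
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
```

In practice the iteration converges in a handful of steps; Newton's method would also work but is rarely necessary here.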
See also
Notes
External links
Gas Dynamics Toolbox Calculate Mach number and normal shock wave parameters for mixtures of perfect and imperfect gases.
NASA's page on Mach Number Interactive calculator for Mach number.
NewByte standard atmosphere calculator and speed converter
Aerodynamics
Airspeed
Dimensionless numbers of fluid mechanics
Fluid dynamics |
20053 | https://en.wikipedia.org/wiki/March%208 | March 8 |
Events
Pre-1600
1010 – Ferdowsi completes his epic poem Shahnameh.
1126 – Following the death of his mother, queen Urraca of León, Alfonso VII is proclaimed king of León.
1262 – Battle of Hausbergen between bourgeois militias and the army of the bishop of Strasbourg.
1558 – The city of Pori is founded by Duke John on the shores of the Gulf of Bothnia.
1601–1900
1658 – Treaty of Roskilde: After a devastating defeat in the Northern Wars (1655–1661), Frederick III, the King of Denmark–Norway is forced to give up nearly half his territory to Sweden.
1702 – Queen Anne, the younger sister of Mary II, becomes Queen regnant of England, Scotland, and Ireland.
1722 – The Safavid Empire of Iran is defeated by an army from Afghanistan at the Battle of Gulnabad.
1736 – Nader Shah, founder of the Afsharid dynasty, is crowned Shah of Iran.
1775 – An anonymous writer, thought by some to be Thomas Paine, publishes "African Slavery in America", the first article in the American colonies calling for the emancipation of slaves and the abolition of slavery.
1782 – Gnadenhutten massacre: Ninety-six Native Americans in Gnadenhutten, Ohio, who had converted to Christianity, are killed by Pennsylvania militiamen in retaliation for raids carried out by other Indian tribes.
1801 – War of the Second Coalition: At the Battle of Abukir, a British force under Sir Ralph Abercromby lands in Egypt with the aim of ending the French campaign in Egypt and Syria.
1817 – The New York Stock Exchange is founded.
1844 – King Oscar I ascends to the thrones of Sweden and Norway.
1844 – The Althing, the parliament of Iceland, is reopened after 45 years of closure.
1868 – Sakai incident: Japanese samurai kill 11 French sailors in the port of Sakai, Osaka.
1901–present
1910 – French aviator Raymonde de Laroche becomes the first woman to receive a pilot's license.
1916 – World War I: A British force unsuccessfully attempts to relieve the siege of Kut (present-day Iraq) in the Battle of Dujaila.
1917 – International Women's Day protests in Petrograd mark the beginning of the February Revolution (February 23 in the Julian calendar).
1917 – The United States Senate votes to limit filibusters by adopting the cloture rule.
1921 – Spanish Prime Minister Eduardo Dato Iradier is assassinated while on his way home from the parliament building in Madrid.
1924 – A mine disaster kills 172 coal miners near Castle Gate, Utah.
1936 – Daytona Beach and Road Course holds its first oval stock car race.
1937 – Spanish Civil War: The Battle of Guadalajara begins.
1942 – World War II: The Dutch East Indies surrender Java to the Imperial Japanese Army.
1942 – World War II: Imperial Japanese Army forces capture Rangoon, Burma, from the British.
1963 – The Ba'ath Party comes to power in Syria in a coup d'état.
1966 – Nelson's Pillar in Dublin, Ireland, is destroyed by a bomb.
1979 – Philips demonstrates the compact disc publicly for the first time.
1983 – Cold War: While addressing a convention of Evangelicals, U.S. President Ronald Reagan labels the Soviet Union an "evil empire".
1985 – A supposed failed assassination attempt on Islamic cleric Sayyed Mohammad Hussein Fadlallah in Beirut, Lebanon kills at least 56 and injures 180 others.
2004 – A new constitution is signed by Iraq's Governing Council.
2014 – In one of aviation's greatest mysteries, Malaysia Airlines Flight 370, carrying a total of 239 people, disappears en route from Kuala Lumpur to Beijing. The fate of the flight remains unknown.
2017 – The Azure Window, a natural arch on the Maltese island of Gozo, collapses in stormy weather.
2018 – The first Aurat March (a social/political demonstration) is held on International Women's Day in Karachi, Pakistan. It has since been held annually across Pakistan, and the feminist slogan Mera Jism Meri Marzi (My body, my choice), demanding women's right to bodily autonomy and an end to gender-based violence, came into vogue in Pakistan.
2021 – International Women's Day marches in Mexico become violent with 62 police officers and 19 civilians injured in Mexico City alone.
Births
Pre-1600
1495 – John of God, Portuguese friar and saint (d. 1550)
1601–1900
1712 – John Fothergill, English physician and botanist (d. 1780)
1714 – Carl Philipp Emanuel Bach, German pianist and composer (d. 1788)
1726 – Richard Howe, 1st Earl Howe, English admiral and politician, Treasurer of the Navy (d. 1799)
1746 – André Michaux, French botanist and explorer (d. 1802)
1748 – William V, Prince of Orange (d. 1806)
1761 – Jan Potocki, Polish ethnologist, historian, linguist, and author (d. 1815)
1799 – Simon Cameron, American journalist and politician, United States Secretary of War (d. 1889)
1804 – Alvan Clark, American astronomer and optician (d. 1887)
1822 – Ignacy Łukasiewicz, Polish inventor and businessman, invented the Kerosene lamp (d. 1882)
1827 – Wilhelm Bleek, German linguist and anthropologist (d. 1875)
1830 – João de Deus, Portuguese poet and educator (d. 1896)
1836 – Harriet Samuel, English businesswoman and founder of the jewellery retailer H. Samuel (d. 1908)
1841 – Oliver Wendell Holmes Jr., American lawyer and jurist (d. 1935)
1851 – Frank Avery Hutchins, American librarian and educator (d. 1914)
1856 – Bramwell Booth, English 2nd General of The Salvation Army (d. 1929)
1856 – Colin Campbell Cooper, American painter and academic (d. 1937)
1859 – Kenneth Grahame, British author (d. 1932)
1865 – Frederic Goudy, American type designer (d. 1947)
1879 – Otto Hahn, German chemist and academic, Nobel Prize laureate (d. 1968)
1886 – Edward Calvin Kendall, American chemist and academic, Nobel Prize laureate (d. 1972)
1892 – Juana de Ibarbourou, Uruguayan poet and author (d. 1979)
1896 – Charlotte Whitton, Canadian journalist and politician, 46th Mayor of Ottawa (d. 1975)
1901–present
1902 – Louise Beavers, American actress and singer (d. 1962)
1902 – Jennings Randolph, American journalist and politician (d. 1998)
1907 – Konstantinos Karamanlis, Greek lawyer and politician, President of Greece (d. 1998)
1909 – Beatrice Shilling, English motorcycle racer and engineer (d. 1990)
1910 – Claire Trevor, American actress (d. 2000)
1911 – Alan Hovhaness, Armenian-American pianist and composer (d. 2000)
1912 – Preston Smith, American businessman and politician, Governor of Texas (d. 2003)
1912 – Meldrim Thomson Jr., American publisher and politician, Governor of New Hampshire (d. 2001)
1914 – Yakov Borisovich Zel'dovich, Belarusian-Russian physicist and astronomer (d. 1987)
1918 – Eileen Herlie, Scottish-American actress (d. 2008)
1921 – Alan Hale Jr., American actor and restaurateur (d. 1990)
1922 – Ralph H. Baer, German-American video game designer, created the Magnavox Odyssey (d. 2014)
1922 – Cyd Charisse, American actress and dancer (d. 2008)
1922 – Carl Furillo, American baseball player (d. 1989)
1922 – Shigeru Mizuki, Japanese author and illustrator (d. 2015)
1924 – Anthony Caro, English sculptor and illustrator (d. 2013)
1924 – Sean McClory, Irish-American actor and director (d. 2003)
1924 – Addie L. Wyatt, American civil rights activist and labor leader (d. 2012)
1925 – Warren Bennis, American scholar, author, and academic (d. 2014)
1926 – Francisco Rabal, Spanish actor, director, and screenwriter (d. 2001)
1927 – Ramon Revilla Sr., Filipino actor and politician (d. 2020)
1930 – Bob Grim, American baseball player (d. 1996)
1930 – Douglas Hurd, English politician
1931 – John McPhee, American author and educator
1931 – Neil Postman, American author and social critic (d. 2003)
1931 – Gerald Potterton, English-Canadian animator, director, and producer
1934 – Marv Breeding, American baseball player and scout (d. 2006)
1935 – George Coleman, American saxophonist, composer, and bandleader
1936 – Sue Ane Langdon, American actress and singer
1937 – Richard Fariña, American singer-songwriter and author (d. 1966)
1937 – Juvénal Habyarimana, Rwandan politician, President of Rwanda (d. 1994)
1939 – Jim Bouton, American baseball player and journalist (d. 2019)
1939 – Lynn Seymour, Canadian ballerina and choreographer
1939 – Lidiya Skoblikova, Russian speed skater and coach
1939 – Robert Tear, Welsh tenor and conductor (d. 2011)
1941 – Norman Stone, British historian, author, and academic (d. 2019)
1942 – Dick Allen, American baseball player and tenor (d. 2020)
1942 – Ann Packer, English sprinter, hurdler, and long jumper
1943 – Susan Clark, Canadian actress and producer
1943 – Lynn Redgrave, English-American actress and singer (d. 2010)
1944 – Sergey Nikitin, Russian singer-songwriter and guitarist
1945 – Micky Dolenz, American singer-songwriter and actor
1945 – Anselm Kiefer, German painter and sculptor
1946 – Randy Meisner, American singer-songwriter and bass player
1947 – Carole Bayer Sager, American singer-songwriter and painter
1947 – Michael S. Hart, American author, founded Project Gutenberg (d. 2011)
1948 – Mel Galley, English rock singer-songwriter and guitarist (d. 2008)
1948 – Jonathan Sacks, English rabbi, philosopher, and scholar (d. 2020)
1949 – Teofilo Cubillas, Peruvian footballer
1951 – Dianne Walker, American tap dancer
1953 – Jim Rice, American baseball player, coach, and sportscaster
1954 – Steve James, American documentary filmmaker
1954 – David Wilkie, Sri Lankan-Scottish swimmer
1956 – Laurie Cunningham, English footballer (d. 1989)
1956 – David Malpass, American economist and government official
1957 – Clive Burr, English rock drummer (d. 2013)
1957 – William Edward Childs, American pianist and composer
1957 – Bob Stoddard, American baseball player
1958 – Gary Numan, English singer-songwriter, guitarist, and producer
1959 – Aidan Quinn, Irish-American actor
1960 – Irek Mukhamedov, Russian ballet dancer
1961 – Camryn Manheim, American actress
1961 – Larry Murphy, Canadian ice hockey player
1965 – Kenny Smith, American basketball player and sportscaster
1966 – Greg Barker, Baron Barker of Battle, English politician
1968 – Michael Bartels, German race car driver
1970 – Jason Elam, American football player
1972 – Lena Sundström, Swedish journalist and author
1976 – Juan Encarnación, Dominican baseball player
1976 – Freddie Prinze Jr., American actor, producer, and screenwriter
1977 – James Van Der Beek, American actor
1977 – Johann Vogel, Swiss footballer
1982 – Leonidas Kampantais, Greek footballer
1983 – André Santos, Brazilian footballer
1983 – Mark Worrell, American baseball player
1984 – Ross Taylor, New Zealand cricketer
1985 – Maria Ohisalo, Finnish politician and researcher
1990 – Asier Illarramendi, Spanish footballer
1990 – Petra Kvitová, Czech tennis player
1991 – Tom English, Australian rugby player
1994 – Claire Emslie, Scottish footballer
1996 – Matthew Hammelmann, Australian rules footballer
1997 – Tijana Bošković, Serbian volleyball player
1998 – Tali Darsigny, Canadian weightlifter
Deaths
Pre-1600
1126 – Urraca of León and Castile (b. 1079)
1137 – Adela of Normandy, by marriage countess of Blois (b. c. 1067)
1144 – Pope Celestine II
1403 – Bayezid I, Ottoman sultan (b. 1360)
1466 – Francesco I Sforza, Duke of Milan (b. 1401)
1550 – John of God, Portuguese friar and saint (b. 1495)
1601–1900
1619 – Veit Bach, German baker and miller
1641 – Xu Xiake, Chinese geographer and explorer (b. 1587)
1702 – William III of England (b. 1650)
1717 – Abraham Darby I, English blacksmith (b. 1678)
1723 – Christopher Wren, English architect, designed St. Paul's Cathedral (b. 1632)
1844 – Charles XIV John of Sweden (b. 1763)
1869 – Hector Berlioz, French composer, conductor, and critic (b. 1803)
1872 – Priscilla Susan Bury, British botanist (b. 1799)
1872 – Cornelius Krieghoff, Dutch-Canadian painter (b. 1815)
1874 – Millard Fillmore, American lawyer and politician, 13th President of the United States (b. 1800)
1887 – Henry Ward Beecher, American minister and activist (b. 1813)
1887 – James Buchanan Eads, American engineer, designed the Eads Bridge (b. 1820)
1889 – John Ericsson, Swedish-American engineer (b. 1803)
1901–present
1917 – Ferdinand von Zeppelin, German general and businessman (b. 1838)
1923 – Johannes Diderik van der Waals, Dutch physicist and academic, Nobel Prize laureate (b. 1837)
1930 – William Howard Taft, American politician, 27th President of the United States (b. 1857)
1930 – Edward Terry Sanford, American jurist and politician, United States Assistant Attorney General (b. 1865)
1932 – Minna Craucher, Finnish socialite and spy (b. 1891)
1937 – Howie Morenz, Canadian ice hockey player (b. 1902)
1941 – Sherwood Anderson, American novelist and short story writer (b. 1876)
1942 – José Raúl Capablanca, Cuban chess player (b. 1888)
1944 – Fredy Hirsch, German Jewish athlete who helped thousands of Jewish children in the Holocaust (b. 1916)
1948 – Hulusi Behçet, Turkish dermatologist and scientist (b. 1889)
1957 – Othmar Schoeck, Swiss composer and conductor (b. 1886)
1961 – Thomas Beecham, English conductor and composer (b. 1879)
1971 – Harold Lloyd, American actor, director, and producer (b. 1893)
1973 – Ron "Pigpen" McKernan, American keyboard player and songwriter (b. 1945)
1975 – George Stevens, American director, producer, and screenwriter (b. 1904)
1983 – Alan Lennox-Boyd, 1st Viscount Boyd of Merton, English lieutenant and politician (b. 1904)
1983 – William Walton, English composer (b. 1902)
1993 – Billy Eckstine, American trumpet player (b. 1914)
1996 – Jack Churchill, British colonel (b. 1906)
1998 – Ray Nitschke, American football player (b. 1936)
1999 – Adolfo Bioy Casares, Argentinian journalist and author (b. 1914)
1999 – Peggy Cass, American actress and comedian (b. 1924)
1999 – Joe DiMaggio, American baseball player and coach (b. 1914)
2003 – Adam Faith, English singer (b. 1940)
2003 – Karen Morley, American actress (b. 1909)
2004 – Muhammad Zaidan, Syrian terrorist, founded the Palestine Liberation Front
2005 – César Lattes, Brazilian physicist and academic (b. 1924)
2005 – Aslan Maskhadov, Chechen commander and politician, President of the Chechen Republic of Ichkeria (b. 1951)
2007 – John Inman, English actor (b. 1935)
2007 – John Vukovich, American baseball player and coach (b. 1947)
2009 – Hank Locklin, American singer-songwriter and guitarist (b. 1918)
2012 – Simin Daneshvar, Iranian author and academic (b. 1921)
2013 – John O'Connell, Irish politician, Irish Minister of Health (b. 1927)
2013 – Ewald-Heinrich von Kleist-Schmenzin, German soldier and publisher (b. 1922)
2014 – Leo Bretholz, Austrian-American Holocaust survivor and author (b. 1921)
2014 – William Guarnere, American sergeant (b. 1923)
2015 – Sam Simon, American director, producer, and screenwriter (b. 1955)
2016 – George Martin, English composer, conductor, and producer (b. 1926)
2018 – Kate Wilhelm, American author (b. 1928)
2019 – Marshall Brodien, American actor (b. 1934)
2019 – Cedrick Hardman, American football player and actor (b. 1948)
2020 – Max von Sydow, Swedish actor (b. 1929)
Holidays and observances
Christian feast day:
Edward King (Church of England)
Felix of Burgundy
John of God
Philemon the actor
March 8 (Eastern Orthodox liturgics)
International Women's Day, and its related observances:
International Women's Collaboration Brew Day
References
External links
BBC: On This Day
Historical Events on March 8
Today in Canadian History
Days of the year
March |
20054 | https://en.wikipedia.org/wiki/March%209 | March 9 |
Events
Pre-1600
141 BC – Liu Che, posthumously known as Emperor Wu of Han, assumes the throne over the Han dynasty of China.
1009 – First known mention of Lithuania, in the annals of the monastery of Quedlinburg.
1226 – Khwarazmian sultan Jalal ad-Din conquers the Georgian capital of Tbilisi.
1230 – Bulgarian Tsar Ivan Asen II defeats Theodore of Epirus in the Battle of Klokotnitsa.
1500 – The fleet of Pedro Álvares Cabral leaves Lisbon for the Indies. The fleet will discover Brazil which lies within boundaries granted to Portugal in the Treaty of Tordesillas.
1601–1900
1701 – Safavid troops retreat from Basra, ending a three-year occupation.
1765 – After a campaign by the writer Voltaire, judges in Paris posthumously exonerate Jean Calas of murdering his son. Calas had been tortured and executed in 1762 on the charge, though his son may have actually committed suicide.
1776 – The Wealth of Nations by Scottish economist and philosopher Adam Smith is published.
1796 – Napoléon Bonaparte marries his first wife, Joséphine de Beauharnais.
1811 – Paraguayan forces defeat Manuel Belgrano at the Battle of Tacuarí.
1815 – Francis Ronalds describes the first battery-operated clock in the Philosophical Magazine.
1841 – The U.S. Supreme Court rules in the United States v. The Amistad case that captive Africans who had seized control of the ship carrying them had been taken into slavery illegally.
1842 – Giuseppe Verdi's third opera, Nabucco, receives its première performance in Milan; its success establishes Verdi as one of Italy's foremost opera composers.
1842 – The first documented discovery of gold in California occurs at Rancho San Francisco, six years before the California Gold Rush.
1847 – Mexican–American War: The first large-scale amphibious assault in U.S. history is launched in the Siege of Veracruz.
1862 – American Civil War: USS Monitor and CSS Virginia fight to a draw in the Battle of Hampton Roads, the first battle between two ironclad warships.
1901–present
1908 – Inter Milan is founded as Football Club Internazionale, following a schism from A.C. Milan.
1916 – Mexican Revolution: Pancho Villa leads nearly 500 Mexican raiders in an attack against the border town of Columbus, New Mexico.
1933 – Great Depression: President Franklin D. Roosevelt submits the Emergency Banking Act to Congress, the first of his New Deal policies.
1942 – World War II: The Dutch East Indies unconditionally surrender to Japanese forces in Kalijati, Subang, West Java, completing the Japanese Dutch East Indies campaign.
1944 – World War II: Soviet Army planes attack Tallinn, Estonia.
1945 – World War II: A coup d'état by Japanese forces in French Indochina removes the French from power.
1946 – Bolton Wanderers stadium disaster at Burnden Park, Bolton, England, kills 33 and injures hundreds more.
1954 – McCarthyism: CBS television broadcasts the See It Now episode, "A Report on Senator Joseph McCarthy", produced by Fred Friendly.
1956 – Soviet forces suppress mass demonstrations in the Georgian SSR, reacting to Nikita Khrushchev's de-Stalinization policy.
1957 – The magnitude 8.6 Andreanof Islands earthquake shakes the Aleutian Islands, causing over $5 million in damage from ground movement and a destructive tsunami.
1959 – The Barbie doll makes its debut at the American International Toy Fair in New York.
1960 – Dr. Belding Hibbard Scribner implants for the first time a shunt he invented into a patient, which allows the patient to receive hemodialysis on a regular basis.
1961 – Sputnik 9 successfully launches, carrying a dog and a human dummy, and demonstrating that the Soviet Union was ready to begin human spaceflight.
1967 – Trans World Airlines Flight 553 crashes in a field in Concord Township, Ohio following a mid-air collision with a Beechcraft Baron, killing 26 people.
1974 – The Mars 7 Flyby bus releases the descent module too early, missing Mars.
1976 – Forty-two people die in the Cavalese cable car disaster, the worst cable-car accident to date.
1977 – The Hanafi Siege: In a thirty-nine-hour standoff, armed Hanafi Muslims seize three Washington, D.C., buildings.
1978 – President Soeharto inaugurates the Jagorawi Toll Road, the first toll highway in Indonesia, connecting Jakarta, Bogor and Ciawi, West Java.
1987 – Chrysler announces its acquisition of American Motors Corporation.
1997 – Comet Hale–Bopp: Observers in China, Mongolia and eastern Siberia are treated to a rare double feature as an eclipse permits Hale-Bopp to be seen during the day.
1997 – The Notorious B.I.G. is murdered in Los Angeles after attending the Soul Train Music Awards. He is gunned down leaving an after party at the Petersen Automotive Museum. His murder remains unsolved.
2011 – Space Shuttle Discovery makes its final landing after 39 flights.
Births
Pre-1600
1451 – Amerigo Vespucci, Italian cartographer and explorer (d. 1512)
1564 – David Fabricius, German theologian, cartographer and astronomer (d. 1617)
1568 – Aloysius Gonzaga, Italian saint (d. 1591)
1601–1900
1662 – Franz Anton von Sporck, German noble (d. 1738)
1697 – Friederike Caroline Neuber, German actress (d. 1760)
1737 – Josef Mysliveček, Czech violinist and composer (d. 1781)
1749 – Honoré Gabriel Riqueti, comte de Mirabeau, French journalist and politician (d. 1791)
1753 – Jean-Baptiste Kléber, French general (d. 1800)
1758 – Franz Joseph Gall, German neuroanatomist and physiologist (d. 1828)
1763 – William Cobbett, English journalist and author (d. 1835)
1806 – Edwin Forrest, American actor and philanthropist (d. 1872)
1814 – Taras Shevchenko, Ukrainian poet and playwright (d. 1861)
1815 – David Davis, American jurist and politician (d. 1886)
1820 – Samuel Blatchford, American lawyer and jurist (d. 1893)
1824 – Amasa Leland Stanford, American businessman and politician, founded Stanford University (d. 1893)
1847 – Martin Pierre Marsick, Belgian violinist, composer, and educator (d. 1924)
1850 – Hamo Thornycroft, English sculptor and academic (d. 1925)
1856 – Eddie Foy, Sr., American actor and dancer (d. 1928)
1863 – Mary Harris Armor, American suffragist (d. 1950)
1887 – Fritz Lenz, German geneticist and physician (d. 1976)
1890 – Rupert Balfe, Australian footballer and lieutenant (d. 1915)
1890 – Vyacheslav Molotov, Russian politician and diplomat, Soviet Minister of Foreign Affairs (d. 1986)
1891 – José P. Laurel, Filipino lawyer, politician and President of the Philippines (d. 1959)
1892 – Mátyás Rákosi, Hungarian politician (d. 1971)
1892 – Vita Sackville-West, English author, poet, and gardener (d. 1962)
1901–present
1902 – Will Geer, American actor (d. 1978)
1904 – Paul Wilbur Klipsch, American soldier and engineer, founded Klipsch Audio Technologies (d. 2002)
1910 – Samuel Barber, American pianist and composer (d. 1981)
1911 – Clara Rockmore, American classical violin prodigy and theremin player (d. 1998)
1915 – Johnnie Johnson, English air marshal and pilot (d. 2001)
1918 – George Lincoln Rockwell, American sailor and politician, founded the American Nazi Party (d. 1967)
1918 – Mickey Spillane, American crime novelist (d. 2006)
1920 – Franjo Mihalić, Croatian-Serbian runner and coach (d. 2015)
1921 – Carl Betz, American actor (d. 1978)
1922 – Ian Turbott, New Zealand-Australian former diplomat and university administrator (d. 2016)
1923 – James L. Buckley, American lawyer, judge, and politician
1923 – André Courrèges, French fashion designer (d. 2016)
1923 – Walter Kohn, Austrian-American physicist and academic, Nobel Prize laureate (d. 2016)
1926 – Joe Franklin, American radio and television host (d. 2015)
1928 – Gerald Bull, Canadian-American engineer and academic (d. 1990)
1928 – Keely Smith, American singer and actress (d. 2017)
1929 – Desmond Hoyte, Guyanese lawyer, politician and President of Guyana (d. 2002)
1929 – Zillur Rahman, Bangladeshi politician, 19th President of Bangladesh (d. 2013)
1930 – Ornette Coleman, American saxophonist, violinist, trumpet player, and composer (d. 2015)
1931 – Jackie Healy-Rae, Irish politician (d. 2014)
1932 – Qayyum Chowdhury, Bangladeshi painter and academic (d. 2014)
1932 – Walter Mercado, Puerto Rican-American astrologer and actor (d. 2019)
1933 – Lloyd Price, American R&B singer-songwriter (d. 2021)
1933 – David Weatherall, English physician, geneticist, and academic (d. 2018)
1934 – Yuri Gagarin, Russian colonel, pilot, and astronaut (d. 1968)
1934 – Joyce Van Patten, American actress
1935 – Andrew Viterbi, American engineer and businessman, co-founded Qualcomm Inc.
1936 – Mickey Gilley, American singer-songwriter and pianist
1936 – Marty Ingels, American actor and comedian (d. 2015)
1937 – Bernard Landry, Canadian lawyer, politician and Premier of Quebec (d. 2018)
1937 – Harry Neale, Canadian ice hockey player, coach, and sportscaster
1937 – Brian Redman, English race car driver
1940 – Raul Julia, Puerto Rican-American actor (d. 1994)
1941 – Jim Colbert, American golfer
1941 – Ernesto Miranda, American criminal (d. 1976)
1942 – John Cale, Welsh musician, composer, singer, songwriter and record producer
1942 – Ion Caramitru, Romanian actor and artistic director (d. 2021)
1942 – Mark Lindsay, American singer-songwriter, saxophonist, and producer
1943 – Bobby Fischer, American chess player and author (d. 2008)
1944 – Lee Irvine, South African cricketer
1945 – Robert Calvert, English singer-songwriter and playwright (d. 1988)
1945 – Robin Trower, English rock guitarist and vocalist
1945 – Dennis Rader, American serial killer known as the BTK Strangler
1946 – Alexandra Bastedo, English actress (d. 2014)
1946 – Warren Skaaren, American screenwriter and producer (d. 1990)
1946 – Bernd Hölzenbein, German footballer and scout
1947 – Keri Hulme, New Zealand author and poet
1948 – Emma Bonino, Italian politician, Italian Minister of Foreign Affairs
1948 – Eric Fischl, American painter and sculptor
1948 – Jeffrey Osborne, American singer and drummer
1949 – Neil Hamilton, Welsh lawyer and politician
1950 – Doug Ault, American baseball player and manager (d. 2004)
1950 – Andy North, American golfer
1950 – Howard Shelley, English pianist and conductor
1951 – Helen Zille, South African journalist, politician and Premier of the Western Cape
1952 – Bill Beaumont, English rugby player and manager
1954 – Carlos Ghosn, Brazilian-Lebanese-French business executive
1954 – Bobby Sands, PIRA volunteer; Irish republican politician (d. 1981)
1954 – Jock Taylor, Scottish motorcycle racer (d. 1982)
1955 – Teo Fabi, Italian race car driver
1955 – Józef Pinior, Polish academic and politician
1956 – Mark Dantonio, American football player and coach
1956 – Shashi Tharoor, Indian politician, Indian Minister of External Affairs
1956 – David Willetts, English academic and politician
1958 – Paul MacLean, Canadian ice hockey player and coach
1959 – Takaaki Kajita, Japanese physicist and academic, Nobel Prize laureate
1959 – Lonny Price, American actor, director, and screenwriter
1960 – Linda Fiorentino, American actress
1961 – Rick Steiner, American wrestler
1961 – Darrell Walker, American basketball player and coach
1963 – Terry Mulholland, American baseball player
1963 – Jean-Marc Vallée, Canadian director and screenwriter
1964 – Juliette Binoche, French actress
1964 – Phil Housley, American ice hockey player and coach
1965 – Brian Bosworth, American football player and actor
1965 – Benito Santiago, Puerto Rican-American baseball player
1966 – Brendan Canty, American drummer and songwriter
1966 – Tony Lockett, Australian footballer
1968 – Youri Djorkaeff, French footballer
1969 – Kimberly Guilfoyle, American lawyer and journalist
1970 – Naveen Jindal, Indian businessman and politician
1970 – Martin Johnson, English rugby player and coach
1971 – Emmanuel Lewis, American actor
1972 – Jodey Arrington, United States politician
1973 – Liam Griffin, English race car driver
1975 – Juan Sebastián Verón, Argentinian footballer
1977 – Radek Dvořák, Czech ice hockey player
1979 – Oscar Isaac, Guatemalan-American actor
1980 – Matthew Gray Gubler, American actor
1981 – Antonio Bryant, American football player
1981 – Clay Rapada, American baseball player
1982 – Ryan Bayley, Australian cyclist
1982 – Matt Bowen, Australian rugby league player
1982 – Mirjana Lučić-Baroni, Croatian tennis player
1983 – Wayne Simien, American basketball player
1983 – Clint Dempsey, American international soccer player
1984 – Abdoulay Konko, French footballer
1984 – Julia Mancuso, American skier
1985 – Brent Burns, Canadian ice hockey player
1985 – Jesse Litsch, American baseball player
1985 – Pastor Maldonado, Venezuelan race car driver
1985 – Parthiv Patel, Indian cricketer
1986 – Colin Greening, Canadian ice hockey player
1986 – Brittany Snow, American actress and producer
1989 – Taeyeon, South Korean artist
1990 – Daley Blind, Dutch footballer
1990 – Matt Robinson, New Zealand rugby league player
1990 – YG, American rapper
1991 – Jooyoung, Korean singer-songwriter
1993 – Suga, South Korean rapper, songwriter, and record producer
1994 – Morgan Rielly, Canadian ice hockey player
1997 – Chika, American rapper
Deaths
Pre-1600
886 – Abu Ma'shar al-Balkhi, Muslim scholar and astrologer (b. 787)
1202 – Sverre, King of Norway (b. c. 1151)
1440 – Frances of Rome, Italian nun and saint (b. 1384)
1444 – Leonardo Bruni, Italian humanist (b. c.1370)
1463 – Catherine of Bologna, Italian nun and saint (b. 1413)
1566 – David Rizzio, Italian-Scottish courtier and politician (b. 1533)
1601–1900
1649 – James Hamilton, 1st Duke of Hamilton, Scottish soldier and politician (b. 1606)
1649 – Henry Rich, 1st Earl of Holland, English soldier and politician (b. 1590)
1661 – Cardinal Mazarin, Italian-French academic and politician, Prime Minister of France (b. 1602)
1709 – Ralph Montagu, 1st Duke of Montagu, English courtier and politician (b. 1638)
1808 – Joseph Bonomi the Elder, Italian architect (b. 1739)
1810 – Ozias Humphry, English painter and academic (b. 1742)
1825 – Anna Laetitia Barbauld, English poet, author, and critic (b. 1743)
1831 – Friedrich Maximilian von Klinger, German author and playwright (b. 1752)
1847 – Mary Anning, English paleontologist (b. 1799)
1851 – Hans Christian Ørsted, Danish physicist and chemist (b. 1777)
1888 – William I, German Emperor (b. 1797)
1895 – Leopold von Sacher-Masoch, Austrian journalist and author (b. 1836)
1897 – Sondre Norheim, Norwegian-American skier (b. 1825)
1901–present
1918 – Frank Wedekind, German author and playwright (b. 1864)
1925 – Willard Metcalf, American painter and academic (b. 1858)
1926 – Mikao Usui, Japanese spiritual leader, founded Reiki (b. 1865)
1937 – Paul Elmer More, American journalist and critic (b. 1864)
1943 – Otto Freundlich, German painter and sculptor (b. 1878)
1954 – Vagn Walfrid Ekman, Swedish oceanographer and academic (b. 1874)
1955 – Miroslava, Czech-Mexican actress (b. 1925)
1964 – Paul von Lettow-Vorbeck, German general (b. 1870)
1969 – Abdul Munim Riad, Egyptian general (b. 1919)
1971 – Pope Cyril VI of Alexandria (b. 1902)
1974 – Earl Wilbur Sutherland, Jr., American pharmacologist and biochemist, Nobel Prize laureate (b. 1915)
1974 – Harry Womack, American singer (b. 1945)
1983 – Faye Emerson, American actress (b. 1917)
1983 – Ulf von Euler, Swedish physiologist and pharmacologist, Nobel Prize laureate (b. 1905)
1988 – Kurt Georg Kiesinger, German lawyer, politician and Chancellor of Germany (b. 1904)
1989 – Robert Mapplethorpe, American photographer (b. 1946)
1991 – Jim Hardin, American baseball player (b. 1943)
1992 – Menachem Begin, Belarusian-Israeli soldier, politician and Prime Minister of Israel, Nobel Prize laureate (b. 1913)
1993 – C. Northcote Parkinson, English historian and author (b. 1909)
1994 – Charles Bukowski, American poet, novelist, and short story writer (b. 1920)
1994 – Eddie Creatchman, Canadian wrestler, referee, and manager (b. 1928)
1994 – Fernando Rey, Spanish actor (b. 1917)
1996 – George Burns, American comedian, actor, and writer (b. 1896)
1997 – Jean-Dominique Bauby, French journalist and author (b. 1952)
1997 – Terry Nation, Welsh author and screenwriter (b. 1930)
1997 – The Notorious B.I.G., American rapper, songwriter, and actor (b. 1972)
1999 – Harry Somers, Canadian pianist and composer (b. 1925)
1999 – George Singh, Belizean jurist and Chief Justice of Belize (b. 1937)
2000 – Jean Coulthard, Canadian composer and educator (b. 1908)
2003 – Stan Brakhage, American director and cinematographer (b. 1933)
2003 – Bernard Dowiyogo, Nauruan politician, President of Nauru (b. 1946)
2004 – John Mayer, Indian composer (b. 1930)
2006 – Tom Fox, American activist (b. 1951)
2006 – Anna Moffo, American soprano (b. 1932)
2006 – John Profumo, English soldier and politician, Secretary of State for War (b. 1915)
2007 – Brad Delp, American singer-songwriter and guitarist (b. 1951)
2007 – Glen Harmon, Canadian ice hockey player (b. 1921)
2010 – Willie Davis, American baseball player and manager (b. 1940)
2010 – Doris Haddock, American activist and politician (b. 1910)
2011 – David S. Broder, American journalist and academic (b. 1929)
2013 – Max Jakobson, Finnish journalist and diplomat (b. 1923)
2013 – Merton Simpson, American painter and art collector (b. 1928)
2015 – James Molyneaux, Baron Molyneaux of Killead, Northern Irish soldier and politician (b. 1920)
2016 – Robert Horton, American actor (b. 1924)
2016 – Clyde Lovellette, American basketball player and coach (b. 1929)
2017 – Howard Hodgkin, British painter (b. 1932)
2018 – Jo Min-ki, Korean actor (b. 1965)
2020 – John Bathersby, Australian Catholic bishop (b. 1936)
2021 – James Levine, American conductor and pianist (b. 1943)
2021 – Roger Mudd, American journalist (b. 1928)
Holidays and observances
Christian feast day:
Catherine of Bologna
Forty Martyrs of Sebaste
Frances of Rome
Pacian
Pope Cyril VI of Alexandria (Coptic Orthodox Church)
Gregory of Nyssa (Episcopal Church (United States))
March 9 (Eastern Orthodox liturgics)
Teachers' Day or Eid Al Moalim (Lebanon)
References
Sources
External links
BBC: On This Day
Historical Events on March 9
Today in Canadian History
Days of the year
March
Moving Picture Experts Group
https://en.wikipedia.org/wiki/Moving%20Picture%20Experts%20Group

The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics and genomic data, and transmission and file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29).
MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding and MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH).
History
MPEG was established in 1988 by the initiative of Dr. Hiroshi Yasuda (NTT) and Dr. Leonardo Chiariglione (CSELT). Chiariglione was the group's chair (called Convenor in ISO/IEC terminology) from its inception until June 6, 2020. The first MPEG meeting was in May 1988 in Ottawa, Canada.
Starting around the time of the MPEG-4 project in the late 1990s and continuing to the present, MPEG has grown to include approximately 300–500 members per meeting from various industries, universities, and research institutions.
On June 6, 2020, the MPEG section of Chiariglione's personal website was updated to inform readers that he had retired as Convenor, and he said that the MPEG group (then SC 29/WG 11) "was closed". Chiariglione described his reasons for stepping down in his personal blog. His decision followed a restructuring process within SC 29, in which "some of the subgroups of WG 11 (MPEG) [became] distinct MPEG working groups (WGs) and advisory groups (AGs)" in July 2020. Prof. Jörn Ostermann of University of Hannover was appointed as Acting Convenor of SC 29/WG 11 during the restructuring period and was then appointed Convenor of SC 29's Advisory Group 2, which coordinates MPEG overall technical activities.
The MPEG structure that replaced the former Working Group 11 includes three Advisory Groups (AGs) and seven Working Groups (WGs):
SC 29/AG 2: MPEG Technical Coordination (Convenor: Prof. Jörn Ostermann of University of Hannover, Germany)
SC 29/AG 3: MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim of Kyung Hee University, Korea)
SC 29/AG 5: MPEG Visual Quality Assessment (Convenor: Dr. Mathias Wien of RWTH Aachen University, Germany)
SC 29/WG 2: MPEG Technical Requirements (Convenor: Dr. Igor Curcio of Nokia, Finland)
SC 29/WG 3: MPEG Systems (Convenor: Dr. Youngkwon Lim of Samsung, Korea)
SC 29/WG 4: MPEG Video Coding (Convenor: Prof. Lu Yu of Zhejiang University, China)
SC 29/WG 5: MPEG Joint Video Coding Team with ITU-T SG16 (Convenor: Prof. Jens-Rainer Ohm of RWTH Aachen University, Germany; formerly co-chairing with Dr. Gary Sullivan of Microsoft, United States)
SC 29/WG 6: MPEG Audio coding (Convenor: Dr. Schuyler Quackenbush of Audio Research Labs, United States)
SC 29/WG 7: MPEG 3D Graphics coding (Convenor: Prof. Marius Preda of Institut Mines-Télécom SudParis)
SC 29/WG 8: MPEG Genomic coding (Convenor: Dr. Marco Mattavelli of EPFL, Switzerland)
The first meeting under the current structure was held in October 2020. It (and all other MPEG meetings starting in April 2020) was held virtually by teleconference due to the COVID-19 pandemic.
Cooperation with other groups
MPEG-2
MPEG-2 development included a joint project between MPEG and ITU-T Study Group 15 (which later became ITU-T SG16), resulting in publication of the MPEG-2 Systems standard (ISO/IEC 13818-1, including its transport streams and program streams) as ITU-T H.222.0 and the MPEG-2 Video standard (ISO/IEC 13818-2) as ITU-T H.262. Sakae Okubo (NTT) was the ITU-T coordinator and chaired the agreements on its requirements.
Joint Video Team
Joint Video Team (JVT) was a joint project between ITU-T SG16/Q.6 (Study Group 16 / Question 6) – VCEG (Video Coding Experts Group) and ISO/IEC JTC 1/SC 29/WG 11 – MPEG for the development of a video coding ITU-T Recommendation and ISO/IEC International Standard. It was formed in 2001 and its main result was H.264/MPEG-4 AVC (MPEG-4 Part 10), which reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.262 / MPEG-2 standard. The JVT was chaired by Dr. Gary Sullivan, with vice-chairs Dr. Thomas Wiegand of the Heinrich Hertz Institute in Germany and Dr. Ajay Luthra of Motorola in the United States.
Joint Collaborative Team on Video Coding
Joint Collaborative Team on Video Coding (JCT-VC) was a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG). It was created in 2010 to develop High Efficiency Video Coding (HEVC, MPEG-H Part 2, ITU-T H.265), a video coding standard that further reduces by about 50% the data rate required for video coding, as compared to the then-current ITU-T H.264 / ISO/IEC 14496-10 standard. JCT-VC was co-chaired by Prof. Jens-Rainer Ohm and Gary Sullivan.
Joint Video Experts Team
Joint Video Experts Team (JVET) is a joint group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG) created in 2017 after an exploration phase that began in 2015. JVET developed Versatile Video Coding (VVC, MPEG-I Part 3, ITU-T H.266), completed in July 2020, which further reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.265 / HEVC standard, and the JCT-VC was merged into JVET in July 2020. Like JCT-VC, JVET was co-chaired by Jens-Rainer Ohm and Gary Sullivan, until July 2021 when Ohm became the sole chair (after Sullivan became the chair of SC 29).
Standards
The MPEG standards consist of different Parts. Each Part covers a certain aspect of the whole specification. The standards also specify profiles and levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them. Some of the approved MPEG standards were revised by later amendments and/or new editions.
The primary early MPEG compression formats and related standards include:
MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial version is a lossy file format and was the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates. It was designed to allow moving pictures and sound to be encoded into the bitrate of a Compact Disc. It is used on Video CD and can be used for low-quality video on DVD Video. It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bit-rate requirement, MPEG-1 downsamples the images and uses picture rates of only 24–30 Hz, resulting in moderate quality. It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format.
MPEG-2 (1996): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). Transport, video and audio standards for broadcast-quality television. MPEG-2 standard was considerably broader in scope and of wider appeal – supporting interlacing and high definition. MPEG-2 is considered important because it was chosen as the compression scheme for over-the-air digital television ATSC, DVB and ISDB, digital satellite TV services like Dish Network, digital cable television signals, SVCD and DVD Video. It is also used on Blu-ray Discs, but these normally use MPEG-4 Part 10 or SMPTE VC-1 for high-definition content.
MPEG-4 (1998): Coding of audio-visual objects. (ISO/IEC 14496) MPEG-4 provides a framework for more advanced compression algorithms potentially resulting in higher compression ratios compared to MPEG-2 at the cost of higher computational requirements. MPEG-4 also supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content like digital rights management. It also supports MPEG-J, a fully programmatic solution for creation of custom interactive multimedia applications (Java application environment with a Java API) and many other features. Two new higher-efficiency video coding standards (newer than MPEG-2 Video) are included:
MPEG-4 Part 2 (including its Simple and Advanced Simple profiles) and
MPEG-4 AVC (MPEG-4 Part 10 or ITU-T H.264, 2003). MPEG-4 AVC may be used on HD DVD and Blu-ray Discs, along with VC-1 and MPEG-2.
MPEG-4 AVC was chosen as the video compression scheme for over-the-air television broadcasting in Brazil (ISDB-TB), based on the digital television system of Japan (ISDB-T).
An MPEG-3 project was cancelled. MPEG-3 was planned to deal with standardizing scalable and multi-resolution compression and was intended for HDTV compression, but was found to be unnecessary and was merged with MPEG-2; as a result there is no MPEG-3 standard. The cancelled MPEG-3 project is not to be confused with MP3, which is MPEG-1 or MPEG-2 Audio Layer III.
In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation:
MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938)
MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000) MPEG describes this standard as a multimedia framework; it provides for intellectual property management and protection.
Moreover, more recently than other standards above, MPEG has produced the following international standards; each of the standards holds multiple MPEG technologies for a variety of applications. (For example, MPEG-A includes a number of technologies on multimedia application format.)
MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC 23000) (e.g., an explanation of the purpose for multimedia application formats, MPEG music player application format, MPEG photo player application format and others)
MPEG-B (2006): MPEG systems technologies. (ISO/IEC 23001) (e.g., Binary MPEG format for XML, Fragment Request Units (FRUs), Bitstream Syntax Description Language (BSDL) and others)
MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g., accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform and others)
MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG Surround, SAOC-Spatial Audio Object Coding and USAC-Unified Speech and Audio Coding)
MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W) (e.g., architecture, multimedia application programming interface (API), component model and others)
MPEG-G (2019) Genomic Information Representation (ISO/IEC 23092), Parts 1–6 for transport and storage, coding, metadata and APIs, reference software, conformance, and annotations
Supplemental media technologies (2008, later replaced and withdrawn). (ISO/IEC 29116) It had one published part, media streaming application format protocols, which was later replaced and revised in MPEG-M Part 4's MPEG extensible middleware (MXM) protocols.
MPEG-V (2011): Media context and control. (ISO/IEC 23005) (a.k.a. Information exchange with Virtual Worlds) (e.g., Avatar characteristics, Sensor information, Architecture and others)
MPEG-M (2010): MPEG eXtensible Middleware (MXM). (ISO/IEC 23006) (e.g., MXM architecture and technologies, API, and MPEG extensible middleware (MXM) protocols)
MPEG-U (2010): Rich media user interfaces. (ISO/IEC 23007) (e.g., Widgets)
MPEG-H (2013): High Efficiency Coding and Media Delivery in Heterogeneous Environments. (ISO/IEC 23008) Part 1 – MPEG media transport; Part 2 – High Efficiency Video Coding (HEVC, ITU-T H.265); Part 3 – 3D Audio.
MPEG-DASH (2012): Information technology – Dynamic adaptive streaming over HTTP (DASH). (ISO/IEC 23009) Part 1 – Media presentation description and segment formats
MPEG-I (2020): Coded Representation of Immersive Media (ISO/IEC 23090), including Part 2 Omnidirectional Media Format (OMAF) and Part 3 – Versatile Video Coding (VVC, ITU-T H.266)
MPEG-CICP (ISO/IEC 23091) Coding-Independent Code Points (CICP), Parts 1–4 for systems, video, audio, and usage of video code points
Standardization process
A standard published by ISO/IEC is the last stage of an approval process that starts with the proposal of new work within a committee. Stages of the standard development process include:
NP or NWIP – New Project or New Work Item Proposal
AWI – Approved Work Item
WD – Working Draft
CD or CDAM – Committee Draft or Committee Draft Amendment
DIS or DAM – Draft International Standard or Draft Amendment
FDIS or FDAM – Final Draft International Standard or Final Draft Amendment
IS or AMD – International Standard or Amendment
Other abbreviations:
DTR – Draft Technical Report (for information)
TR – Technical Report
DCOR – Draft Technical Corrigendum (for corrections)
COR – Technical Corrigendum
A proposal of work (New Proposal) is approved at the Subcommittee level and then at the Technical Committee level (SC 29 and JTC 1, respectively, in the case of MPEG). When the scope of new work is sufficiently clarified, MPEG usually makes open "calls for proposals". The first document that is produced for audio and video coding standards is typically called a test model. When a sufficient confidence in the stability of the standard under development is reached, a Working Draft (WD) is produced. When a WD is sufficiently solid (typically after producing several numbered WDs), the next draft is issued as a Committee Draft (CD) (usually at the planned time) and is sent to National Bodies (NBs) for comment. When a consensus is reached to proceed to the next stage, the draft becomes a Draft International Standard (DIS) and is sent for another ballot. After a review and comments issued by NBs and a resolution of comments in the working group, a Final Draft International Standard (FDIS) is typically issued for a final approval ballot. The final approval ballot is voted on by National Bodies, with no technical changes allowed (a yes/no approval ballot). If approved, the document becomes an International Standard (IS). In cases where the text is considered sufficiently mature, the WD, CD, and/or FDIS stages can be skipped. The development of a standard is completed when the FDIS document has been issued, with the FDIS stage only being for final approval, and in practice, the FDIS stage for MPEG standards has always resulted in approval.
See also
Video Coding Experts Group (VCEG)
Joint Photographic Experts Group (JPEG)
Joint Bi-level Image Experts Group (JBIG)
Multimedia and Hypermedia information coding Expert Group (MHEG)
Audio codec
Audio coding format
Video codec
Video coding format
Video quality
Video compression
MP3
Notes
External links
MPEG.ORG
Papers and books on MPEG
Computer file formats
Film and video technology
MPEG
Organizations established in 1988
Working groups |
MPEG-1
https://en.wikipedia.org/wiki/MPEG-1

MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical.
Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. Perhaps the best-known part of the MPEG-1 standard is the first version of the MP3 audio format it introduced.
The MPEG-1 standard is published as ISO/IEC 11172 – Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s.
The standard consists of the following five Parts:
Systems (storage and synchronization of video, audio, and other data together)
Video (compressed video content)
Audio (compressed audio content)
Conformance testing (testing the correctness of implementations of the standard)
Reference software (example software showing how to encode and decode according to the standard)
History
The predecessor of MPEG-1 for video coding was the H.261 standard produced by the CCITT (now known as the ITU-T). The basic architecture established in H.261 was the motion-compensated DCT hybrid video coding structure. It uses macroblocks of size 16×16 with block-based motion estimation in the encoder and motion compensation using encoder-selected motion vectors in the decoder, with residual difference coding using a discrete cosine transform (DCT) of size 8×8, scalar quantization, and variable-length codes (like Huffman codes) for entropy coding. H.261 was the first practical video coding standard, and all of its described design elements were also used in MPEG-1.
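As an illustration of the transform stage described above (not part of the standard text), the following pure-Python sketch evaluates a direct-form 8×8 two-dimensional DCT-II. The function name is illustrative, and real codecs use fast factored transforms rather than this O(N⁴) direct evaluation:

```python
import math

def dct_2d_8x8(block):
    """Naive direct-form 8x8 2-D DCT-II over a list-of-lists block.

    Returns an 8x8 list of coefficients; out[0][0] is the DC term.
    """
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# A flat block concentrates all of its energy in the DC coefficient:
flat = [[100] * 8 for _ in range(8)]
coeffs = dct_2d_8x8(flat)  # coeffs[0][0] ~ 800, all other coefficients ~ 0
```

This energy compaction is what makes the subsequent scalar quantization and entropy coding effective: most AC coefficients of typical residual blocks are near zero and code very cheaply.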
Modeled on the successful collaborative approach and the compression technologies developed by the Joint Photographic Experts Group and CCITT's Experts Group on Telephony (creators of the JPEG image compression standard and the H.261 standard for video conferencing respectively), the Moving Picture Experts Group (MPEG) working group was established in January 1988, by the initiative of Hiroshi Yasuda (Nippon Telegraph and Telephone) and Leonardo Chiariglione (CSELT). MPEG was formed to address the need for standard video and audio formats, and to build on H.261 to get better quality through the use of somewhat more complex encoding methods (e.g., supporting higher precision for motion vectors).
Development of the MPEG-1 standard began in May 1988. Fourteen video and fourteen audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective (human perceived) quality, at data rates of 1.5 Mbit/s. This specific bitrate was chosen for transmission over T-1/E-1 lines and as the approximate data rate of audio CDs. The codecs that excelled in this testing were utilized as the basis for the standard and refined further, with additional features and other improvements being incorporated in the process.
After 20 meetings of the full group in various cities around the world, and 4½ years of development and testing, the final standard (for parts 1–3) was approved in early November 1992 and published a few months later. The reported completion date of the MPEG-1 standard varies greatly: a largely complete draft standard was produced in September 1990, and from that point on, only minor changes were introduced. The draft standard was publicly available for purchase. The standard was finished with the 6 November 1992 meeting. The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992.

In July 1990, before the first draft of the MPEG-1 standard had even been written, work began on a second standard, MPEG-2, intended to extend MPEG-1 technology to provide full broadcast-quality video (as per CCIR 601) at high bitrates (3–15 Mbit/s) and support for interlaced video. Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos.
Notably, the MPEG-1 standard very strictly defines the bitstream, and decoder function, but does not define how MPEG-1 encoding is to be performed, although a reference implementation is provided in ISO/IEC-11172-5. This means that MPEG-1 coding efficiency can drastically vary depending on the encoder used, and generally means that newer encoders perform significantly better than their predecessors. The first three parts (Systems, Video and Audio) of ISO/IEC 11172 were published in August 1993.
Patents
Due to its age, MPEG-1 is no longer covered by any essential patents and can thus be used without obtaining a licence or paying any fees. The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003. The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172 by December 6, 1991. Neither the July 2008 Kuro5hin article "Patent Status of MPEG-1, H.261 and MPEG-2", nor an August 2008 thread on the gstreamer-devel mailing list were able to list a single unexpired MPEG-1 Video and MPEG-1 Audio Layer I/II patent. A May 2009 discussion on the whatwg mailing list mentioned US 5,214,678 patent as possibly covering MPEG-1 Audio Layer II. Filed in 1990 and published in 1993, this patent is now expired.
A full MPEG-1 decoder and encoder, with "Layer III audio", could not be implemented royalty free since there were companies that required patent fees for implementations of MPEG-1 Audio Layer III, as discussed in the MP3 article. All patents in the world connected to MP3 expired 30 December 2017, which makes this format free to use. On 23 April 2017, Fraunhofer IIS stopped charging for Technicolor's MP3 licensing program for certain MP3 related patents and software.
Former patent holders
The following corporations filed declarations with ISO saying they held patents for the MPEG-1 Video (ISO/IEC-11172-2) format, although all such patents have since expired.
BBC
Daimler Benz AG
Fujitsu
IBM
Matsushita Electric Industrial Co., Ltd.
Mitsubishi Electric
NEC
NHK
Philips
Pioneer Corporation
Qualcomm
Ricoh
Sony
Texas Instruments
Thomson Multimedia
Toppan Printing
Toshiba
Victor Company of Japan
Applications
Most popular software for video playback includes MPEG-1 decoding, in addition to any other supported formats.
The popularity of MP3 audio has established a massive installed base of hardware that can play back MPEG-1 Audio (all three layers).
"Virtually all digital audio devices" can play back MPEG-1 Audio. Many millions have been sold to date.
Before MPEG-2 became widespread, many digital satellite/cable TV services used MPEG-1 exclusively.
The widespread popularity of MPEG-2 with broadcasters means MPEG-1 is playable by most digital cable and satellite set-top boxes, and digital disc and tape players, due to backwards compatibility.
MPEG-1 was used for full-screen video on Green Book CD-i, and on Video CD (VCD).
The Super Video CD standard, based on VCD, uses MPEG-1 audio exclusively, as well as MPEG-2 video.
The DVD-Video format uses MPEG-2 video primarily, but MPEG-1 support is explicitly defined in the standard.
The DVD-Video standard originally required MPEG-1 Audio Layer II for PAL countries, but was changed to allow AC-3/Dolby Digital-only discs. MPEG-1 Audio Layer II is still allowed on DVDs, although newer extensions to the format, like MPEG Multichannel, are rarely supported.
Most DVD players also support Video CD and MP3 CD playback, which use MPEG-1.
The international Digital Video Broadcasting (DVB) standard primarily uses MPEG-1 Audio Layer II, and MPEG-2 video.
The international Digital Audio Broadcasting (DAB) standard uses MPEG-1 Audio Layer II exclusively, due to its especially high quality, modest decoder performance requirements, and tolerance of errors.
The Digital Compact Cassette uses PASC (Precision Adaptive Sub-band Coding) to encode its audio. PASC is an early version of MPEG-1 Audio Layer I with a fixed bit rate of 384 kilobits per second.
Part 1: Systems
Part 1 of the MPEG-1 standard covers systems, and is defined in ISO/IEC-11172-1.
MPEG-1 Systems specifies the logical layout and methods used to store the encoded audio, video, and other data into a standard bitstream, and to maintain synchronization between the different contents. This file format is specifically designed for storage on media, and transmission over communication channels, that are considered relatively reliable. Only limited error protection is defined by the standard, and small errors in the bitstream may cause noticeable defects.
This structure was later named an MPEG program stream: "The MPEG-1 Systems design is essentially identical to the MPEG-2 Program Stream structure." This terminology is more widely used and more precise (it differentiates the format from an MPEG transport stream), and will be used here.
Elementary streams, packets, and clock references
Elementary Streams (ES) are the raw bitstreams of MPEG-1 audio and video encoded data (output from an encoder). These files can be distributed on their own, such as is the case with MP3 files.
Packetized Elementary Streams (PES) are elementary streams packetized into packets of variable lengths, i.e., an ES divided into independent chunks, with a cyclic redundancy check (CRC) checksum added to each packet for error detection.
System Clock Reference (SCR) is a timing value stored in a 33-bit field in the header of each PES, at a frequency/precision of 90 kHz, with an extra 9-bit extension that stores additional timing data with a precision of 27 MHz. These values are inserted by the encoder, derived from the system time clock (STC). Simultaneously encoded audio and video streams will not have identical SCR values, however, due to buffering, encoding, jitter, and other delays.
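The relationship between the 90 kHz base clock and the 27 MHz extension can be illustrated with a small conversion helper (a sketch; the function name is hypothetical, not taken from the standard):

```python
BASE_HZ = 90_000        # frequency of the 33-bit SCR/PTS base clock
EXT_PER_TICK = 300      # 27 MHz extension clock / 90 kHz base clock

def scr_to_seconds(base_ticks, extension=0):
    """Convert an SCR (33-bit base value plus 9-bit extension) to seconds."""
    return (base_ticks * EXT_PER_TICK + extension) / (BASE_HZ * EXT_PER_TICK)

# One second of media time is 90 000 base ticks:
one_second = scr_to_seconds(90_000)   # 1.0
half_second = scr_to_seconds(45_000)  # 0.5
```

Because 27 MHz is exactly 300 times 90 kHz, the extension simply subdivides each base tick into 300 finer steps.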
Program streams
Program Streams (PS) are concerned with combining multiple packetized elementary streams (usually just one audio and video PES) into a single stream, ensuring simultaneous delivery, and maintaining synchronization. The PS structure is known as a multiplex, or a container format.
Presentation time stamps (PTS) exist in PS to correct the inevitable disparity between audio and video SCR values (time-base correction). 90 kHz PTS values in the PS header tell the decoder which video SCR values match which audio SCR values. PTS determines when to display a portion of an MPEG program, and is also used by the decoder to determine when data can be discarded from the buffer. Either video or audio will be delayed by the decoder until the corresponding segment of the other arrives and can be decoded.
PTS handling can be problematic. Decoders must accept multiple program streams that have been concatenated (joined sequentially). This causes PTS values in the middle of the video to reset to zero and then begin incrementing again. Such PTS wraparound disparities can cause timing issues that must be specially handled by the decoder.
Decoding Time Stamps (DTS), additionally, are required because of B-frames. With B-frames in the video stream, adjacent frames have to be encoded and decoded out-of-order (re-ordered frames). DTS is quite similar to PTS, but instead of just handling sequential frames, it contains the proper time-stamps to tell the decoder when to decode and display the next B-frame (types of frames explained below), ahead of its anchor (P- or I-) frame. Without B-frames in the video, PTS and DTS values are identical.
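The decode-order/display-order distinction can be sketched with a toy reordering function that emits each anchor (I- or P-) frame before the B-frames that reference it; the function and frame labels are illustrative only, not taken from the standard:

```python
def decode_order(display_frames):
    """Reorder frame labels from display order to a plausible decode order.

    Each B-frame needs both of its anchor frames before it can be
    decoded, so the bitstream carries the next anchor ahead of the
    B-frames that reference it. This is a toy model of that rule.
    """
    out = []
    pending_b = []
    for f in display_frames:
        if f.startswith("B"):
            pending_b.append(f)    # hold until the next anchor is emitted
        else:
            out.append(f)          # emit the anchor first...
            out.extend(pending_b)  # ...then the B-frames it anchors
            pending_b.clear()
    out.extend(pending_b)
    return out

# Display order I0 B1 B2 P3 becomes decode order I0 P3 B1 B2:
order = decode_order(["I0", "B1", "B2", "P3"])
```

Without B-frames the function leaves the order unchanged, matching the statement that PTS and DTS are then identical.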
Multiplexing
To generate the PS, the multiplexer will interleave the (two or more) packetized elementary streams. This is done so the packets of the simultaneous streams can be transferred over the same channel and are guaranteed to both arrive at the decoder at precisely the same time. This is a case of time-division multiplexing.
Determining how much data from each stream should be in each interleaved segment (the size of the interleave) is complicated, yet an important requirement. Improper interleaving will result in buffer underflows or overflows, as the receiver gets more of one stream than it can store (e.g. audio), before it gets enough data to decode the other simultaneous stream (e.g. video). The MPEG Video Buffering Verifier (VBV) assists in determining if a multiplexed PS can be decoded by a device with a specified data throughput rate and buffer size. This offers feedback to the multiplexer and the encoder, so that they can change the multiplex size or adjust bitrates as needed for compliance.
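The round-robin idea behind time-division multiplexing can be sketched as follows (a deliberately simplified illustration with invented names; real multiplexers weight the interleave by each stream's bitrate, guided by VBV feedback as described above):

```python
def interleave(streams):
    """Round-robin time-division multiplex: take one packet from each
    stream in turn, so all streams reach the decoder nearly simultaneously.
    (Real multiplexers weight the interleave by each stream's bitrate.)"""
    out = []
    queues = [list(s) for s in streams]
    while any(queues):
        for q in queues:
            if q:
                out.append(q.pop(0))
    return out

video = ["V0", "V1", "V2", "V3"]
audio = ["A0", "A1"]
assert interleave([video, audio]) == ["V0", "A0", "V1", "A1", "V2", "V3"]
```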
Part 2: Video
Part 2 of the MPEG-1 standard covers video and is defined in ISO/IEC 11172-2. The design was heavily influenced by H.261.
MPEG-1 Video exploits perceptual compression methods to significantly reduce the data rate required by a video stream. It reduces or completely discards information in certain frequencies and areas of the picture that the human eye has limited ability to fully perceive. It also exploits temporal (over time) and spatial (across a picture) redundancy common in video to achieve better data compression than would be possible otherwise. (See: Video compression)
Color space
Before encoding video to MPEG-1, the color-space is transformed to Y′CbCr (Y′=Luma, Cb=Chroma Blue, Cr=Chroma Red). Luma (brightness, resolution) is stored separately from chroma (color, hue, phase) and even further separated into red and blue components.
The chroma is also subsampled to 4:2:0, meaning it is reduced to half resolution vertically and half resolution horizontally, i.e., to just one quarter the number of samples used for the luma component of the video. This use of higher resolution for some color components is similar in concept to the Bayer pattern filter that is commonly used for the image capturing sensor in digital color cameras. Because the human eye is much more sensitive to small changes in brightness (the Y component) than in color (the Cr and Cb components), chroma subsampling is a very effective way to reduce the amount of video data that needs to be compressed. However, on videos with fine detail (high spatial complexity) this can manifest as chroma aliasing artifacts. Compared to other digital compression artifacts, this issue seems to very rarely be a source of annoyance. Because of the subsampling, Y′CbCr 4:2:0 video is ordinarily stored using even dimensions (divisible by 2 horizontally and vertically).
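The savings from 4:2:0 subsampling can be counted directly (a small sketch; the function name is ours):

```python
def yuv420_sample_count(width: int, height: int) -> tuple[int, int]:
    """Return (luma_samples, chroma_samples) for one 4:2:0 frame.

    Width and height are assumed even, as 4:2:0 storage ordinarily requires.
    """
    luma = width * height
    # Cb and Cr are each subsampled 2x horizontally and 2x vertically,
    # so each carries one quarter as many samples as the luma plane.
    chroma = 2 * (width // 2) * (height // 2)
    return luma, chroma

luma, chroma = yuv420_sample_count(352, 288)
assert luma == 101376
assert chroma == luma // 2   # chroma adds only half again as many samples
```

In total, a 4:2:0 frame stores 1.5 samples per pixel instead of the 3 that full-resolution color would require.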
Y′CbCr color is often informally called YUV to simplify the notation, although that term more properly applies to a somewhat different color format. Similarly, the terms luminance and chrominance are often used instead of the (more accurate) terms luma and chroma.
Resolution/bitrate
MPEG-1 supports resolutions up to 4095×4095 (12 bits), and bit rates up to 100 Mbit/s.
MPEG-1 videos are most commonly seen using Source Input Format (SIF) resolution: 352×240, 352×288, or 320×240. These relatively low resolutions, combined with a bitrate less than 1.5 Mbit/s, make up what is known as a constrained parameters bitstream (CPB), later renamed the "Low Level" (LL) profile in MPEG-2. This is the minimum video specification any decoder should be able to handle to be considered MPEG-1 compliant. It was selected to provide a good balance between quality and performance, allowing the use of reasonably inexpensive hardware of the time.
Frame/picture/block types
MPEG-1 has several frame/picture types that serve different purposes. The most important, yet simplest, is the I-frame.
I-frames
"I-frame" is an abbreviation for "Intra-frame", so-called because they can be decoded independently of any other frames. They may also be known as I-pictures, or keyframes due to their somewhat similar function to the key frames used in animation. I-frames can be considered effectively identical to baseline JPEG images.
High-speed seeking through an MPEG-1 video is only possible to the nearest I-frame. When cutting a video it is not possible to start playback of a segment of video before the first I-frame in the segment (at least not without computationally intensive re-encoding). For this reason, I-frame-only MPEG videos are used in editing applications.
I-frame only compression is very fast, but produces very large file sizes: a factor of 3× (or more) larger than normally encoded MPEG-1 video, depending on how temporally complex a specific video is. I-frame only MPEG-1 video is very similar to MJPEG video. So much so that very high-speed and theoretically lossless (in reality, there are rounding errors) conversion can be made from one format to the other, provided a couple of restrictions (color space and quantization matrix) are followed in the creation of the bitstream.
The length between I-frames is known as the group of pictures (GOP) size. MPEG-1 most commonly uses a GOP size of 15–18, i.e. one I-frame for every 14–17 non-I-frames (some combination of P- and B- frames). With more intelligent encoders, the GOP size is chosen dynamically, up to some pre-selected maximum limit.
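A fixed GOP layout can be sketched as follows (a simplified illustration with invented names; as noted above, real encoders often choose GOP boundaries dynamically):

```python
def gop_pattern(n: int, m: int) -> str:
    """Display-order frame types for one GOP.

    n: GOP size (distance between I-frames), e.g. 15.
    m: anchor spacing (distance between I/P anchors); m-1 B-frames sit
       between consecutive anchors. Trailing B-frames reference the next
       GOP's I-frame (an "open" GOP).
    """
    return "".join("I" if i == 0 else ("P" if i % m == 0 else "B")
                   for i in range(n))

# A common 15-frame GOP with two B-frames between anchors:
assert gop_pattern(15, 3) == "IBBPBBPBBPBBPBB"
assert gop_pattern(12, 1) == "I" + "P" * 11   # no B-frames at all
```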
Limits are placed on the maximum number of frames between I-frames due to decoding complexity, decoder buffer size, recovery time after data errors, seeking ability, and the accumulation of IDCT errors in the low-precision implementations most common in hardware decoders (See: IEEE-1180).
P-frames
"P-frame" is an abbreviation for "Predicted-frame". They may also be called forward-predicted frames or inter-frames (B-frames are also inter-frames).
P-frames exist to improve compression by exploiting the temporal (over time) redundancy in a video. A P-frame stores only the difference in image from the frame (either an I-frame or P-frame) immediately preceding it (this reference frame is also called the anchor frame).
The difference between a P-frame and its anchor frame is calculated using motion vectors on each macroblock of the frame (see below). Such motion vector data will be embedded in the P-frame for use by the decoder.
A P-frame can contain any number of intra-coded blocks, in addition to any forward-predicted blocks.
If a video drastically changes from one frame to the next (such as a cut), it is more efficient to encode it as an I-frame.
B-frames
"B-frame" stands for "bidirectional-frame" or "bipredictive frame". They may also be known as backwards-predicted frames or B-pictures. B-frames are quite similar to P-frames, except they can make predictions using both the previous and future frames (i.e. two anchor frames).
It is therefore necessary for the player to first decode the next I- or P- anchor frame sequentially after the B-frame, before the B-frame can be decoded and displayed. This means decoding B-frames requires larger data buffers and causes increased delay during both decoding and encoding. This also necessitates the decoding time stamps (DTS) feature in the container/system stream (see above). As such, B-frames have long been the subject of much controversy; they are often avoided in videos, and are sometimes not fully supported by hardware decoders.
No other frames are predicted from a B-frame. Because of this, a very low bitrate B-frame can be inserted, where needed, to help control the bitrate. If this were done with a P-frame, future P-frames would be predicted from it and would lower the quality of the entire sequence. However, the following P-frame must still encode all the changes between it and the previous I- or P- anchor frame. B-frames can also be beneficial in videos where the background behind an object is being revealed over several frames, or in fading transitions, such as scene changes.
A B-frame can contain any number of intra-coded blocks and forward-predicted blocks, in addition to backwards-predicted, or bidirectionally predicted blocks.
D-frames
MPEG-1 has a unique frame type not found in later video standards. "D-frames" or DC-pictures are independently coded images (intra-frames) that have been encoded using DC transform coefficients only (AC coefficients are removed when encoding D-frames—see DCT below) and hence are very low quality. D-frames are never referenced by I-, P- or B- frames. D-frames are only used for fast previews of video, for instance when seeking through a video at high speed.
Given moderately higher-performance decoding equipment, fast preview can be accomplished by decoding I-frames instead of D-frames. This provides higher quality previews, since I-frames contain AC coefficients as well as DC coefficients. If the encoder can assume that rapid I-frame decoding capability is available in decoders, it can save bits by not sending D-frames (thus improving compression of the video content). For this reason, D-frames are seldom actually used in MPEG-1 video encoding, and the D-frame feature has not been included in any later video coding standards.
Macroblocks
MPEG-1 operates on video in a series of 8×8 blocks for quantization. However, to reduce the bit rate needed for motion vectors and because chroma (color) is subsampled by a factor of 4, each pair of (red and blue) chroma blocks corresponds to 4 different luma blocks. This set of 6 blocks, with a resolution of 16×16, is processed together and called a macroblock.
A macroblock is the smallest independent unit of (color) video. Motion vectors (see below) operate solely at the macroblock level.
If the height or width of the video are not exact multiples of 16, full rows and full columns of macroblocks must still be encoded and decoded to fill out the picture (though the extra decoded pixels are not displayed).
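The macroblock grid and the rounding-up described above amount to simple integer arithmetic (a sketch; the names are illustrative):

```python
def macroblock_grid(width: int, height: int) -> tuple[int, int]:
    """Number of 16x16 macroblocks per row and per column, rounding up
    so partial edges still get full macroblocks."""
    return ((width + 15) // 16, (height + 15) // 16)

# 352x240 SIF divides evenly; 350x238 would still need the same grid:
assert macroblock_grid(352, 240) == (22, 15)
assert macroblock_grid(350, 238) == (22, 15)

# Each macroblock holds 4 luma + 2 chroma (Cb, Cr) 8x8 blocks:
blocks_per_mb = 6
mbs_w, mbs_h = macroblock_grid(352, 240)
assert mbs_w * mbs_h * blocks_per_mb == 1980
```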
Motion vectors
To decrease the amount of temporal redundancy in a video, only blocks that change are updated (up to the maximum GOP size). This is known as conditional replenishment. However, this is not very effective by itself. Movement of the objects, and/or the camera, may result in large portions of the frame needing to be updated, even though only the position of the previously encoded objects has changed. Through motion estimation, the encoder can compensate for this movement and remove a large amount of redundant information.
The encoder compares the current frame with adjacent parts of the video from the anchor frame (previous I- or P- frame) in a diamond pattern, up to an (encoder-specific) predefined radius limit from the area of the current macroblock. If a match is found, only the direction and distance (i.e. the vector of the motion) from the previous video area to the current macroblock need to be encoded into the inter-frame (P- or B- frame). The reverse of this process, performed by the decoder to reconstruct the picture, is called motion compensation.
A predicted macroblock rarely matches the current picture perfectly, however. The difference between the estimated matching area and the real frame/macroblock is called the prediction error. The larger the prediction error, the more data must be additionally encoded in the frame. For efficient video compression, it is very important that the encoder is capable of effectively and precisely performing motion estimation.
Motion vectors record the distance between two areas on screen based on the number of pixels (also called pels). MPEG-1 video uses a motion vector (MV) precision of one half of one pixel, or half-pel. The finer the precision of the MVs, the more accurate the match is likely to be, and the more efficient the compression. There are trade-offs to higher precision, however. Finer MV precision results in using a larger amount of data to represent the MV, as larger numbers must be stored in the frame for every single MV, increased coding complexity as increasing levels of interpolation on the macroblock are required for both the encoder and decoder, and diminishing returns (minimal gains) with higher precision MVs. Half-pel precision was chosen as the ideal trade-off for that point in time. (See: qpel)
Because neighboring macroblocks are likely to have very similar motion vectors, this redundant information can be compressed quite effectively by being stored DPCM-encoded. Only the (smaller) amount of difference between the MVs for each macroblock needs to be stored in the final bitstream.
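DPCM itself is a one-line idea: store each value as its difference from the previous one. A minimal sketch (illustrative names; the real bitstream entropy-codes these differences afterwards):

```python
def dpcm_encode(values):
    """Store each value as the difference from its predecessor."""
    prev = 0
    out = []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def dpcm_decode(diffs):
    """Reverse the encoding by accumulating the differences."""
    prev = 0
    out = []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

# Neighboring macroblocks tend to move together, so differences stay small:
mvs = [10, 11, 11, 12, 12, 12, 3]
assert dpcm_encode(mvs) == [10, 1, 0, 1, 0, 0, -9]
assert dpcm_decode(dpcm_encode(mvs)) == mvs
```

The same scheme is reused for DC coefficients in the DCT section below: small differences take fewer bits to entropy-code than the raw values would.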
P-frames have one motion vector per macroblock, relative to the previous anchor frame. B-frames, however, can use two motion vectors; one from the previous anchor frame, and one from the future anchor frame.
Partial macroblocks, and black borders/bars encoded into the video that do not fall exactly on a macroblock boundary, cause havoc with motion prediction. The block padding/border information prevents the macroblock from closely matching with any other area of the video, and so, significantly larger prediction error information must be encoded for every one of the several dozen partial macroblocks along the screen border. DCT encoding and quantization (see below) also isn't nearly as effective when there is large/sharp picture contrast in a block.
An even more serious problem exists with macroblocks that contain significant, random, edge noise, where the picture transitions to (typically) black. All the above problems also apply to edge noise. In addition, the added randomness is simply impossible to compress significantly. All of these effects will lower the quality (or increase the bitrate) of the video substantially.
DCT
Each 8×8 block is encoded by first applying a forward discrete cosine transform (FDCT) and then a quantization process. The FDCT process (by itself) is theoretically lossless, and can be reversed by applying an Inverse DCT (IDCT) to reproduce the original values (in the absence of any quantization and rounding errors). In reality, there are some (sometimes large) rounding errors introduced both by quantization in the encoder (as described in the next section) and by IDCT approximation error in the decoder. The minimum allowed accuracy of a decoder IDCT approximation is defined by ISO/IEC 23002-1. (Prior to 2006, it was specified by IEEE 1180-1990.)
The FDCT process converts the 8×8 block of uncompressed pixel values (brightness or color difference values) into an 8×8 indexed array of frequency coefficient values. One of these is the (statistically high in variance) "DC coefficient", which represents the average value of the entire 8×8 block. The other 63 coefficients are the statistically smaller "AC coefficients", which have positive or negative values each representing sinusoidal deviations from the flat block value represented by the DC coefficient.
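The row-column structure of the 2-D FDCT can be sketched in a few lines (an orthonormal DCT-II for illustration; production decoders use fast integer approximations subject to the accuracy limits cited above):

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a short sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def fdct_8x8(block):
    """2-D FDCT: apply the 1-D transform to every row, then every column."""
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d([rows[r][c] for r in range(8)]) for c in range(8)]
    return [[cols[c][r] for c in range(8)] for r in range(8)]

# A flat block concentrates all of its energy in the DC coefficient:
flat = [[100] * 8 for _ in range(8)]
coeffs = fdct_8x8(flat)
assert round(coeffs[0][0]) == 800          # DC = 8 x the average value
assert all(abs(coeffs[r][c]) < 1e-9
           for r in range(8) for c in range(8) if (r, c) != (0, 0))
```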
An example of an encoded 8×8 FDCT block:
Since the DC coefficient value is statistically correlated from one block to the next, it is compressed using DPCM encoding. Only the (smaller) amount of difference between each DC value and the value of the DC coefficient in the block to its left needs to be represented in the final bitstream.
Additionally, the frequency conversion performed by applying the DCT provides a statistical decorrelation function to efficiently concentrate the signal into fewer high-amplitude values prior to applying quantization (see below).
Quantization
Quantization is, essentially, the process of reducing the accuracy of a signal, by dividing it by some larger step size and rounding to an integer value (i.e. finding the nearest multiple, and discarding the remainder).
The frame-level quantizer is a number from 1 to 31 (although encoders will usually omit/disable some of the extreme values) which determines how much information will be removed from a given frame. The frame-level quantizer is typically either dynamically selected by the encoder to maintain a certain user-specified bitrate, or (much less commonly) directly specified by the user.
A "quantization matrix" is a string of 64 numbers (ranging from 0 to 255) which tells the encoder how relatively important or unimportant each piece of visual information is. Each number in the matrix corresponds to a certain frequency component of the video image.
An example quantization matrix:
Quantization is performed by taking each of the 64 frequency values of the DCT block, dividing them by the frame-level quantizer, then dividing them by their corresponding values in the quantization matrix. Finally, the result is rounded down. This significantly reduces, or completely eliminates, the information in some frequency components of the picture. Typically, high frequency information is less visually important, and so high frequencies are much more strongly quantized (drastically reduced). MPEG-1 actually uses two separate quantization matrices, one for intra-blocks (I-blocks) and one for inter-block (P- and B- blocks) so quantization of different block types can be done independently, and so, more effectively.
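The division-and-round-down step described above can be modeled directly (a simplified sketch of the process as this section describes it; the bit-exact MPEG-1 formula differs in detail):

```python
import math

def quantize(coeff: float, quantizer: int, weight: int) -> int:
    """Divide by the quantizer scale, then by the matrix weight,
    then round down (simplified model, not the bit-exact formula)."""
    return math.floor(coeff / quantizer / weight)

def dequantize(level: int, quantizer: int, weight: int) -> int:
    """Approximate reconstruction; the discarded remainder is lost."""
    return level * quantizer * weight

# A strong quantizer wipes out a small high-frequency coefficient entirely:
assert quantize(13, 8, 2) == 0        # floor(13 / 16)
assert quantize(413, 8, 2) == 25      # floor(413 / 16), remainder discarded
assert dequantize(25, 8, 2) == 400    # the 13 leftover units are gone
```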
This quantization process usually reduces a significant number of the AC coefficients to zero, (known as sparse data) which can then be more efficiently compressed by entropy coding (lossless compression) in the next step.
An example quantized DCT block:
Quantization eliminates a large amount of data, and is the main lossy processing step in MPEG-1 video encoding. This is also the primary source of most MPEG-1 video compression artifacts, like blockiness, color banding, noise, ringing, discoloration, et al. This happens when video is encoded with an insufficient bitrate, and the encoder is therefore forced to use high frame-level quantizers (strong quantization) through much of the video.
Entropy coding
Several steps in the encoding of MPEG-1 video are lossless, meaning they will be reversed upon decoding to produce exactly the same (original) values. Since these lossless data compression steps don't add noise into, or otherwise change, the contents (unlike quantization), this is sometimes referred to as noiseless coding. Since lossless compression aims to remove as much redundancy as possible, it is known as entropy coding in the field of information theory.
The coefficients of quantized DCT blocks tend to zero towards the bottom-right. Maximum compression can be achieved by a zig-zag scanning of the DCT block starting from the top left and using Run-length encoding techniques.
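The zig-zag traversal order can be generated by walking the anti-diagonals of the block (a sketch with illustrative names):

```python
def zigzag_order(n: int = 8):
    """(row, col) index pairs of an n x n block in zig-zag scan order."""
    order = []
    for d in range(2 * n - 1):           # walk the anti-diagonals
        cells = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        # Odd diagonals run top-right to bottom-left, even ones the reverse.
        order.extend(cells if d % 2 else list(reversed(cells)))
    return order

scan = zigzag_order()
assert scan[:6] == [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
assert scan[-1] == (7, 7)
assert len(set(scan)) == 64              # every coefficient visited once
```

Reading coefficients in this order front-loads the large low-frequency values and leaves the near-zero high-frequency values bunched at the end, where run-length encoding compresses them well.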
The DC coefficients and motion vectors are DPCM-encoded.
Run-length encoding (RLE) is a simple method of compressing repetition. A sequential run of identical values, no matter how long, can be replaced with a few bytes noting the value that repeats and how many times. For example, if someone were to say "five nines", you would know they mean the number: 99999.
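The "five nines" idea in miniature (an illustrative sketch; MPEG-1's actual tables jointly code run lengths with the values that end them, as described below):

```python
def rle_encode(values):
    """Collapse runs of identical values into (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# "Five nines" really is shorter than writing 99999 out:
assert rle_encode([9, 9, 9, 9, 9]) == [(9, 5)]
assert rle_encode([5, 0, 0, 0, 0, 0, 0, -2, 0]) == [(5, 1), (0, 6), (-2, 1), (0, 1)]
```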
RLE is particularly effective after quantization, as a significant number of the AC coefficients are now zero (called sparse data), and can be represented with just a couple of bytes. This is stored in a special 2-dimensional Huffman table that codes the run-length and the run-ending character.
Huffman Coding is a very popular and relatively simple method of entropy coding, and used in MPEG-1 video to reduce the data size. The data is analyzed to find strings that repeat often. Those strings are then put into a special table, with the most frequently repeating data assigned the shortest code. This keeps the data as small as possible with this form of compression. Once the table is constructed, those strings in the data are replaced with their (much smaller) codes, which reference the appropriate entry in the table. The decoder simply reverses this process to produce the original data.
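The table-building step can be sketched with the classic greedy merge (a generic Huffman construction for illustration; MPEG-1 ships fixed, pre-computed tables rather than building them per stream):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix code: the most frequent symbols get the shortest codes."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap of (weight, tiebreak, {symbol: code-so-far}) subtrees.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # merge the two lightest subtrees
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        count += 1
        heapq.heappush(heap, (w1 + w2, count, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
assert len(codes["a"]) < len(codes["c"])          # frequent -> shorter
assert len(set(codes.values())) == 3              # all codes distinct
```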
This is the final step in the video encoding process, so the result of Huffman coding is known as the MPEG-1 video "bitstream."
GOP configurations for specific applications
I-frames store complete frame information within the frame and are therefore suited for random access. P-frames provide compression using motion vectors relative to the previous frame (I or P). B-frames provide maximum compression but require the previous as well as the next frame for computation. Therefore, processing of B-frames requires a larger buffer on the decoder side. A configuration of the Group of Pictures (GOP) should be selected based on these factors. I-frame-only sequences give the least compression, but are useful for random access, FF/FR and editability. I- and P-frame sequences give moderate compression but add a certain degree of random access and FF/FR functionality. I-, P- and B-frame sequences give very high compression but also increase the coding/decoding delay significantly. Such configurations are therefore not suited for video-telephony or video-conferencing applications.
The typical data rate of an I-frame is 1 bit per pixel while that of a P-frame is 0.1 bit per pixel and for a B-frame, 0.015 bit per pixel.
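Plugging these typical densities into a GOP pattern gives a rough size estimate (a back-of-the-envelope sketch using the figures quoted above; names are illustrative):

```python
# Rough per-frame bit densities quoted in the text (bits per pixel):
BPP = {"I": 1.0, "P": 0.1, "B": 0.015}

def gop_bits(width: int, height: int, pattern: str) -> float:
    """Estimate the size of one GOP from its frame-type pattern."""
    pixels = width * height
    return sum(pixels * BPP[t] for t in pattern)

# A 352x240 SIF frame with a 15-frame IBBPBB... GOP:
bits = gop_bits(352, 240, "IBBPBBPBBPBBPBB")
assert 130_000 < bits < 131_000      # roughly 131 kbit per GOP
```

At around 30 frames per second this works out to something on the order of 260 kbit/s of video, illustrating how P- and B-frames keep the rate far below the 1-bit-per-pixel cost of I-frames alone.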
Part 3: Audio
Part 3 of the MPEG-1 standard covers audio and is defined in ISO/IEC 11172-3.
MPEG-1 Audio utilizes psychoacoustics to significantly reduce the data rate required by an audio stream. It reduces or completely discards certain parts of the audio that it deduces that the human ear can't hear, either because they are in frequencies where the ear has limited sensitivity, or are masked by other (typically louder) sounds.
Channel Encoding:
Mono
Joint Stereo – intensity encoded
Joint Stereo – M/S encoded for Layer III only
Stereo
Dual (two uncorrelated mono channels)
Sampling rates: 32000, 44100, and 48000 Hz
Bitrates for Layer I: 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416 and 448 kbit/s
Bitrates for Layer II: 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320 and 384 kbit/s
Bitrates for Layer III: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s
MPEG-1 Audio is divided into 3 layers. Each higher layer is more computationally complex, and generally more efficient at lower bitrates than the previous. The layers are semi-backwards compatible, as higher layers reuse technologies implemented by the lower layers. A "full" Layer II decoder can also play Layer I audio, but not Layer III audio, although not all higher-level players are "full".
Layer I
MPEG-1 Audio Layer I is a simplified version of MPEG-1 Audio Layer II. Layer I uses a smaller 384-sample frame size for very low delay, and finer resolution. This is advantageous for applications like teleconferencing, studio editing, etc. It has lower complexity than Layer II to facilitate real-time encoding on the hardware available at the time.
Layer I saw limited adoption in its time, and most notably was used on Philips' defunct Digital Compact Cassette at a bitrate of 384 kbit/s. With the substantial performance improvements in digital processing since its introduction, Layer I quickly became unnecessary and obsolete.
Layer I audio files typically use the extension ".mp1" or sometimes ".m1a".
Layer II
MPEG-1 Audio Layer II (the first version of MP2, often informally called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple relative to MP3, AAC, etc.
History/MUSICAM
MPEG-1 Audio Layer II was derived from the MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) audio codec, developed by Centre commun d'études de télévision et télécommunications (CCETT), Philips, and Institut für Rundfunktechnik (IRT/CNET) as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting.
Most key features of MPEG-1 Audio were directly inherited from MUSICAM, including the filter bank, time-domain processing, audio frame sizes, etc. However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Audio Layer II standard. The widespread usage of the term MUSICAM to refer to Layer II is entirely incorrect and discouraged for both technical and legal reasons.
Technical details
MP2 is a time-domain encoder. It uses a low-delay 32 sub-band polyphase filter bank for time-frequency mapping, with overlapping ranges to prevent aliasing. The psychoacoustic model is based on the principles of auditory masking, simultaneous masking effects, and the absolute threshold of hearing (ATH). The size of a Layer II frame is fixed at 1152 samples (coefficients).
Time domain refers to how analysis and quantization is performed on short, discrete samples/chunks of the audio waveform. This offers low delay, as only a small number of samples are analyzed before encoding, as opposed to frequency-domain encoding (like MP3), which must analyze many times more samples before it can decide how to transform and output encoded audio. This also offers higher performance on complex, random and transient impulses (such as percussive instruments and applause), helping avoid artifacts like pre-echo.
The 32 sub-band filter bank returns 32 amplitude coefficients, one for each equal-sized frequency band/segment of the audio, which is about 700 Hz wide (depending on the audio's sampling frequency). The encoder then utilizes the psychoacoustic model to determine which sub-bands contain audio information that is less important, and so, where quantization will be inaudible, or at least much less noticeable.
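The "about 700 Hz" figure follows directly from splitting the Nyquist range into 32 equal bands (a one-line sketch; the function name is ours):

```python
def subband_width_hz(sample_rate: int, bands: int = 32) -> float:
    """Width of each analysis band: the Nyquist range split into
    32 equal segments."""
    return (sample_rate / 2) / bands

assert subband_width_hz(44_100) == 689.0625   # close to the ~700 Hz cited
assert subband_width_hz(48_000) == 750.0
```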
The psychoacoustic model is applied using a 1024-point fast Fourier transform (FFT). Of the 1152 samples per frame, 64 samples at the top and bottom of the frequency range are ignored for this analysis. They are presumably not significant enough to change the result. The psychoacoustic model uses an empirically determined masking model to determine which sub-bands contribute more to the masking threshold, and how much quantization noise each can contain without being perceived. Any sounds below the absolute threshold of hearing (ATH) are completely discarded. The available bits are then assigned to each sub-band accordingly.
Typically, sub-bands are less important if they contain quieter sounds (smaller coefficient) than a neighboring (i.e. similar frequency) sub-band with louder sounds (larger coefficient). Also, "noise" components typically have a more significant masking effect than "tonal" components.
Less significant sub-bands are reduced in accuracy by quantization. This basically involves compressing the frequency range (amplitude of the coefficient), i.e. raising the noise floor. Then computing an amplification factor, for the decoder to use to re-expand each sub-band to the proper frequency range.
Layer II can also optionally use intensity stereo coding, a form of joint stereo. This means that the frequencies above 6 kHz of both channels are combined/down-mixed into one single (mono) channel, but the "side channel" information on the relative intensity (volume, amplitude) of each channel is preserved and encoded into the bitstream separately. On playback, the single channel is played through left and right speakers, with the intensity information applied to each channel to give the illusion of stereo sound. This perceptual trick is known as "stereo irrelevancy". This can allow further reduction of the audio bitrate without much perceivable loss of fidelity, but is generally not used with higher bitrates as it does not provide very high quality (transparent) audio.
Quality
Subjective audio testing by experts, in the most critical conditions ever implemented, has shown MP2 to offer transparent audio compression at 256 kbit/s for 16-bit 44.1 kHz CD audio using the earliest reference implementation (more recent encoders should presumably perform even better). That (approximately) 1:6 compression ratio for CD audio is particularly impressive because it is quite close to the estimated upper limit of perceptual entropy, at just over 1:8. Achieving much higher compression is simply not possible without discarding some perceptible information.
MP2 remains a favoured lossy audio coding standard due to its particularly high audio coding performances on important audio material such as castanet, symphonic orchestra, male and female voices and particularly complex and high energy transients (impulses) like percussive sounds: triangle, glockenspiel and audience applause. More recent testing has shown that MPEG Multichannel (based on MP2), despite being compromised by an inferior matrixed mode (for the sake of backwards compatibility) rates just slightly lower than much more recent audio codecs, such as Dolby Digital (AC-3) and Advanced Audio Coding (AAC) (mostly within the margin of error—and substantially superior in some cases, such as audience applause). This is one reason that MP2 audio continues to be used extensively. The MPEG-2 AAC Stereo verification tests reached a vastly different conclusion, however, showing AAC to provide superior performance to MP2 at half the bitrate. The reason for this disparity with both earlier and later tests is not clear, but strangely, a sample of applause is notably absent from the latter test.
Layer II audio files typically use the extension ".mp2" or sometimes ".m2a".
Layer III
MPEG-1 Audio Layer III (the first version of MP3) is a lossy audio format designed to provide acceptable quality at about 64 kbit/s for monaural audio over single-channel (BRI) ISDN links, and 128 kbit/s for stereo sound.
History/ASPEC
MPEG-1 Audio Layer III was derived from the Adaptive Spectral Perceptual Entropy Coding (ASPEC) codec developed by Fraunhofer as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting. ASPEC was adapted to fit in with the Layer II model (frame size, filter bank, FFT, etc.), to become Layer III.
ASPEC was itself based on Multiple adaptive Spectral audio Coding (MSC) by E. F. Schroeder, Optimum Coding in the Frequency domain (OCF) the doctoral thesis by Karlheinz Brandenburg at the University of Erlangen-Nuremberg, Perceptual Transform Coding (PXFM) by J. D. Johnston at AT&T Bell Labs, and Transform coding of audio signals by Y. Mahieux and J. Petit at Institut für Rundfunktechnik (IRT/CNET).
Technical details
MP3 is a frequency-domain audio transform encoder. Even though it utilizes some of the lower layer functions, MP3 is quite different from MP2.
MP3 works on 1152 samples like MP2, but needs to take multiple frames for analysis before frequency-domain (MDCT) processing and quantization can be effective. It outputs a variable number of bits per frame, using a bit buffer to enable this variable bitrate (VBR) encoding while maintaining 1152-sample output frames. This causes a significantly longer delay before output, which has caused MP3 to be considered unsuitable for studio applications where editing or other processing needs to take place.
MP3 does not benefit from the 32 sub-band polyphase filter bank, instead just using an 18-point MDCT transformation on each sub-band output to split the data into 576 frequency components, and processing it in the frequency domain. This extra granularity allows MP3 to have a much finer psychoacoustic model, and more carefully apply appropriate quantization to each band, providing much better low-bitrate performance.
Frequency-domain processing imposes some limitations as well, causing a factor of 12 or 36× worse temporal resolution than Layer II. This causes quantization artifacts when transient sounds, like percussive events and other high-frequency events, spread over a larger window, resulting in audible smearing and pre-echo. MP3 uses pre-echo detection routines and VBR encoding, which allows it to temporarily increase the bitrate during difficult passages, in an attempt to reduce this effect. It is also able to switch from the normal 36-sample quantization window to three short 12-sample windows, to reduce the temporal (time) length of quantization artifacts. Yet in choosing a fairly small window size to make MP3's temporal response adequate enough to avoid the most serious artifacts, MP3 becomes much less efficient in frequency-domain compression of stationary, tonal components.
Being forced to use a hybrid time domain (filter bank)/frequency domain (MDCT) model to fit in with Layer II wastes processing time and compromises quality by introducing aliasing artifacts. MP3 has an aliasing cancellation stage specifically to mask this problem, which instead produces frequency-domain energy that must be encoded in the audio. This energy is pushed to the top of the frequency range, where most people have limited hearing, in the hope that the distortion it causes will be less audible.
Layer II's 1024-point FFT does not entirely cover all samples, and would omit several entire MP3 sub-bands where quantization factors must be determined. MP3 instead uses two passes of FFT analysis for spectral estimation, to calculate the global and individual masking thresholds. This allows it to cover all 1152 samples. Of the two passes, it uses the global masking threshold level from the more critical pass, the one with the most difficult audio.
In addition to Layer II's intensity encoded joint stereo, MP3 can use middle/side (mid/side, m/s, MS, matrixed) joint stereo. With mid/side stereo, certain frequency ranges of both channels are merged into a single (middle, mid, L+R) mono channel, while the sound difference between the left and right channels is stored as a separate (side, L-R) channel. Unlike intensity stereo, this process does not discard any audio information. When combined with quantization, however, it can exaggerate artifacts.
If the difference between the left and right channels is small, the side channel will be small, which will offer as much as a 50% bitrate savings, and associated quality improvement. If the difference between left and right is large, standard (discrete, left/right) stereo encoding may be preferred, as mid/side joint stereo will not provide any benefits. An MP3 encoder can switch between m/s stereo and full stereo on a frame-by-frame basis.
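Mid/side matrixing is just a sum/difference transform, lossless on its own; a minimal sketch (function names are ours):

```python
def ms_encode(left, right):
    """Matrix L/R into mid/side. Lossless by itself; losses only arise
    from the quantization applied afterwards."""
    mid  = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Invert the matrix: L = mid + side, R = mid - side."""
    left  = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

L = [0.5, 0.25, -0.125]
R = [0.5, 0.125, -0.25]            # nearly identical channels
mid, side = ms_encode(L, R)
assert ms_decode(mid, side) == (L, R)   # round-trip is exact
print(side)                        # small values -> few bits after quantization
```

When the channels are similar, the side signal is near zero and quantizes to very few bits, which is exactly where the bitrate saving comes from.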
Unlike Layers I and II, MP3 uses variable-length Huffman coding (after perceptual) to further reduce the bitrate, without any further quality loss.
Quality
MP3's more fine-grained and selective quantization proves notably superior to MP2 at lower bitrates. It is able to provide nearly equivalent audio quality to Layer II at an approximately 15% lower bitrate. 128 kbit/s is considered the "sweet spot" for MP3: it provides generally acceptable quality stereo sound on most music, and there are diminishing quality improvements from increasing the bitrate further. MP3 is also regarded as exhibiting artifacts that are less annoying than Layer II's when both are used at bitrates too low to provide faithful reproduction.
Layer III audio files use the extension ".mp3".
MPEG-2 audio extensions
The MPEG-2 standard includes several extensions to MPEG-1 Audio. These are known as MPEG-2 BC – backwards compatible with MPEG-1 Audio. MPEG-2 Audio is defined in ISO/IEC 13818-3.
MPEG Multichannel – Backward compatible 5.1-channel surround sound.
Sampling rates: 16000, 22050, and 24000 Hz
Bitrates: 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s
These sampling rates are exactly half of those originally defined for MPEG-1 Audio. They were introduced to maintain higher quality sound when encoding audio at lower bitrates. The even lower bitrates were introduced because tests showed that MPEG-1 Audio could provide higher quality than existing very low bitrate (i.e. speech) audio codecs.
Part 4: Conformance testing
Part 4 of the MPEG-1 standard covers conformance testing, and is defined in ISO/IEC-11172-4.
Conformance: Procedures for testing conformance.
Provides two sets of guidelines and reference bitstreams for testing the conformance of MPEG-1 audio and video decoders, as well as the bitstreams produced by an encoder.
Part 5: Reference software
Part 5 of the MPEG-1 standard includes reference software, and is defined in ISO/IEC TR 11172–5.
Simulation: Reference software.
C reference code for encoding and decoding of audio and video, as well as multiplexing and demultiplexing.
This includes the ISO Dist10 audio encoder code, which LAME and TooLAME were originally based upon.
File extension
.mpg is one of a number of file extensions for MPEG-1 or MPEG-2 audio and video compression. MPEG-1 Part 2 video is rare nowadays, and this extension typically refers to an MPEG program stream (defined in MPEG-1 and MPEG-2) or MPEG transport stream (defined in MPEG-2). Other suffixes such as .m2ts also exist specifying the precise container, in this case MPEG-2 TS, but this has little relevance to MPEG-1 media.
.mp3 is the most common extension for files containing MP3 audio (typically MPEG-1 Audio, sometimes MPEG-2 Audio). An MP3 file is typically an uncontained stream of raw audio; the conventional way to tag MP3 files is by writing data to "garbage" segments of each frame, which preserve the media information but are discarded by the player. This is similar in many respects to how raw .AAC files are tagged (but this is less supported nowadays, e.g. iTunes).
Note that although it would apply, the .mpg extension is not normally used for raw AAC or for AAC in MPEG-2 Part 7 containers; the .aac extension normally denotes these audio files.
See also
MPEG – The Moving Picture Experts Group, developers of the MPEG-1 standard
MP3 – Additional less technical details about MPEG-1 Audio Layer III
MPEG Multichannel – Backwards compatible 5.1 channel surround sound extension to MPEG-1 Audio Layer II
MPEG-2 – The direct successor to the MPEG-1 standard.
ISO/IEC JTC 1/SC 29
Implementations
Libavcodec includes MPEG-1/2 video/audio encoders and decoders
Mjpegtools – MPEG-1/2 video/audio encoders
TooLAME – A high quality MPEG-1 Audio Layer II encoder.
LAME – A high quality MP3 audio encoder.
Musepack – A format originally based on MPEG-1 Audio Layer II, but now incompatible.
References
External links
Official Web Page of the Moving Picture Experts Group (MPEG), a working group of ISO/IEC
MPEG Industry Forum Organization
Source Code to Implement MPEG-1
A simple, concise explanation from Berkeley Multimedia Research Center
Audio codecs
Video codecs
MPEG
ISO/IEC standards
Computer-related introductions in 1993 |
Mumia Abu-Jamal
Mumia Abu-Jamal (born Wesley Cook; April 24, 1954) is an American political activist and journalist who was convicted of murder and sentenced to death in 1982 for the 1981 murder of Philadelphia police officer Daniel Faulkner. He became widely known while on death row for his writings and commentary on the criminal justice system in the United States. After numerous appeals, his death penalty sentence was overturned by a Federal court. In 2011, the prosecution agreed to a sentence of life imprisonment without parole. He entered the general prison population early the following year.
Beginning at the age of 14 in 1968, Abu-Jamal became involved with the Black Panther Party and was a member until October 1970, leaving the party at age 16. After leaving, he completed his high school education, and later became a radio reporter. He eventually served as president of the Philadelphia Association of Black Journalists (1978–1980). He supported the Philadelphia organization MOVE and covered the 1978 confrontation in which one police officer was killed. The MOVE Nine were the members who were arrested and convicted of murder in that case.
Since 1982, the murder trial of Abu-Jamal has been seriously criticized for constitutional failings; some have claimed that he is innocent, and many opposed his death sentence. The Faulkner family, politicians, and other groups involved with law enforcement, state and city governments argue that Abu-Jamal's trial was fair, his guilt beyond question, and his death sentence justified.
When his death sentence was overturned by a Federal court in 2001, he was described as "perhaps the world's best-known death-row inmate" by The New York Times. During his imprisonment, Abu-Jamal has published books and commentaries on social and political issues; his first book was Live from Death Row (1995).
Early life and activism
He was born Wesley Cook in Philadelphia, Pennsylvania, where he grew up. He has a younger brother named William. They attended local public schools.
In 1968, a high school teacher, a Kenyan instructing a class on African cultures, encouraged the students to take African or Arabic names for classroom use; he gave Cook the name "Mumia". According to Abu-Jamal, "Mumia" means "Prince" and was the name of a Kenyan anti-colonial African nationalist who fought against the British before Kenyan independence.
Involvement with the Black Panthers
Abu-Jamal has described being "kicked ... into the Black Panther Party" as a teenager of 14, after suffering a beating from "white racists" and a policeman for trying to disrupt a 1968 rally for Independent candidate George Wallace, former governor of Alabama, who was running on a racist platform. He went on to help form the Philadelphia branch of the Black Panther Party with Defense Captain Reggie Schell and other Panthers, and was appointed the chapter's "Lieutenant of Information," responsible for writing information and news communications. In an early interview, Abu-Jamal quoted Mao Zedong, saying that "political power grows out of the barrel of a gun". That same year, he dropped out of Benjamin Franklin High School and began living at the branch's headquarters.
He spent late 1969 in New York City and early 1970 in Oakland, living and working with BPP colleagues in those cities; the party had been founded in Oakland. He was a party member from May 1969 until October 1970. During this period, he was subject to illegal surveillance as part of the Federal Bureau of Investigation's COINTELPRO program, with which the Philadelphia police cooperated. The FBI was working to infiltrate black radical groups and to disrupt them by creating internal dissension.
Return to education
After leaving the Panthers, Abu-Jamal returned as a student to his former high school. He was suspended for distributing literature calling for "black revolutionary student power". He led unsuccessful protests to change the school name to Malcolm X High, to honor the major African-American leader who had been killed in New York by political opponents.
After attaining his GED, Abu-Jamal studied briefly at Goddard College in rural Vermont. He returned to Philadelphia.
Marriages and family
Cook adopted the surname Abu-Jamal ("father of Jamal" in Arabic) after the birth of his first child, son Jamal, on July 18, 1971. He married Jamal's mother Biba in 1973, but they did not stay together long. Their daughter, Lateefa, was born shortly after the wedding. The couple divorced.
In 1977 Abu-Jamal married again, to his second wife, Marilyn (known as "Peachie"). Their son, Mazi, was born in early 1978. By 1981, Abu-Jamal had divorced Peachie and had married his third (and current) wife, Wadiya.
Radio journalism career
By 1975 Abu-Jamal was working in radio newscasting, first at Temple University's WRTI and then at commercial enterprises. In 1975, he was employed at radio station WHAT, and he became host of a weekly feature program at WCAU-FM in 1978. He also worked for brief periods at radio station WPEN. He became active in the local chapter of the Marijuana Users Association of America.
From 1979 to 1981 he worked at National Public Radio (NPR) affiliate WHYY. The management asked him to resign, saying that he did not maintain a sufficiently objective approach in his presentation of news. As a radio journalist, Abu-Jamal was renowned for identifying with and covering the MOVE anarcho-primitivist commune in West Philadelphia's Powelton Village neighborhood. He reported on the 1979–80 trial of certain members (the "MOVE Nine"), who were convicted of the murder of police officer James Ramp. Abu-Jamal had several high-profile interviews, including with Julius Erving, Bob Marley and Alex Haley. He was elected president of the Philadelphia Association of Black Journalists.
Before joining MOVE, Abu-Jamal reported on the organization. When he joined MOVE, he said it was because of his love of the people in the organization. Thinking back on it later, he said he "was probably enraged as well".
In December 1981, Abu-Jamal was working as a taxicab driver in Philadelphia two nights a week to supplement his income. He had been working part-time as a reporter for WDAS, then an African-American-oriented and minority-owned radio station.
Traffic stop and death of officer Faulkner
At 3:55 am on December 9, 1981, in Philadelphia, close to the intersection of 13th and Locust streets, Philadelphia Police Department officer Daniel Faulkner conducted a traffic stop on a vehicle belonging to and driven by William Cook, Abu-Jamal's younger brother. Faulkner and Cook became engaged in a physical confrontation. Driving his cab in the vicinity, Abu-Jamal observed the altercation, parked, and ran across the street toward Cook's car. Faulkner was shot in the back and the face; Faulkner himself shot Abu-Jamal in the stomach. Faulkner died at the scene from the gunshot to his head.
Arrest and trial
Police arrived and arrested Abu-Jamal, who was found to be wearing a shoulder holster. His revolver, which had five spent cartridges, was beside him. He was taken directly from the scene of the shooting to Thomas Jefferson University Hospital, where he received treatment for his wound. He was next taken to Police Headquarters, where he was charged and held for trial in the first-degree murder of Officer Faulkner.
Prosecution case at trial
The prosecution presented four witnesses to the court about the shootings. Robert Chobert, a cab driver who testified he was parked behind Faulkner, identified Abu-Jamal as the shooter. Cynthia White testified that Abu-Jamal emerged from a nearby parking lot and shot Faulkner. Michael Scanlan, a motorist, testified that from two car lengths away he saw a man matching Abu-Jamal's description run across the street from a parking lot and shoot Faulkner. Albert Magilton testified to seeing Faulkner pull over Cook's car. As Abu-Jamal started to cross the street toward them, Magilton turned away and did not see what happened next.
The prosecution presented two witnesses from the hospital where Abu-Jamal was treated. Hospital security guard Priscilla Durham and police officer Garry Bell testified that Abu-Jamal said in the hospital, "I shot the motherfucker, and I hope the motherfucker dies."
A .38 caliber Charter Arms revolver, belonging to Abu-Jamal, with five spent cartridges, was retrieved beside him at the scene. He was wearing a shoulder holster. Anthony Paul, the Supervisor of the Philadelphia Police Department's firearms identification unit, testified at trial that the cartridge cases and rifling characteristics of the weapon were consistent with bullet fragments taken from Faulkner's body. Tests to confirm that Abu-Jamal had handled and fired the weapon were not performed. Contact with arresting police and other surfaces at the scene could have compromised the forensic value of such tests.
Defense case at trial
The defense maintained that Abu-Jamal was innocent, and that the prosecution witnesses were unreliable. The defense presented nine character witnesses, including poet Sonia Sanchez, who testified that Abu-Jamal was "viewed by the black community as a creative, articulate, peaceful, genial man". Another defense witness, Dessie Hightower, testified that he saw a man running along the street shortly after the shooting, although he did not see the shooting itself. His testimony contributed to the development of a "running man theory", based on the possibility that a "running man" may have been the shooter. Veronica Jones also testified for the defense, but she did not testify to having seen another man. Other potential defense witnesses refused to appear in court. Abu-Jamal did not testify in his own defense, nor did his brother, William Cook. Cook had repeatedly told investigators at the crime scene: "I ain't got nothing to do with this!".
Verdict and sentence
After three hours of deliberations, the jury presented a unanimous guilty verdict.
In the sentencing phase of the trial, Abu-Jamal read to the jury from a prepared statement. He was cross-examined about issues relevant to the assessment of his character by Joseph McGill, the prosecuting attorney.
In his statement, Abu-Jamal criticized his attorney as a "legal trained lawyer", who was imposed on him against his will and who "knew he was inadequate to the task and chose to follow the directions of this black-robed conspirator [referring to the judge], Albert Sabo, even if it meant ignoring my directions." He claimed that his rights had been "deceitfully stolen" from him by [Judge] Sabo, particularly focusing on the denial of his request to receive defense assistance from John Africa, who was not an attorney, and being prevented from proceeding pro se. He quoted remarks of John Africa, and said:
Abu-Jamal was sentenced to death by the unanimous decision of the jury. Amnesty International has objected to the introduction by the prosecution at the time of his sentencing of statements from when he was an activist as a youth. It also protested the politicization of the trial, noting that there was documented recent history in Philadelphia of police abuse and corruption, including fabricated evidence and use of excessive force. Amnesty International concluded "that the proceedings used to convict and sentence Mumia Abu-Jamal to death were in violation of minimum international standards that govern fair trial procedures and the use of the death penalty".
Appeals and review
State appeals
The Supreme Court of Pennsylvania on March 6, 1989, heard and rejected a direct appeal of his conviction. It subsequently denied rehearing. The Supreme Court of the United States denied his petition for writ of certiorari on October 1, 1990, and denied his petition for rehearing twice up to June 10, 1991.
On June 1, 1995, Abu-Jamal's death warrant was signed by Pennsylvania Governor Tom Ridge. Its execution was suspended while Abu-Jamal pursued state post-conviction review. At the post-conviction review hearings, new witnesses were called. William "Dales" Singletary testified that he saw the shooting, and that the gunman was the passenger in Cook's car. Singletary's account contained discrepancies which rendered it "not credible" in the opinion of the court.
The six judges of the Supreme Court of Pennsylvania ruled unanimously that all issues raised by Abu-Jamal, including the claim of ineffective assistance of counsel, were without merit. The Supreme Court of the United States denied a petition for certiorari against that decision on October 4, 1999, enabling Ridge to sign a second death warrant on October 13, 1999. Its execution was stayed as Abu-Jamal began to seek federal habeas corpus review.
In 1999, Arnold Beverly claimed that he and an unnamed assailant, not Mumia Abu-Jamal, shot Daniel Faulkner as part of a contract killing because Faulkner was interfering with graft and payoff to corrupt police. As Abu-Jamal's defense team prepared another appeal in 2001, they were divided over use of the Beverly affidavit. Some thought it usable and others rejected Beverly's story as "not credible".
Private investigator George Newman claimed in 2001 that Chobert had recanted his testimony. Commentators noted that police and news photographs of the crime scene did not show Chobert's taxi, and that Cynthia White, the only witness at the original trial to testify to seeing the taxi, had previously provided crime scene descriptions that omitted it. Cynthia White was declared to be dead by the state of New Jersey in 1992, but Pamela Jenkins claimed that she saw White alive as late as 1997. The Free Mumia Coalition has claimed that White was a police informant and that she falsified her testimony against Abu-Jamal.
Kenneth Pate, who was imprisoned with Abu-Jamal on other charges, has since claimed that his step-sister Priscilla Durham, a hospital security guard, admitted later she had not heard the "hospital confession" to which she had testified at trial. The hospital doctors said that Abu-Jamal was "on the verge of fainting" when brought in, and they did not hear any such confession.
In 2008, the Supreme Court of Pennsylvania rejected a further request from Abu-Jamal for a hearing into claims that the trial witnesses perjured themselves, on the grounds that he had waited too long before filing the appeal.
On March 26, 2012, the Supreme Court of Pennsylvania rejected his appeal for retrial. His defense had asserted, based on a 2009 report by the National Academy of Sciences, that forensic evidence presented by the prosecution and accepted into evidence in the original trial was unreliable. This was reported as Abu-Jamal's last legal appeal.
On April 30, 2018, the Pennsylvania Supreme Court ruled that Abu-Jamal would not be immediately granted another appeal and that the proceedings had to continue until August 30 of that year. The defense argued that former Pennsylvania Supreme Court Chief Justice Ronald D. Castille should have recused himself from the 2012 appeals decision after his involvement as Philadelphia District Attorney (DA) in the 1989 appeal. Both sides of the 2018 proceedings repeatedly cited a 1990 letter sent by Castille to then-Governor Bob Casey, urging Casey to sign the execution warrants of those convicted of murdering police. This letter, demanding Casey send "a clear and dramatic message to all cop killers," was cited as one of many reasons to suspect Castille's bias in the case. Philadelphia's current DA Larry Krasner stated he could not find any document supporting the defense's claim. On August 30, 2018, the proceedings to determine another appeal were once again extended and a ruling on the matter was delayed for at least 60 more days.
Federal District Court 2001 ruling
The Free Mumia Coalition published statements by William Cook and his brother Abu-Jamal in the spring of 2001. Cook, who had been stopped by the police officer, had not made any statement before April 29, 2001, and did not testify at his brother's trial. In 2001 he said that he had not seen who had shot Faulkner. Abu-Jamal did not make any public statements about Faulkner's murder until May 4, 2001. In his version of events, he claimed that he was sitting in his cab across the street when he heard shouting, saw a police vehicle, and heard the sound of gunshots. Upon seeing his brother appearing disoriented across the street, Abu-Jamal ran to him from the parking lot and was shot by a police officer.
In 2001 Judge William H. Yohn, Jr. of the United States District Court for the Eastern District of Pennsylvania upheld the conviction, saying that Abu-Jamal did not have the right to a new trial. But he vacated the sentence of death on December 18, 2001, citing irregularities in the penalty phase of the trial and the original process of sentencing. Particularly, he said that
He ordered the State of Pennsylvania to commence new sentencing proceedings within 180 days, and ruled unconstitutional the requirement that a jury be unanimous in its finding of circumstances mitigating against a sentence of death.
Eliot Grossman and Marlene Kamish, attorneys for Abu-Jamal, criticized the ruling on the grounds that it denied the possibility of a trial de novo, at which they could introduce evidence that their client had been framed. Prosecutors also criticized the ruling. Officer Faulkner's widow Maureen said the judgment would allow Abu-Jamal, whom she described as a "remorseless, hate-filled killer", to "be permitted to enjoy the pleasures that come from simply being alive". Both parties appealed.
Federal appeal and review
On December 6, 2005, the Third Circuit Court of Appeals admitted four issues for appeal of the ruling of the District Court:
in relation to sentencing, whether the jury verdict form had been flawed and the judge's instructions to the jury had been confusing;
in relation to conviction and sentencing, whether racial bias in jury selection existed to an extent tending to produce an inherently biased jury and therefore an unfair trial (the Batson claim);
in relation to conviction, whether the prosecutor improperly attempted to reduce jurors' sense of responsibility by telling them that a guilty verdict would be subsequently vetted and subject to appeal; and
in relation to post-conviction review hearings in 1995–6, whether the presiding judge, who had also presided at the trial, demonstrated unacceptable bias in his conduct.
The Third Circuit Court heard oral arguments in the appeals on May 17, 2007, at the United States Courthouse in Philadelphia. The appeal panel consisted of Chief Judge Anthony Joseph Scirica, Judge Thomas Ambro, and Judge Robert Cowen. The Commonwealth of Pennsylvania sought to reinstate the sentence of death, on the basis that Yohn's ruling was flawed, as he should have deferred to the Pennsylvania Supreme Court which had already ruled on the issue of sentencing. The prosecution said that the Batson claim was invalid because Abu-Jamal made no complaints during the original jury selection.
The resulting jury was racially mixed, with 2 blacks and 10 whites at the time of the unanimous conviction, but defense counsel told the Third Circuit Court that Abu-Jamal did not get a fair trial because the jury was racially biased, misinformed, and the judge was a racist. He noted that the prosecution used eleven out of fourteen peremptory challenges to eliminate prospective black jurors. Terri Maurer-Carter, a former Philadelphia court stenographer, stated in a 2001 affidavit that she overheard Judge Sabo say "Yeah, and I'm going to help them fry the nigger" in the course of a conversation with three people present regarding Abu-Jamal's case. Sabo denied having made any such comment.
On March 27, 2008, the three-judge panel issued a majority 2–1 opinion upholding Yohn's 2001 opinion but rejecting the bias and Batson claims, with Judge Ambro dissenting on the Batson issue. On July 22, 2008, Abu-Jamal's formal petition seeking reconsideration of the decision by the full Third Circuit panel of 12 judges was denied. On April 6, 2009, the United States Supreme Court refused to hear Abu-Jamal's appeal, allowing his conviction to stand.
On January 19, 2010, the Supreme Court ordered the appeals court to reconsider its decision to rescind the death penalty. The same three-judge panel convened in Philadelphia on November 9, 2010, to hear oral argument. On April 26, 2011, the Third Circuit Court of Appeals reaffirmed its prior decision to vacate the death sentence on the grounds that the jury instructions and verdict form were ambiguous and confusing. The Supreme Court declined to hear the case in October.
Death penalty dropped
On December 7, 2011, District Attorney of Philadelphia R. Seth Williams announced that prosecutors, with the support of the victim's family, would no longer seek the death penalty for Abu-Jamal and would accept a sentence of life imprisonment without parole. This sentence was reaffirmed by the Superior Court of Pennsylvania on July 9, 2013.
After the press conference on the sentence, widow Maureen Faulkner said that she did not want to relive the trauma of another trial. She understood that it would be extremely difficult to present the case against Abu-Jamal again, after the passage of 30 years and the deaths of several key witnesses. She also reiterated her belief that Abu-Jamal will be punished further after death.
Life as a prisoner
In 1991 Abu-Jamal published an essay in the Yale Law Journal, on the death penalty and his death row experience. In May 1994, Abu-Jamal was engaged by National Public Radio's All Things Considered program to deliver a series of monthly three-minute commentaries on crime and punishment. The broadcast plans and commercial arrangement were canceled following condemnations from, among others, the Fraternal Order of Police and Senate Minority Leader Bob Dole. Abu-Jamal sued NPR for not airing his work, but a federal judge dismissed the suit. His commentaries later were published in May 1995 as part of his first book, Live from Death Row.
In 1996, he completed a B.A. degree via correspondence classes at Goddard College, which he had attended for a time as a young man. He has been invited as commencement speaker by a number of colleges, and has participated via recordings. In 1999, Abu-Jamal was invited to record a keynote address for the graduating class at Evergreen State College in Washington State. The event was protested by some. In 2000, he recorded a commencement address for Antioch College. The now defunct New College of California School of Law presented him with an honorary degree "for his struggle to resist the death penalty."
On October 5, 2014, he gave the commencement speech at Goddard College, via playback of a recording. As before, the choice of Abu-Jamal was controversial. Ten days later, the Pennsylvania legislature passed an addition to the Crime Victims Act called "Revictimization Relief." The new provision is intended to prevent actions that cause "a temporary or permanent state of mental anguish" to those who have previously been victimized by crime. It was signed by Republican governor Tom Corbett five days later. Commentators suggest that the bill was directed to control Abu-Jamal's journalism, book publication, and public speaking, and that it would be challenged on the grounds of free speech.
With occasional interruptions due to prison disciplinary actions, Abu-Jamal has for many years been a regular commentator on an online broadcast, sponsored by Prison Radio. He also is published as a regular columnist for Junge Welt, a Marxist newspaper in Germany. For almost a decade, Abu-Jamal taught introductory courses in Georgist economics by correspondence to other prisoners around the world.
In addition, he has written and published several books: Live From Death Row (1995), a diary of life on Pennsylvania's death row; All Things Censored (2000), a collection of essays examining issues of crime and punishment; Death Blossoms: Reflections from a Prisoner of Conscience (2003), in which he explores religious themes; and We Want Freedom: A Life in the Black Panther Party (2004), a history of the Black Panthers that draws on his own experience and research, and discusses the federal government's program known as COINTELPRO, to disrupt black activist organizations.
In 1995, Abu-Jamal was punished with solitary confinement for engaging in entrepreneurship contrary to prison regulations. Subsequent to the airing of the 1996 HBO documentary Mumia Abu-Jamal: A Case For Reasonable Doubt?, which included footage from visitation interviews conducted with him, the Pennsylvania Department of Corrections banned outsiders from using any recording equipment in state prisons.
In litigation before the U.S. Court of Appeals, in 1998 Abu-Jamal successfully established his right while in prison to write for financial gain. The same litigation also established that the Pennsylvania Department of Corrections had illegally opened his mail in an attempt to establish whether he was earning money by his writing.
When, for a brief time in August 1999, Abu-Jamal began delivering his radio commentaries live on the Pacifica Network's Democracy Now! weekday radio newsmagazine, prison staff severed the connecting wires of his telephone from their mounting in mid-performance. He was later allowed to resume his broadcasts, and hundreds of his broadcasts have been aired on Pacifica Radio.
Following the overturning of his death sentence, Abu-Jamal was sentenced to life in prison in December 2011. At the end of January 2012, he was shifted from the isolation of death row into the general prison population at State Correctional Institution – Mahanoy.
In August 2015 his attorneys filed suit in the U.S. District Court for the Middle District of Pennsylvania, alleging that he has not received appropriate medical care for his serious health conditions. In April 2021, he tested positive for COVID-19 and was scheduled for heart surgery to relieve blocked coronary arteries.
Popular support and opposition
Labor unions, politicians, advocates, educators, the NAACP Legal Defense and Educational Fund, and human rights advocacy organizations such as Human Rights Watch and Amnesty International have expressed concern about the impartiality of the trial of Abu-Jamal. Amnesty International neither takes a position on the guilt or innocence of Abu-Jamal nor classifies him as a political prisoner.
The family of Daniel Faulkner, the Commonwealth of Pennsylvania, the City of Philadelphia, politicians, and the Fraternal Order of Police have continued to support the original trial and sentencing of the journalist. In August 1999, the Fraternal Order of Police called for an economic boycott against all individuals and organizations that support Abu-Jamal. Many such groups operate within the Prison-Industrial Complex, a system which Abu-Jamal has frequently criticized.
Partly based on his own writing, Abu-Jamal and his cause have become widely known internationally, and other groups have classified him as a political prisoner. About 25 cities, including Montreal, Palermo, and Paris, have made him an honorary citizen.
In 2001, he received the sixth biennial Erich Mühsam Prize, named after an anarcho-communist essayist, which recognizes activism in line with that of its namesake. In October 2002, he was made an honorary member of the German political organization Society of People Persecuted by the Nazi Regime – Federation of Anti-Fascists (VVN-BdA).
On April 29, 2006, a newly paved road in the Parisian suburb of Saint-Denis was named Rue Mumia Abu-Jamal in his honor. In protest of the street-naming, U.S. Congressman Michael Fitzpatrick and Senator Rick Santorum, both members of the Republican Party of Pennsylvania, introduced resolutions in both Houses of Congress condemning the decision. The House of Representatives voted 368–31 in favor of Fitzpatrick's resolution. In December 2006, the 25th anniversary of the murder, the executive committee of the Republican Party for the 59th Ward of the City of Philadelphia—covering approximately Germantown, Philadelphia—filed two criminal complaints in the French legal system against the city of Paris and the city of Saint-Denis, accusing the municipalities of "glorifying" Abu-Jamal and alleging the offense "apology or denial of crime" in respect of their actions.
In 2007, the widow of Officer Faulkner co-authored a book with Philadelphia radio journalist Michael Smerconish titled Murdered by Mumia: A Life Sentence of Pain, Loss, and Injustice. The book was part memoir of Faulkner's widow, and part discussion in which they chronicled Abu-Jamal's trial and discussed evidence for his conviction. They also discussed support for the death penalty.
In early 2014, President Barack Obama nominated Debo Adegbile, a former lawyer for the NAACP Legal Defense Fund, to head the civil rights division of the Justice Department. He had worked on Abu-Jamal's case, and his nomination was rejected by the U.S. Senate on a bipartisan basis because of that.
On April 10, 2015, Marylin Zuniga, a teacher at Forest Street Elementary School in Orange, New Jersey, was suspended without pay after asking her students to write cards to Abu-Jamal, who was ill in prison due to complications from diabetes, without approval from the school or parents. Some parents and police leaders denounced her actions. On the other hand, community members, parents, teachers, and professors expressed their support and condemned Zuniga's suspension. Scholars and educators nationwide, including Noam Chomsky, Chris Hedges and Cornel West among others, signed a letter calling for her immediate reinstatement. On May 13, 2015, the Orange Preparatory Academy board voted to dismiss Marylin Zuniga after hearing from her and several of her supporters.
Written works
Have Black Lives Ever Mattered?, City Lights Publishers (2017)
Writing on the Wall: Selected Prison Writings of Mumia Abu-Jamal, City Lights Publishers (2015)
The Classroom and the Cell: Conversations on Black Life in America, Third World Press (2011)
Jailhouse Lawyers: Prisoners Defending Prisoners v. the U.S.A., City Lights Publishers (2009)
We Want Freedom: A Life in the Black Panther Party, South End Press (2008)
Faith Of Our Fathers: An Examination of the Spiritual Life of African and African-American People, Africa World Press (2003)
All Things Censored, Seven Stories Press (2000)
Death Blossoms: Reflections from a Prisoner of Conscience, Plough Publishing House (1997)
Live from Death Row, Harper Perennial (1996)
Representation in popular culture
HBO aired the documentary film Mumia Abu-Jamal: A Case For Reasonable Doubt? in 1996; this 57-minute film about the 1982 murder trial is directed by John Edginton. There are two versions by Edginton, both produced by Otmoor Productions. The second is 72 minutes long and contains additional information by witnesses.
Political hip hop artist Immortal Technique featured Abu-Jamal on his second album Revolutionary Vol. 2.
The punk band Anti-Flag has a speech from Mumia Abu-Jamal in the intro to their song "The Modern Rome Burning" from their 2008 album The Bright Lights of America. The speech also appears on the end of their preceding track "Vices".
The documentary film In Prison My Whole Life (2008), directed by Marc Evans, and written by Evans and William Francome, explores the life of Abu-Jamal.
See also
Black Lives Matter
In Prison My Whole Life – 2008 documentary film
References
External links
Interview on the Mumia-Abu-Jamal Case, Part 1, 1995-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress)
Interview on the Mumia-Abu-Jamal Case, Part 2, 1995-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress)
Interview on the Mumia-Abu-Jamal Case, Part 3, 1996-11-01, In Black America; National Association of Black Journalists, KUT Radio, American Archive of Public Broadcasting (WGBH and the Library of Congress)
Video
1996 interview with Mumia Abu-Jamal, by Monica Moorehead and Larry Holmes of Workers World Party
Competing Films Offer Differing Views – video report by Democracy Now!
Mumia: Long Distance Revolutionary, 2012 documentary film
Mumia Abu-Jamal: Prison Industrial Complex, Interview with Mumia discussing the prison-industrial complex
Supporter websites
Free Mumia Abu-Jamal Coalition (New York City)
Journalists for Mumia
Opponent websites
Fraternal Order of Police news, press releases, and communications relating to Mumia Abu-Jamal
Daniel Faulkner
Justice for Daniel Faulkner
1954 births
Living people
African-American journalists
African-American writers
Alternative Tentacles artists
American anti–death penalty activists
American columnists
American newspaper reporters and correspondents
American people convicted of murdering police officers
American political writers
American prisoners sentenced to death
American radio reporters and correspondents
American male journalists
Members of the Black Panther Party
Converts to Islam
Criminals from Philadelphia
Goddard College alumni
Human rights activists
American Marxist journalists
Pennsylvania political activists
People convicted of murder by Pennsylvania
Prisoners sentenced to death by Pennsylvania
Writers from Philadelphia
Anti-globalization activists |
Multiplicative function

In number theory, a multiplicative function is an arithmetic function f(n) of a positive integer n with the property that f(1) = 1 and
f(ab) = f(a) f(b) whenever a and b are coprime.
An arithmetic function f(n) is said to be completely multiplicative (or totally multiplicative) if f(1) = 1 and f(ab) = f(a)f(b) holds for all positive integers a and b, even when they are not coprime.
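The distinction can be checked by brute force. The short Python sketch below (the helper name phi is illustrative) verifies that Euler's totient is multiplicative but not completely multiplicative:

```python
from math import gcd

def phi(n):
    # Euler's totient by brute force: count 1 <= k <= n with gcd(k, n) == 1.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi is multiplicative: phi(ab) == phi(a) * phi(b) whenever gcd(a, b) == 1 ...
for a in range(1, 30):
    for b in range(1, 30):
        if gcd(a, b) == 1:
            assert phi(a * b) == phi(a) * phi(b)

# ... but not completely multiplicative: phi(2 * 2) = 2, while phi(2) * phi(2) = 1.
assert phi(4) != phi(2) * phi(2)
```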
Examples
Some multiplicative functions are defined to make formulas easier to write:
1(n): the constant function, defined by 1(n) = 1 (completely multiplicative)
Id(n): identity function, defined by Id(n) = n (completely multiplicative)
Id_k(n): the power functions, defined by Id_k(n) = n^k for any complex number k (completely multiplicative). As special cases we have
Id_0(n) = 1(n) and
Id_1(n) = Id(n).
ε(n): the function defined by ε(n) = 1 if n = 1 and 0 otherwise, sometimes called multiplication unit for Dirichlet convolution or simply the unit function (completely multiplicative). Sometimes written as u(n), but not to be confused with μ(n).
1C(n), the indicator function of the set C ⊂ Z, for certain sets C. The indicator function 1C(n) is multiplicative precisely when the set C has the following property for any coprime numbers a and b: the product ab is in C if and only if the numbers a and b are both themselves in C. This is the case if C is the set of squares, cubes, or k-th powers, or if C is the set of square-free numbers.
Other examples of multiplicative functions include many functions of importance in number theory, such as:
gcd(n,k): the greatest common divisor of n and k, as a function of n, where k is a fixed integer.
φ(n): Euler's totient function, counting the positive integers coprime to (but not greater than) n
μ(n): the Möbius function, the parity (−1 for odd, +1 for even) of the number of prime factors of square-free numbers; 0 if n is not square-free
σ_k(n): the divisor function, which is the sum of the k-th powers of all the positive divisors of n (where k may be any complex number). As special cases we have
σ_0(n) = d(n), the number of positive divisors of n,
σ_1(n) = σ(n), the sum of all the positive divisors of n.
a(n): the number of non-isomorphic abelian groups of order n.
λ(n): the Liouville function, λ(n) = (−1)^Ω(n), where Ω(n) is the total number of primes (counted with multiplicity) dividing n (completely multiplicative).
γ(n), defined by γ(n) = (−1)^ω(n), where the additive function ω(n) is the number of distinct primes dividing n.
τ(n): the Ramanujan tau function.
All Dirichlet characters are completely multiplicative functions. For example
(n/p), the Legendre symbol, considered as a function of n where p is a fixed prime number.
An example of a non-multiplicative function is the arithmetic function r_2(n), the number of representations of n as a sum of squares of two integers, positive, negative, or zero, where in counting the number of ways, reversal of order is allowed. For example:
1 = 1^2 + 0^2 = (−1)^2 + 0^2 = 0^2 + 1^2 = 0^2 + (−1)^2
and therefore r_2(1) = 4 ≠ 1. This shows that the function is not multiplicative. However, r_2(n)/4 is multiplicative.
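This can be checked directly. The Python sketch below (the helper name r2 is illustrative) counts lattice points with x^2 + y^2 = n by brute force, confirms that r2 already fails f(1) = 1, and checks that r2(n)/4 is multiplicative on coprime arguments:

```python
from math import gcd, isqrt

def r2(n):
    # Number of ordered pairs (x, y) of integers (positive, negative, or zero)
    # with x*x + y*y == n.
    b = isqrt(n)
    return sum(1 for x in range(-b, b + 1)
                 for y in range(-b, b + 1)
                 if x * x + y * y == n)

# r2(1) = 4: the pairs (±1, 0) and (0, ±1). Since r2(1) != 1, r2 is not multiplicative.
assert r2(1) == 4

# r2(n)/4 is multiplicative; in integer form: 4 * r2(ab) == r2(a) * r2(b) for coprime a, b.
for a in range(1, 25):
    for b in range(1, 25):
        if gcd(a, b) == 1:
            assert 4 * r2(a * b) == r2(a) * r2(b)
```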
In the On-Line Encyclopedia of Integer Sequences, sequences of values of a multiplicative function have the keyword "mult".
See arithmetic function for some other examples of non-multiplicative functions.
Properties
A multiplicative function is completely determined by its values at the powers of prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = p^a q^b ..., then
f(n) = f(p^a) f(q^b) ...
This property of multiplicative functions significantly reduces the need for computation, as in the following examples for n = 144 = 2^4 · 3^2:
d(144) = σ_0(144) = σ_0(2^4) σ_0(3^2) = 5 · 3 = 15
σ(144) = σ_1(144) = σ_1(2^4) σ_1(3^2) = 31 · 13 = 403
Similarly, we have:
φ(144) = φ(2^4) φ(3^2) = 8 · 6 = 48
In general, if f(n) is a multiplicative function and a, b are any two positive integers, then
f(a) · f(b) = f(gcd(a,b)) · f(lcm(a,b)).
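This reduction can be made concrete. The sketch below (the helper names factorize and from_prime_powers are illustrative) builds d, σ and φ from their values at prime powers, checks the worked example n = 144, and verifies the general identity f(a) f(b) = f(gcd(a,b)) f(lcm(a,b)):

```python
from math import gcd, lcm

def factorize(n):
    # Prime factorization of n as a dict {prime: exponent}.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def from_prime_powers(f_pk):
    # Build a multiplicative function from its values f(p^k) at prime powers.
    def f(n):
        result = 1
        for p, k in factorize(n).items():
            result *= f_pk(p, k)
        return result
    return f

d     = from_prime_powers(lambda p, k: k + 1)                        # number of divisors
sigma = from_prime_powers(lambda p, k: (p**(k + 1) - 1) // (p - 1))  # sum of divisors
phi   = from_prime_powers(lambda p, k: p**k - p**(k - 1))            # Euler's totient

# The worked example n = 144 = 2^4 * 3^2:
assert (d(144), sigma(144), phi(144)) == (15, 403, 48)

# The gcd/lcm identity, checked here for sigma:
for a in range(1, 60):
    for b in range(1, 60):
        assert sigma(a) * sigma(b) == sigma(gcd(a, b)) * sigma(lcm(a, b))
```

(math.lcm requires Python 3.9 or later.)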
Every completely multiplicative function is a homomorphism of monoids and is completely determined by its restriction to the prime numbers.
Convolution
If f and g are two multiplicative functions, one defines a new multiplicative function f * g, the Dirichlet convolution of f and g, by
(f * g)(n) = ∑_{d|n} f(d) g(n/d),
where the sum extends over all positive divisors d of n.
With this operation, the set of all multiplicative functions turns into an abelian group; the identity element is ε. Convolution is commutative, associative, and distributive over addition.
Relations among the multiplicative functions discussed above include:
μ * 1 = ε (the Möbius inversion formula)
(μ Id_k) * Id_k = ε (generalized Möbius inversion)
The Dirichlet convolution can be defined for general arithmetic functions, and yields a ring structure, the Dirichlet ring.
The Dirichlet convolution of two multiplicative functions is again multiplicative. A proof of this fact is given by the following expansion for relatively prime a, b: every divisor d of ab factors uniquely as d = d_1 d_2 with d_1 | a and d_2 | b, so
(f * g)(ab) = ∑_{d|ab} f(d) g(ab/d)
 = ∑_{d_1|a} ∑_{d_2|b} f(d_1 d_2) g((a/d_1)(b/d_2))
 = ∑_{d_1|a} f(d_1) g(a/d_1) · ∑_{d_2|b} f(d_2) g(b/d_2)
 = (f * g)(a) · (f * g)(b).
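A direct implementation makes these relations checkable. The Python sketch below (helper names illustrative) verifies μ * 1 = ε and φ * 1 = Id pointwise:

```python
from math import gcd

def dirichlet(f, g):
    # Dirichlet convolution: (f * g)(n) = sum of f(d) * g(n // d) over divisors d of n.
    def h(n):
        return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)
    return h

one = lambda n: 1                    # the constant function 1(n)
Id  = lambda n: n                    # the identity function
eps = lambda n: 1 if n == 1 else 0   # the Dirichlet unit ε(n)

def mu(n):
    # Möbius function by trial division: 0 if a squared prime divides n,
    # otherwise (-1)^(number of prime factors).
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

# mu * 1 = eps (the Möbius inversion formula), checked pointwise:
mu_star_one = dirichlet(mu, one)
for n in range(1, 200):
    assert mu_star_one(n) == eps(n)

# phi * 1 = Id, with phi computed by brute force:
phi = lambda n: sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
phi_star_one = dirichlet(phi, one)
for n in range(1, 100):
    assert phi_star_one(n) == Id(n)
```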
Dirichlet series for some multiplicative functions
For the functions above one has, for example:
∑_{n≥1} μ(n)/n^s = 1/ζ(s)
∑_{n≥1} φ(n)/n^s = ζ(s − 1)/ζ(s)
∑_{n≥1} d(n)/n^s = ζ(s)^2
∑_{n≥1} σ(n)/n^s = ζ(s) ζ(s − 1)
∑_{n≥1} λ(n)/n^s = ζ(2s)/ζ(s)
More examples are shown in the article on Dirichlet series.
Multiplicative function over F_q[X]
Let A = F_q[X], the polynomial ring over the finite field with q elements. A is a principal ideal domain and therefore A is a unique factorization domain.
A complex-valued function h on A is called multiplicative if h(fg) = h(f) h(g) whenever f and g are relatively prime.
Zeta function and Dirichlet series in F_q[X]
Let h be a polynomial arithmetic function (i.e. a function on the set of monic polynomials over A). Its corresponding Dirichlet series is defined to be
D_h(s) = ∑_{f monic} h(f) |f|^(−s),
where we set |f| = q^(deg f) if f ≠ 0, and |f| = 0 otherwise.
The polynomial zeta function is then
ζ_A(s) = ∑_{f monic} |f|^(−s).
Similar to the situation in N, every Dirichlet series of a multiplicative function h has a product representation (Euler product):
D_h(s) = ∏_P ( ∑_{n≥0} h(P^n) |P|^(−sn) ),
where the product runs over all monic irreducible polynomials P. For example, the product representation of the zeta function is as for the integers:
ζ_A(s) = ∏_P (1 − |P|^(−s))^(−1).
Unlike the classical zeta function, ζ_A(s) is a simple rational function: since there are exactly q^n monic polynomials of degree n,
ζ_A(s) = ∑_{f monic} |f|^(−s) = ∑_{n≥0} q^n q^(−sn) = 1/(1 − q^(1−s)).
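Because there are exactly q^n monic polynomials of degree n over F_q, the zeta sum is a geometric series. The short Python check below (illustrative helper name) compares a truncated sum against the closed form 1/(1 − q^(1−s)):

```python
def zeta_A_partial(q, s, terms=60):
    # Partial sum of zeta_A(s) = sum over n >= 0 of (number of monic degree-n
    # polynomials) * q^(-s*n) = sum of q^n * q^(-s*n) = sum of (q^(1-s))^n.
    return sum((q ** (1 - s)) ** n for n in range(terms))

q, s = 3, 2.0
closed_form = 1 / (1 - q ** (1 - s))   # here: 1 / (1 - 1/3) = 1.5
assert abs(zeta_A_partial(q, s) - closed_form) < 1e-9
```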
In a similar way, if f and g are two polynomial arithmetic functions, one defines f * g, the Dirichlet convolution of f and g, by
(f * g)(m) = ∑_{d|m} f(d) g(m/d),
where the sum is over all monic divisors d of m, or equivalently over all pairs (a, b) of monic polynomials whose product is m. The identity D_h D_g = D_{h*g} still holds.
See also
Euler product
Bell series
Lambert series
References
See chapter 2 of
External links
Planet Math |