Hypothetical types of biochemistry
Hypothetical types of biochemistry are forms of biochemistry agreed to be scientifically viable but not proven to exist at this time. The kinds of living organisms currently known on Earth all use carbon compounds for basic structural and metabolic functions, water as a solvent, and DNA or RNA to define and control their form. If life exists on other planets or moons it may be chemically similar, though it is also possible that there are organisms with quite different chemistries, for instance involving other classes of carbon compounds, compounds of another element, or another solvent in place of water. The possibility of life-forms being based on "alternative" biochemistries is the topic of an ongoing scientific discussion, informed by what is known about extraterrestrial environments and about the chemical behaviour of various elements and compounds. It is of interest in synthetic biology and is also a common subject in science fiction. The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule, and cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan. Overview of hypothetical types of biochemistry Shadow biosphere A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms. Alternative-chirality biomolecules Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the L-form and sugars are of the D-form. Molecules using D-amino acids or L-sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life. It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel. Non-carbon-based biochemistries On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using elements other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe.
Sagan used the term "carbon chauvinism" for such an assumption. He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos. Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land a probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt. Silicon biochemistry The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical similarities to carbon and is in the same group of the periodic table. Like carbon, silicon can create molecules that are sufficiently large to carry biological information. However, silicon has several drawbacks as a carbon alternative. Carbon is ten times more cosmically abundant than silicon, and its chemistry appears naturally more complex. By 1998, astronomers had identified 84 carbon-containing molecules in the interstellar medium, but only 8 containing silicon, of which half also included carbon. Even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (silicon is roughly 925 times more abundant in Earth's crust than carbon), terrestrial life bases itself on carbon. It may eschew silicon because silicon compounds are less varied, unstable in the presence of water, or block the flow of heat. Relative to carbon, silicon has a much larger atomic radius, and forms much weaker covalent bonds to atoms — except oxygen and fluorine, with which it forms very strong bonds. Almost no multiple bonds to silicon are stable, although silicon does exhibit varied coordination number. Silanes, silicon analogues to the alkanes, react rapidly with water, and long-chain silanes spontaneously decompose. Consequently, most terrestrial silicon is "locked up" in silica, and not a wide variety of biogenic precursors. Silicones, which alternate between silicon and oxygen atoms, are much more stable than silanes, and may even be more stable than the equivalent hydrocarbons in sulfuric acid-rich extraterrestrial environments. Alternatively, the weak bonds in silicon compounds may help maintain a rapid pace of life at cryogenic temperatures. Polysilanols, the silicon homologues to sugars, are among the few compounds soluble in liquid nitrogen. All known silicon macromolecules are artificial polymers, and so "monotonous compared with the combinatorial universe of organic macromolecules". Even so, some Earth life uses biogenic silica: diatoms' silicate skeletons. A. G. Cairns-Smith hypothesized that silicate minerals in water played a crucial role in abiogenesis, in that biogenic carbon compounds formed around their crystal structures. Although not observed in nature, carbon–silicon bonds have been added to biochemistry under directed evolution (artificial selection): a cytochrome c protein from Rhodothermus marinus has been engineered to catalyze new carbon–silicon bonds between hydrosilanes and diazo compounds. 
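The bond-energy argument above can be made concrete with rough numbers. The short sketch below compares approximate average single-bond enthalpies of the kind tabulated in general-chemistry references (the exact figures vary by source and by molecule); the dictionary and helper function are purely illustrative and are not drawn from the article itself.

```python
# Approximate average bond enthalpies in kJ/mol (illustrative textbook values;
# they vary by source and by the specific molecule considered).
BOND_ENTHALPY_KJ_PER_MOL = {
    ("C", "C"): 347,    # carbon-carbon single bond
    ("C", "H"): 413,
    ("C", "O"): 358,
    ("Si", "Si"): 226,  # silicon-silicon single bond, notably weaker
    ("Si", "H"): 318,
    ("Si", "O"): 452,   # very strong, which is why silicon ends up locked in silica
}

def chain_vs_oxide(element):
    """Compare an element's bond to itself with its bond to oxygen."""
    ee = BOND_ENTHALPY_KJ_PER_MOL[(element, element)]
    eo = BOND_ENTHALPY_KJ_PER_MOL[(element, "O")]
    return f"{element}-{element}: {ee} kJ/mol vs {element}-O: {eo} kJ/mol"

print(chain_vs_oxide("C"))   # comparable strengths: carbon chains persist in water
print(chain_vs_oxide("Si"))  # Si-O is roughly twice Si-Si: silanes hydrolyse toward silica
```

Under these rough figures, carbon gains little by trading a carbon-carbon bond for a carbon-oxygen bond, whereas silicon gains far more stability by bonding to oxygen than to another silicon atom, which is consistent with the observation that most terrestrial silicon is locked up in silica.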
Other exotic element-based biochemistries Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon. Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size. Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.) Arsenic as an alternative to phosphorus Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy. It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function. The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case". 
Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms. Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate. Non-water solvents In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid. Water as a solvent limits the forms biochemistry can take. For example, Steven Benner proposes the polyelectrolyte theory of the gene, which claims that for a genetic biopolymer such as DNA to function in water, it requires repeated ionic charges. If water is not required for life, these limits on genetic biopolymers are removed. Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist". He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water. Some of the properties of water that are important for life processes include: A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life; many other solvents have dramatically fewer possible reactions, which severely limits evolution. Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen. Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life. A large temperature range over which it is liquid. High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life. A high heat capacity (leading to higher environmental temperature stability). Water is a room-temperature liquid leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition state populations which are needed for life based on chemical reactions. This leads to chemical reaction rates which may be so slow as to preclude the development of any life based on chemical reactions. Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life. A large heat of vaporization leading to stable lakes and oceans.
The ability to dissolve a wide variety of compounds. The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life. Water as a compound is cosmically abundant, although much of it is in the form of vapor or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces. Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased. There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with the pressure, the question tends not to be whether the prospective solvent remains liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere as dense as that at the surface of Venus, roughly 90 times Earth's atmospheric pressure, it can indeed exist in liquid form over a wide temperature range. Ammonia The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin. Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH). Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as an Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead. However, ammonia has some problems as a basis for life.
The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism. A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists between the melting point and boiling point of water, at a pressure designated as normal pressure, between 0 and 100 °C. When also held to normal pressure, ammonia's melting and boiling points are −78 °C and −33 °C respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful. A set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at about −77 °C and boils at about 98 °C. Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan. Methane and other hydrocarbons Methane (CH4) is a simple hydrocarbon: that is, a compound of two of the most common elements in the cosmos, hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft. There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry. Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane.
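McKay's proposal amounts to net hydrogenation reactions of the form C2H2 + 3 H2 → 2 CH4 and C2H6 + H2 → 2 CH4. The minimal sketch below only verifies that these candidate reactions are atom-balanced; it is an illustration of the bookkeeping, not a model of Titan chemistry, and the helper function is invented for this example.

```python
from collections import Counter

def total_atoms(species):
    """Sum atom counts over (molecule, coefficient) pairs, where each molecule
    is a dict of element counts. Purely illustrative bookkeeping."""
    total = Counter()
    for molecule, coefficient in species:
        for element, n in molecule.items():
            total[element] += coefficient * n
    return total

CH4 = {"C": 1, "H": 4}    # methane
H2 = {"H": 2}             # molecular hydrogen
C2H2 = {"C": 2, "H": 2}   # acetylene
C2H6 = {"C": 2, "H": 6}   # ethane

# Acetylene reduction: C2H2 + 3 H2 -> 2 CH4
assert total_atoms([(C2H2, 1), (H2, 3)]) == total_atoms([(CH4, 2)])
# Ethane reduction: C2H6 + H2 -> 2 CH4
assert total_atoms([(C2H6, 1), (H2, 1)]) == total_atoms([(CH4, 2)])
print("Both proposed methane-producing reactions are atom-balanced.")
```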
Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University: a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for downward diffusion at a rate of roughly 10²⁵ molecules per second and the disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings are to be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery. Azotosome A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions, was computer-modeled in an article published in February 2015. Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter/submillimeter Array (ALMA), completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. Later studies questioned whether acrylonitrile would be able to self-assemble into azotosomes. Hydrogen fluoride Hydrogen fluoride (HF), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. At atmospheric pressure, its melting point is −83.6 °C, and its boiling point is 19.5 °C; the difference between the two is a little more than 100 K. HF also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. It has been considered as a possible solvent for life by scientists such as Peter Sneath and Carl Sagan. HF is dangerous to the systems of molecules that Earth-life is made of, but certain other organic compounds, such as paraffin waxes, are stable with it. Like water and ammonia, liquid hydrogen fluoride supports an acid–base chemistry. Using a solvent system definition of acidity and basicity, nitric acid functions as a base when it is added to liquid HF. However, hydrogen fluoride is cosmically rare, unlike water, ammonia, and methane. Hydrogen sulfide Hydrogen sulfide is the closest chemical analog to water, but is less polar and is a weaker inorganic solvent. Hydrogen sulfide is quite plentiful on Jupiter's moon Io and may be in liquid form a short distance below the surface; astrobiologist Dirk Schulze-Makuch has suggested it as a possible solvent for life there. On a planet with hydrogen sulfide oceans, the hydrogen sulfide could come from volcanoes, in which case it could be mixed in with a bit of hydrogen fluoride, which could help dissolve minerals. Hydrogen sulfide-based life might use a mixture of carbon monoxide and carbon dioxide as its carbon source. Such organisms might produce and live on sulfur monoxide, which is analogous to oxygen (O2).
Hydrogen sulfide, like hydrogen cyanide and ammonia, suffers from the small temperature range where it is liquid, though that range, like those of hydrogen cyanide and ammonia, increases with increasing pressure. Silicon dioxide and silicates Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range where it is liquid. However, its melting point is over 1,600 °C, so it would be impossible to make organic compounds at that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium. Other solvents or cosolvents Other solvents sometimes proposed: Supercritical fluids: supercritical carbon dioxide and supercritical hydrogen. Simple hydrogen compounds: hydrogen chloride. More complex compounds: sulfuric acid, formamide, methanol. Very-low-temperature fluids: liquid nitrogen and hydrogen. High-temperature liquids: sodium chloride. Sulfuric acid in liquid form is strongly polar. It remains liquid at higher temperatures than water, its liquid range being 10 °C to 337 °C at a pressure of 1 atm, although above 300 °C it slowly decomposes. Sulfuric acid is known to be abundant in the clouds of Venus, in the form of aerosol droplets. In a biochemistry that used sulfuric acid as a solvent, the alkene group (C=C), with two carbon atoms joined by a double bond, could function analogously to the carbonyl group (C=O) in water-based biochemistry. A proposal has been made that life on Mars may exist and be using a mixture of water and hydrogen peroxide as its solvent. A 61.2% (by mass) mix of water and hydrogen peroxide has a freezing point of −56.5 °C and tends to super-cool rather than crystallize. It is also hygroscopic, an advantage in a water-scarce environment. Supercritical carbon dioxide has been proposed as a candidate for alternative biochemistry due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes and because "super-Earth"- or "super-Venus"-type planets with dense high-pressure atmospheres may be common. Other speculations Non-green photosynthesizers Physicists have noted that, although photosynthesis on Earth generally involves green plants, a variety of other-colored plants could also support photosynthesis, essential for most life on Earth, and that other colors might be preferred in places that receive a different mix of stellar radiation than Earth. These studies indicate that blue plants would be unlikely; however, yellow or red plants may be relatively common. Variable environments Many Earth plants and animals undergo major biochemical changes during their life cycles as a response to changing environmental conditions, for example, by having a spore or hibernation state that can be sustained for years or even millennia between more active life stages. Thus, it would be biochemically possible to sustain life in environments that are only periodically consistent with life as we know it. For example, frogs in cold climates can survive for extended periods of time with most of their body water in a frozen state, whereas desert frogs in Australia can become inactive and dehydrate in dry periods, losing up to 75% of their fluids, yet return to life by rapidly rehydrating in wet periods. Either type of frog would appear biochemically inactive (i.e.
not living) during dormant periods to anyone lacking a sensitive means of detecting low levels of metabolism. Alanine world and hypothetical alternatives The genetic code may have evolved during the transition from the RNA world to a protein world. The Alanine World Hypothesis postulates that the evolution of the genetic code (the so-called GC phase) started with only four basic amino acids: alanine, glycine, proline and ornithine (now arginine). The evolution of the genetic code ended with 20 proteinogenic amino acids. From a chemical point of view, most of them are alanine derivatives particularly suitable for the construction of α-helices and β-sheets, the basic secondary structural elements of modern proteins. Direct evidence of this is an experimental procedure in molecular biology known as alanine scanning. A hypothetical "Proline World" would create a possible alternative life with the genetic code based on the proline chemical scaffold as the protein backbone. Similarly, a "Glycine World" and "Ornithine World" are also conceivable, but nature has chosen none of them. Evolution of life with Proline, Glycine, or Ornithine as the basic structure for protein-like polymers (foldamers) would lead to parallel biological worlds. They would have morphologically radically different body plans and genetics from the living organisms of the known biosphere. Nonplanetary life Dusty plasma-based In 2007, Vadim N. Tsytovich and colleagues proposed that lifelike behaviors could be exhibited by dust particles suspended in a plasma, under conditions that might exist in space. Computer models showed that, when the dust became charged, the particles could self-organize into microscopic helical structures, and the authors offer "a rough sketch of a possible model of...helical grain structure reproduction". Cosmic necklace-based In 2020, Luis A. Anchordoqui and Eugene M. Chudnovsky of the City University of New York hypothesized that cosmic necklace-based life composed of magnetic monopoles connected by cosmic strings could evolve inside stars. This would be achieved by a stretching of cosmic strings due to the star's intense gravity, thus allowing such structures to take on more complex forms and potentially form structures similar to the RNA and DNA structures found within carbon-based life. As such, it is theoretically possible that such beings could eventually become intelligent and construct a civilization using the power generated by the star's nuclear fusion. Because such use would consume part of the star's energy output, the luminosity would also fall. For this reason, it is thought that such life might exist inside stars observed to be cooling faster or appearing dimmer than current cosmological models predict. Life on a neutron star Frank Drake suggested in 1973 that intelligent life could inhabit neutron stars. Physical models in 1973 implied that Drake's creatures would be microscopic. Scientists who have published on this topic Scientists who have considered possible alternatives to carbon-water biochemistry include: J. B. S. Haldane (1892–1964), a geneticist noted for his work on abiogenesis. V. Axel Firsoff (1910–1981), British astronomer. Isaac Asimov (1920–1992), biochemist and science fiction writer. Fred Hoyle (1915–2001), astronomer and science fiction writer. Norman Horowitz (1915–2005), Caltech geneticist who devised the first experiments carried out to detect life on Mars. George C. Pimentel (1922–1989), American chemist, University of California, Berkeley.
Peter Sneath (1923–2011), microbiologist, author of the book Planets and Life. Gerald Feinberg (1933–1992), physicist and Robert Shapiro (1935–2011), chemist, co-authors of the book Life Beyond Earth. Carl Sagan (1934–1996), astronomer, science popularizer, and SETI proponent. Jonathan Lunine (born 1959), American planetary scientist and physicist. Robert Freitas (born 1952), specialist in nano-technology and nano-medicine. John Baross (born 1940), oceanographer and astrobiologist, who chaired a committee of scientists under the United States National Research Council that published a report on life's limiting conditions in 2007. See also Abiogenesis Astrobiology Carbon chauvinism Carbon-based life Earliest known life forms Extraterrestrial life Hachimoji DNA Iron–sulfur world hypothesis Life origination beyond planets Nexus for Exoplanet System Science Non-cellular life Non-proteinogenic amino acids Nucleic acid analogues Planetary habitability Shadow biosphere
Gluconeogenesis
Gluconeogenesis (GNG) is a metabolic pathway that results in the biosynthesis of glucose from certain non-carbohydrate carbon substrates. It is a ubiquitous process, present in plants, animals, fungi, bacteria, and other microorganisms. In vertebrates, gluconeogenesis occurs mainly in the liver and, to a lesser extent, in the cortex of the kidneys. It is one of two primary mechanisms – the other being degradation of glycogen (glycogenolysis) – used by humans and many other animals to maintain blood sugar levels, avoiding low levels (hypoglycemia). In ruminants, because dietary carbohydrates tend to be metabolized by rumen organisms, gluconeogenesis occurs regardless of fasting, low-carbohydrate diets, exercise, etc. In many other animals, the process occurs during periods of fasting, starvation, low-carbohydrate diets, or intense exercise. In humans, substrates for gluconeogenesis may come from any non-carbohydrate sources that can be converted to pyruvate or intermediates of glycolysis. For the breakdown of proteins, these substrates include glucogenic amino acids (although not ketogenic amino acids); from breakdown of lipids (such as triglycerides), they include glycerol, odd-chain fatty acids (although not even-chain fatty acids, see below); and from other parts of metabolism, including lactate from the Cori cycle. Under conditions of prolonged fasting, acetone derived from ketone bodies can also serve as a substrate, providing a pathway from fatty acids to glucose. Although most gluconeogenesis occurs in the liver, the relative contribution of gluconeogenesis by the kidney is increased in diabetes and prolonged fasting. The gluconeogenesis pathway is highly endergonic until it is coupled to the hydrolysis of ATP or GTP, effectively making the process exergonic. For example, the pathway leading from pyruvate to glucose-6-phosphate requires 4 molecules of ATP and 2 molecules of GTP to proceed spontaneously. These ATPs are supplied from fatty acid catabolism via beta oxidation. Precursors In humans, the main gluconeogenic precursors are lactate, glycerol (which is a part of the triglyceride molecule), alanine and glutamine. Altogether, they account for over 90% of the overall gluconeogenesis. Other glucogenic amino acids and all citric acid cycle intermediates (through conversion to oxaloacetate) can also function as substrates for gluconeogenesis. Generally, human consumption of gluconeogenic substrates in food does not result in increased gluconeogenesis. In ruminants, propionate is the principal gluconeogenic substrate. In nonruminants, including human beings, propionate arises from the β-oxidation of odd-chain and branched-chain fatty acids, and is a (relatively minor) substrate for gluconeogenesis. Lactate is transported back to the liver, where it is converted into pyruvate by the enzyme lactate dehydrogenase as part of the Cori cycle. Pyruvate, the first designated substrate of the gluconeogenic pathway, can then be used to generate glucose. Transamination or deamination of amino acids facilitates the entry of their carbon skeletons into the cycle directly (as pyruvate or oxaloacetate), or indirectly via the citric acid cycle. The contribution of Cori cycle lactate to overall glucose production increases with fasting duration. Specifically, after 12, 20, and 40 hours of fasting by human volunteers, the contribution of Cori cycle lactate to gluconeogenesis was 41%, 71%, and 92%, respectively.
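The energy requirement quoted above (4 ATP and 2 GTP per glucose formed from pyruvate) can be checked by tallying the nucleotide-consuming steps. The sketch below is a minimal illustration using standard textbook stoichiometry; the variable names and layout are invented for this example.

```python
# Minimal sketch tallying the textbook nucleotide cost of making one glucose
# from two pyruvate molecules, per enzymatic step (both three-carbon branches
# are counted). Step names are standard biochemistry, not drawn from a cited source.
STEP_COSTS = [
    ("pyruvate carboxylase: pyruvate -> oxaloacetate", {"ATP": 2}),
    ("PEP carboxykinase: oxaloacetate -> phosphoenolpyruvate", {"GTP": 2}),
    ("phosphoglycerate kinase (reversed): 3-phosphoglycerate -> 1,3-bisphosphoglycerate", {"ATP": 2}),
]

totals = {"ATP": 0, "GTP": 0}
for step, cost in STEP_COSTS:
    for nucleotide, n in cost.items():
        totals[nucleotide] += n

print(totals)  # {'ATP': 4, 'GTP': 2}, matching the requirement quoted above
```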
Whether even-chain fatty acids can be converted into glucose in animals has been a longstanding question in biochemistry. Odd-chain fatty acids can be oxidized to yield acetyl-CoA and propionyl-CoA, the latter serving as a precursor to succinyl-CoA, which can be converted to oxaloacetate and enter into gluconeogenesis. In contrast, even-chain fatty acids are oxidized to yield only acetyl-CoA, whose entry into gluconeogenesis requires the presence of a glyoxylate cycle (also known as glyoxylate shunt) to produce four-carbon dicarboxylic acid precursors. The glyoxylate shunt comprises two enzymes, malate synthase and isocitrate lyase, and is present in fungi, plants, and bacteria. Despite some reports of glyoxylate shunt enzymatic activities detected in animal tissues, genes encoding both enzymatic functions have only been found in nematodes, in which they exist as a single bi-functional enzyme. Genes coding for malate synthase alone (but not isocitrate lyase) have been identified in other animals including arthropods, echinoderms, and even some vertebrates. Mammals found to possess the malate synthase gene include monotremes (platypus) and marsupials (opossum), but not placental mammals. The existence of the glyoxylate cycle in humans has not been established, and it is widely held that fatty acids cannot be converted to glucose in humans directly. Carbon-14 has been shown to end up in glucose when it is supplied in fatty acids, but this can be expected from the incorporation of labelled atoms derived from acetyl-CoA into citric acid cycle intermediates which are interchangeable with those derived from other physiological sources, such as glucogenic amino acids. In the absence of other glucogenic sources, the 2-carbon acetyl-CoA derived from the oxidation of fatty acids cannot produce a net yield of glucose via the citric acid cycle, since an equivalent two carbon atoms are released as carbon dioxide during the cycle. During ketosis, however, acetyl-CoA from fatty acids yields ketone bodies, including acetone, and up to ~60% of acetone may be oxidized in the liver to the pyruvate precursors acetol and methylglyoxal. Thus ketone bodies derived from fatty acids could account for up to 11% of gluconeogenesis during starvation. Catabolism of fatty acids also produces energy in the form of ATP that is necessary for the gluconeogenesis pathway. Location In mammals, gluconeogenesis has been believed to be restricted to the liver, the kidney, the intestine, and muscle, but recent evidence indicates gluconeogenesis occurring in astrocytes of the brain. These organs use somewhat different gluconeogenic precursors. The liver preferentially uses lactate, glycerol, and glucogenic amino acids (especially alanine) while the kidney preferentially uses lactate, glutamine and glycerol. Lactate from the Cori cycle is quantitatively the largest source of substrate for gluconeogenesis, especially for the kidney. The liver uses both glycogenolysis and gluconeogenesis to produce glucose, whereas the kidney only uses gluconeogenesis. After a meal, the liver shifts to glycogen synthesis, whereas the kidney increases gluconeogenesis. The intestine uses mostly glutamine and glycerol. Propionate is the principal substrate for gluconeogenesis in the ruminant liver, and the ruminant liver may make increased use of gluconeogenic amino acids (e.g., alanine) when glucose demand is increased. 
The capacity of liver cells to use lactate for gluconeogenesis declines from the preruminant stage to the ruminant stage in calves and lambs. In sheep kidney tissue, very high rates of gluconeogenesis from propionate have been observed. In all species, the formation of oxaloacetate from pyruvate and TCA cycle intermediates is restricted to the mitochondrion, and the enzymes that convert Phosphoenolpyruvic acid (PEP) to glucose-6-phosphate are found in the cytosol. The location of the enzyme that links these two parts of gluconeogenesis by converting oxaloacetate to PEP – PEP carboxykinase (PEPCK) – is variable by species: it can be found entirely within the mitochondria, entirely within the cytosol, or dispersed evenly between the two, as it is in humans. Transport of PEP across the mitochondrial membrane is accomplished by dedicated transport proteins; however no such proteins exist for oxaloacetate. Therefore, in species that lack intra-mitochondrial PEPCK, oxaloacetate must be converted into malate or aspartate, exported from the mitochondrion, and converted back into oxaloacetate in order to allow gluconeogenesis to continue. Pathway Gluconeogenesis is a pathway consisting of a series of eleven enzyme-catalyzed reactions. The pathway will begin in either the liver or kidney, in the mitochondria or cytoplasm of those cells, this being dependent on the substrate being used. Many of the reactions are the reverse of steps found in glycolysis. Gluconeogenesis begins in the mitochondria with the formation of oxaloacetate by the carboxylation of pyruvate. This reaction also requires one molecule of ATP, and is catalyzed by pyruvate carboxylase. This enzyme is stimulated by high levels of acetyl-CoA (produced in β-oxidation in the liver) and inhibited by high levels of ADP and glucose. Oxaloacetate is reduced to malate using NADH, a step required for its transportation out of the mitochondria. Malate is oxidized to oxaloacetate using NAD+ in the cytosol, where the remaining steps of gluconeogenesis take place. Oxaloacetate is decarboxylated and then phosphorylated to form phosphoenolpyruvate using the enzyme PEPCK. A molecule of GTP is hydrolyzed to GDP during this reaction. The next steps in the reaction are the same as reversed glycolysis. However, fructose 1,6-bisphosphatase converts fructose 1,6-bisphosphate to fructose 6-phosphate, using one water molecule and releasing one phosphate (in glycolysis, phosphofructokinase 1 converts F6P and ATP to F1,6BP and ADP). This is also the rate-limiting step of gluconeogenesis. Glucose-6-phosphate is formed from fructose 6-phosphate by phosphoglucoisomerase (the reverse of step 2 in glycolysis). Glucose-6-phosphate can be used in other metabolic pathways or dephosphorylated to free glucose. Whereas free glucose can easily diffuse in and out of the cell, the phosphorylated form (glucose-6-phosphate) is locked in the cell, a mechanism by which intracellular glucose levels are controlled by cells. The final step in gluconeogenesis, the formation of glucose, occurs in the lumen of the endoplasmic reticulum, where glucose-6-phosphate is hydrolyzed by glucose-6-phosphatase to produce glucose and release an inorganic phosphate. Like two steps prior, this step is not a simple reversal of glycolysis, in which hexokinase catalyzes the conversion of glucose and ATP into G6P and ADP. Glucose is shuttled into the cytoplasm by glucose transporters located in the endoplasmic reticulum's membrane. 
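For reference, the sketch below restates the steps just described as a simple table of the reactions that bypass the irreversible steps of glycolysis, with the locations given in the text; it is a summary aid rather than a simulation of the pathway, and the tuple layout is illustrative only.

```python
# Condensed restatement of the gluconeogenic route described above. The
# remaining steps (not listed) reuse the reversible glycolytic enzymes.
BYPASS_STEPS = [
    ("pyruvate", "oxaloacetate", "pyruvate carboxylase",
     "mitochondrion; consumes ATP, stimulated by acetyl-CoA"),
    ("oxaloacetate", "malate", "malate dehydrogenase",
     "mitochondrion; NADH-dependent step enabling export to the cytosol"),
    ("malate", "oxaloacetate", "malate dehydrogenase",
     "cytosol; regenerates NADH for later steps"),
    ("oxaloacetate", "phosphoenolpyruvate", "PEP carboxykinase (PEPCK)",
     "cytosol and/or mitochondrion depending on species; consumes GTP"),
    ("fructose 1,6-bisphosphate", "fructose 6-phosphate",
     "fructose 1,6-bisphosphatase", "cytosol; rate-limiting step"),
    ("glucose-6-phosphate", "glucose", "glucose-6-phosphatase",
     "endoplasmic reticulum lumen; releases inorganic phosphate"),
]

for substrate, product, enzyme, note in BYPASS_STEPS:
    print(f"{substrate} -> {product} [{enzyme}] ({note})")
```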
Regulation While most steps in gluconeogenesis are the reverse of those found in glycolysis, three regulated and strongly endergonic reactions are replaced with more kinetically favorable reactions. Hexokinase/glucokinase, phosphofructokinase, and pyruvate kinase enzymes of glycolysis are replaced with glucose-6-phosphatase, fructose-1,6-bisphosphatase, and PEP carboxykinase/pyruvate carboxylase. These enzymes are typically regulated by similar molecules, but with opposite results. For example, acetyl CoA and citrate activate gluconeogenesis enzymes (pyruvate carboxylase and fructose-1,6-bisphosphatase, respectively), while at the same time inhibiting the glycolytic enzyme pyruvate kinase. This system of reciprocal control allows glycolysis and gluconeogenesis to inhibit each other and prevents a futile cycle of synthesizing glucose only to break it down again. Pyruvate kinase can also be bypassed by 86 pathways not related to gluconeogenesis, for the purpose of forming pyruvate and subsequently lactate; some of these pathways use carbon atoms originating from glucose. The majority of the enzymes responsible for gluconeogenesis are found in the cytosol; the exceptions are mitochondrial pyruvate carboxylase and, in animals, phosphoenolpyruvate carboxykinase. The latter exists as isozymes located in both the mitochondrion and the cytosol. The rate of gluconeogenesis is ultimately controlled by the action of a key enzyme, fructose-1,6-bisphosphatase, which is also regulated through signal transduction by cAMP and its phosphorylation. Global control of gluconeogenesis is mediated by glucagon (released when blood glucose is low); it triggers phosphorylation of enzymes and regulatory proteins by Protein Kinase A (a cyclic AMP-regulated kinase) resulting in inhibition of glycolysis and stimulation of gluconeogenesis. Insulin counteracts glucagon by inhibiting gluconeogenesis. Type 2 diabetes is marked by excess glucagon and insulin resistance. Insulin can no longer inhibit the gene expression of enzymes such as PEPCK, which leads to hyperglycemia. The anti-diabetic drug metformin reduces blood glucose primarily through inhibition of gluconeogenesis, overcoming the failure of insulin to inhibit gluconeogenesis due to insulin resistance. Studies have shown that the absence of hepatic glucose production has no major effect on the control of fasting plasma glucose concentration. Compensatory induction of gluconeogenesis occurs in the kidneys and intestine, driven by glucagon, glucocorticoids, and acidosis. Insulin resistance In the liver, the FOX protein FOXO6 normally promotes gluconeogenesis in the fasted state, but insulin blocks FOXO6 upon feeding. In a condition of insulin resistance, insulin fails to block FOXO6, resulting in continued gluconeogenesis even upon feeding and high blood glucose (hyperglycemia). Insulin resistance is a common feature of metabolic syndrome and type 2 diabetes. For this reason, gluconeogenesis is a target of therapy for type 2 diabetes, such as the antidiabetic drug metformin, which inhibits gluconeogenic glucose formation and stimulates glucose uptake by cells. Origins Gluconeogenesis is considered one of the most ancient anabolic pathways and is likely to have been exhibited in the last universal common ancestor. Rafael F.
Say and Georg Fuchs stated in 2010 that "all archaeal groups as well as the deeply branching bacterial lineages contain a bifunctional fructose 1,6-bisphosphate (FBP) aldolase/phosphatase with both FBP aldolase and FBP phosphatase activity. This enzyme is missing in most other Bacteria and in Eukaryota, and is heat-stabile even in mesophilic marine Crenarchaeota". It is proposed that fructose 1,6-bisphosphate aldolase/phosphatase was an ancestral gluconeogenic enzyme and had preceded glycolysis. But the chemical mechanisms of gluconeogenesis and glycolysis, whether anabolic or catabolic, are similar, suggesting that they both originated at the same time. Fructose 1,6-bisphosphate has been shown to be synthesized nonenzymatically and continuously within a freezing solution. The synthesis is accelerated in the presence of amino acids such as glycine and lysine, implying that the first anabolic enzymes were amino acids. The prebiotic reactions in gluconeogenesis can also proceed nonenzymatically during dehydration-desiccation cycles. Such chemistry could have occurred in hydrothermal environments, including temperature gradients and cycling of freezing and thawing. Mineral surfaces might have played a role in the phosphorylation of metabolic intermediates from gluconeogenesis and have been shown to produce tetrose, hexose phosphates, and pentose from formaldehyde, glyceraldehyde, and glycolaldehyde. See also Bioenergetics
Climate change mitigation
Climate change mitigation (or decarbonisation) is action to limit the greenhouse gases in the atmosphere that cause climate change. Climate change mitigation actions include conserving energy and replacing fossil fuels with clean energy sources. Secondary mitigation strategies include changes to land use and removing carbon dioxide (CO2) from the atmosphere. Current climate change mitigation policies are insufficient as they would still result in global warming of about 2.7 °C by 2100, significantly above the 2015 Paris Agreement's goal of limiting global warming to below 2 °C. Solar energy and wind power can replace fossil fuels at the lowest cost compared to other renewable energy options. The availability of sunshine and wind is variable and can require electrical grid upgrades, such as using long-distance electricity transmission to group a range of power sources. Energy storage can also be used to even out power output, and demand management can limit power use when power generation is low. Cleanly generated electricity can usually replace fossil fuels for powering transportation, heating buildings, and running industrial processes. Certain processes are more difficult to decarbonise, such as air travel and cement production. Carbon capture and storage (CCS) can be an option to reduce net emissions in these circumstances, although fitting fossil fuel power plants with CCS technology is currently a high-cost climate change mitigation strategy. Human land use changes such as agriculture and deforestation cause about a quarter of climate change. These changes impact how much CO2 is absorbed by plant matter and how much organic matter decays or burns to release CO2. These changes are part of the fast carbon cycle, whereas fossil fuels release CO2 that was buried underground as part of the slow carbon cycle. Methane is a short-lived greenhouse gas that is produced by decaying organic matter and livestock, as well as fossil fuel extraction. Land use changes can also impact precipitation patterns and the reflectivity of the surface of the Earth. It is possible to cut emissions from agriculture by reducing food waste, switching to a more plant-based diet (also referred to as a low-carbon diet), and by improving farming processes. Various policies can encourage climate change mitigation. Carbon pricing systems have been set up that either tax emissions or cap total emissions and trade emission credits. Fossil fuel subsidies can be eliminated in favor of clean energy subsidies, and incentives offered for installing energy efficiency measures or switching to electric power sources. Another issue is overcoming environmental objections when constructing new clean energy sources and making grid modifications. Definitions and scope Climate change mitigation aims to sustain ecosystems to maintain human civilisation. This requires drastic cuts in greenhouse gas emissions. The Intergovernmental Panel on Climate Change (IPCC) defines mitigation (of climate change) as "a human intervention to reduce emissions or enhance the sinks of greenhouse gases". It is possible to approach various mitigation measures in parallel. This is because there is no single pathway to limit global warming to 1.5 or 2 °C.
There are four types of measures: sustainable energy and sustainable transport; energy conservation, including efficient energy use; sustainable agriculture and green industrial policy; and enhancing carbon sinks and carbon dioxide removal (CDR), including carbon sequestration. The IPCC defined carbon dioxide removal as "Anthropogenic activities removing carbon dioxide from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical sinks and direct air carbon dioxide capture and storage (DACCS), but excludes natural uptake not directly caused by human activities." Relationship with solar radiation modification (SRM) While solar radiation modification (SRM) could reduce surface temperatures, it temporarily masks climate change rather than addressing the root cause, which is greenhouse gases. SRM would work by altering how much solar radiation the Earth absorbs. Examples include reducing the amount of sunlight reaching the surface, reducing the optical thickness and lifetime of clouds, and changing the ability of the surface to reflect radiation. The IPCC describes SRM as a climate risk reduction strategy or supplementary option rather than a climate mitigation option. The terminology in this area is still evolving. Experts sometimes use the term geoengineering or climate engineering in the scientific literature for both CDR and SRM, if the techniques are used at a global scale. IPCC reports no longer use the terms geoengineering or climate engineering. Emission trends and pledges Greenhouse gas emissions from human activities strengthen the greenhouse effect. This contributes to climate change. Most is carbon dioxide from burning fossil fuels: coal, oil, and natural gas. Human-caused emissions have increased atmospheric carbon dioxide by about 50% over pre-industrial levels. Emissions in the 2010s averaged a record 56 billion tons (Gt) a year. In 2016, energy for electricity, heat and transport was responsible for 73.2% of GHG emissions. Direct industrial processes accounted for 5.2%, waste for 3.2% and agriculture, forestry and land use for 18.4%. Electricity generation and transport are major emitters. The largest single source is coal-fired power stations with 20% of greenhouse gas emissions. Deforestation and other changes in land use also emit carbon dioxide and methane. The largest sources of anthropogenic methane emissions are agriculture, and gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Agricultural soils emit nitrous oxide, partly due to fertilizers. There is now a political solution to the problem of fluorinated gases from refrigerants. This is because many countries have ratified the Kigali Amendment. Carbon dioxide is the dominant emitted greenhouse gas. Methane emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-Gases) play a minor role. Livestock and manure produce 5.8% of all greenhouse gas emissions. But this depends on the time frame used to calculate the global warming potential of the respective gas. Greenhouse gas (GHG) emissions are measured in CO2 equivalents. Scientists determine their CO2 equivalents from their global warming potential (GWP). This depends on their lifetime in the atmosphere.
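As an illustration of this conversion, the sketch below turns a hypothetical basket of emissions into CO2 equivalents using 100-year global warming potentials; the GWP factors shown are illustrative round numbers of the kind published by the IPCC, and the exact values depend on the assessment report and time horizon chosen.

```python
# Minimal sketch: convert a basket of greenhouse gas emissions into CO2
# equivalents using 100-year global warming potentials (GWP100). The GWP
# values are illustrative; exact factors vary by IPCC assessment report.
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265}

def co2_equivalent(emissions_tonnes):
    """emissions_tonnes: dict mapping gas name to tonnes emitted."""
    return sum(GWP100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical example: 100 t CO2, 5 t CH4 and 1 t N2O
example = {"CO2": 100, "CH4": 5, "N2O": 1}
print(co2_equivalent(example), "t CO2e")  # 100 + 140 + 265 = 505 t CO2e
```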
There are widely used greenhouse gas accounting methods that convert volumes of methane, nitrous oxide and other greenhouse gases to carbon dioxide equivalents. Estimates largely depend on the ability of oceans and land sinks to absorb these gases. Short-lived climate pollutants (SLCPs) persist in the atmosphere for a period ranging from days to 15 years. Carbon dioxide can remain in the atmosphere for millennia. Short-lived climate pollutants include methane, hydrofluorocarbons (HFCs), tropospheric ozone and black carbon. Scientists increasingly use satellites to locate and measure greenhouse gas emissions and deforestation. Earlier, scientists largely relied on calculated estimates of greenhouse gas emissions and on governments' self-reported data. Needed emissions cuts The annual "Emissions Gap Report" by UNEP stated in 2022 that it was necessary to almost halve emissions. "To get on track for limiting global warming to 1.5°C, global annual GHG emissions must be reduced by 45 per cent compared with emissions projections under policies currently in place in just eight years, and they must continue to decline rapidly after 2030, to avoid exhausting the limited remaining atmospheric carbon budget." The report commented that the world should focus on broad-based economy-wide transformations and not incremental change. In 2022, the Intergovernmental Panel on Climate Change (IPCC) released its Sixth Assessment Report on climate change. It warned that greenhouse gas emissions must peak before 2025 at the latest and decline 43% by 2030 to have a good chance of limiting global warming to 1.5 °C (2.7 °F). Or in the words of Secretary-General of the United Nations António Guterres: "Main emitters must drastically cut emissions starting this year". Pledges Climate Action Tracker described the situation on 9 November 2021 as follows. The global temperature will rise by 2.7 °C by the end of the century with current policies and by 2.9 °C with nationally adopted policies. The temperature will rise by 2.4 °C if countries only implement the pledges for 2030. The rise would be 2.1 °C with the achievement of the long-term targets too. Full achievement of all announced targets would mean the rise in global temperature will peak at 1.9 °C and go down to 1.8 °C by the year 2100. Experts gather information about climate pledges in the Global Climate Action Portal - Nazca. The scientific community is checking their fulfilment. There has not been a definitive or detailed evaluation of most goals set for 2020. But it appears the world failed to meet most or all international goals set for that year. One update came during the 2021 United Nations Climate Change Conference in Glasgow. The group of researchers running the Climate Action Tracker looked at countries responsible for 85% of greenhouse gas emissions. It found that only four countries or political entities—the EU, UK, Chile and Costa Rica—have published a detailed official policy plan that describes the steps to realise 2030 mitigation targets. These four polities are responsible for 6% of global greenhouse gas emissions. In 2021 the US and EU launched the Global Methane Pledge to cut methane emissions by 30% by 2030. The UK, Argentina, Indonesia, Italy and Mexico joined the initiative. Ghana and Iraq signaled interest in joining. A White House summary of the meeting noted those countries represent six of the top 15 methane emitters globally. Israel also joined the initiative. Low-carbon energy The energy system includes the delivery and use of energy.
It is the main emitter of carbon dioxide. Rapid and deep reductions in carbon dioxide and other greenhouse gas emissions from the energy sector are necessary to limit global warming to well below 2 °C. IPCC recommendations include reducing fossil fuel consumption, increasing production from low- and zero-carbon energy sources, and increasing the use of electricity and alternative energy carriers. Nearly all scenarios and strategies involve a major increase in the use of renewable energy in combination with increased energy efficiency measures. Deployment of renewable energy would need to accelerate six-fold, from 0.25% annual growth in 2015 to 1.5%, to keep global warming under 2 °C.

The competitiveness of renewable energy is key to rapid deployment. In 2020, onshore wind and solar photovoltaics were the cheapest sources for new bulk electricity generation in many regions. Renewables may have higher storage costs, but non-renewables may have higher clean-up costs. A carbon price can increase the competitiveness of renewable energy.

Solar and wind energy
Wind and sun can provide large amounts of low-carbon energy at competitive production costs. The IPCC estimates that these two mitigation options have the largest potential to reduce emissions before 2030 at low cost. Solar photovoltaics (PV) has become the cheapest way to generate electricity in many regions of the world. Its growth has been close to exponential: installed capacity has roughly doubled every three years since the 1990s. A different technology is concentrated solar power (CSP), which uses mirrors or lenses to concentrate a large area of sunlight onto a receiver. With CSP, the energy can be stored for a few hours, providing supply in the evening. Solar water heating doubled between 2010 and 2019.

Regions in the higher northern and southern latitudes have the greatest potential for wind power. Offshore wind farms are more expensive, but offshore units deliver more energy per unit of installed capacity with fewer fluctuations. In most regions, wind power generation is higher in the winter, when PV output is low. For this reason, combinations of wind and solar power lead to better-balanced systems.

Other renewables
Other well-established renewable energy forms include hydropower, bioenergy and geothermal energy. Hydroelectricity is electricity generated by hydropower and plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions. Bioenergy can provide energy for electricity, heat and transport; bioenergy, in particular biogas, can provide dispatchable electricity generation. While burning plant-derived biomass releases CO2, the plants withdraw CO2 from the atmosphere while they grow. The technologies for producing, transporting and processing a fuel have a significant impact on its lifecycle emissions. For example, aviation is starting to use renewable biofuels. Geothermal power is electrical power generated from geothermal energy; geothermal electricity generation is currently used in 26 countries, and geothermal heating is in use in 70 countries.

Integrating variable renewable energy
Wind and solar power production does not consistently match demand. To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems must be flexible. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants.
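One simple way to quantify the flexibility challenge is the residual (or net) load: the demand that remains after variable wind and solar output has been subtracted. The sketch below computes it for a few invented hourly values and is purely illustrative; real system studies use much finer time series.

    # Illustrative only: residual ("net") load = demand minus variable renewable output.
    # Hours with low residual load need storage, export or curtailment; hours with
    # high residual load must be met by dispatchable plants, storage or imports.
    demand_mw = [420, 400, 460, 520, 580, 550]   # invented hourly demand
    wind_mw   = [150, 180, 120,  90,  60,  80]   # invented hourly wind output
    solar_mw  = [  0,  40, 160, 240, 200,  60]   # invented hourly solar output

    residual = [d - w - s for d, w, s in zip(demand_mw, wind_mw, solar_mw)]
    print(residual)   # [270, 180, 180, 190, 320, 410]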
The integration of larger amounts of solar and wind energy into the grid requires a change of the energy system; this is necessary to ensure that the supply of electricity matches demand. There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale: there is more wind during the night and in winter, when solar energy production is low. Linking different geographical regions through long-distance transmission lines also makes it possible to reduce variability.

It is also possible to shift energy demand in time. Energy demand management and the use of smart grids make it possible to shift consumption to the times when variable energy production is highest. Sector coupling can provide further flexibility; this involves coupling the electricity sector to the heat and mobility sectors via power-to-heat systems and electric vehicles.

Energy storage helps overcome barriers to intermittent renewable energy. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries are also in wide use; they typically store electricity for short periods. Batteries have low energy density, and this, together with their cost, makes them impractical for the large energy storage necessary to balance inter-seasonal variations in energy production. Some locations have implemented pumped hydro storage with capacity for multi-month usage.

Nuclear power
Nuclear power could complement renewables for electricity. On the other hand, environmental and security risks could outweigh the benefits. The construction of new nuclear reactors currently takes about 10 years, much longer than scaling up the deployment of wind and solar, and this timing gives rise to credit risks. However, nuclear power may be much cheaper in China, which is building a significant number of new power plants. The cost of extending nuclear power plant lifetimes is competitive with other electricity generation technologies if the long-term costs of nuclear waste disposal are excluded from the calculation. There is also no sufficient insurance coverage for nuclear accidents.

Replacing coal with natural gas

Demand reduction
Reducing demand for products and services that cause greenhouse gas emissions can help in mitigating climate change. One approach is to reduce demand through behavioural and cultural changes, for example changes in diet; reducing meat consumption in particular is an effective action individuals can take to fight climate change. Another is to reduce demand by improving infrastructure, for example by building a good public transport network. Lastly, changes in end-use technology can reduce energy demand; for instance, a well-insulated house emits less than a poorly insulated house.

Mitigation options that reduce demand for products or services help people make personal choices to reduce their carbon footprint, for example in their choice of transport or food. These mitigation options therefore have many social aspects that focus on demand reduction; they are demand-side mitigation actions. For example, people with high socio-economic status often cause more greenhouse gas emissions than those from a lower status. If they reduce their emissions and promote green policies, these people could become low-carbon lifestyle role models. However, there are many psychological variables that influence consumers, including awareness and perceived risk.
Government policies can support or hinder demand-side mitigation options. For example, public policy can promote circular economy concepts which would support climate change mitigation. Reducing greenhouse gas emissions is linked to the sharing economy. There is a debate regarding the correlation of economic growth and emissions. It seems economic growth no longer necessarily means higher emissions. Energy conservation and efficiency Global primary energy demand exceeded 161,000 terawatt hours (TWh) in 2018. This refers to electricity, transport and heating including all losses. In transport and electricity production, fossil fuel usage has a low efficiency of less than 50%. Large amounts of heat in power plants and in motors of vehicles go to waste. The actual amount of energy consumed is significantly lower at 116,000 TWh. Energy conservation is the effort made to reduce the consumption of energy by using less of an energy service. One way is to use energy more efficiently. This means using less energy than before to produce the same service. Another way is to reduce the amount of service used. An example of this would be to drive less. Energy conservation is at the top of the sustainable energy hierarchy. When consumers reduce wastage and losses they can conserve energy. The upgrading of technology as well as the improvements to operations and maintenance can result in overall efficiency improvements. Efficient energy use (or energy efficiency) is the process of reducing the amount of energy required to provide products and services. Improved energy efficiency in buildings ("green buildings"), industrial processes and transportation could reduce the world's energy needs in 2050 by one third. This would help reduce global emissions of greenhouse gases. For example, insulating a building allows it to use less heating and cooling energy to achieve and maintain thermal comfort. Improvements in energy efficiency are generally achieved by adopting a more efficient technology or production process. Another way is to use commonly accepted methods to reduce energy losses. Lifestyle changes Individual action on climate change can include personal choices in many areas. These include diet, travel, household energy use, consumption of goods and services, and family size. People who wish to reduce their carbon footprint can take high-impact actions such as avoiding frequent flying and petrol-fuelled cars, eating mainly a plant-based diet, having fewer children, using clothes and electrical products for longer, and electrifying homes. These approaches are more practical for people in high-income countries with high-consumption lifestyles. Naturally, it is more difficult for those with lower income statuses to make these changes. This is because choices like electric-powered cars may not be available. Excessive consumption is more to blame for climate change than population increase. High-consumption lifestyles have a greater environmental impact, with the richest 10% of people emitting about half the total lifestyle emissions. Dietary change Some scientists say that avoiding meat and dairy foods is the single biggest way an individual can reduce their environmental impact. The widespread adoption of a vegetarian diet could cut food-related greenhouse gas emissions by 63% by 2050. China introduced new dietary guidelines in 2016 which aim to cut meat consumption by 50% and thereby reduce greenhouse gas emissions by 1Gt per year by 2030. 
Overall, food accounts for the largest share of consumption-based greenhouse gas emissions, being responsible for nearly 20% of the global carbon footprint. Almost 15% of all anthropogenic greenhouse gas emissions have been attributed to the livestock sector. A shift towards plant-based diets would help to mitigate climate change; in particular, reducing meat consumption would help to reduce methane emissions. If high-income nations switched to a plant-based diet, vast amounts of land used for animal agriculture could be allowed to return to their natural state. This in turn has the potential to sequester 100 billion tonnes of CO2 by the end of the century. A comprehensive analysis found that plant-based diets reduce emissions, water pollution and land use significantly (by 75%), while reducing the destruction of wildlife and the usage of water.

Family size
Population growth has resulted in higher greenhouse gas emissions in most regions, particularly Africa. However, economic growth has a bigger effect than population growth. Rising incomes, changes in consumption and dietary patterns, as well as population growth, cause pressure on land and other natural resources. This leads to more greenhouse gas emissions and fewer carbon sinks. Some scholars have argued that humane policies to slow population growth should be part of a broad climate response, together with policies that end fossil fuel use and encourage sustainable consumption. Advances in female education and reproductive health, especially voluntary family planning, can contribute to reducing population growth.

Preserving and enhancing carbon sinks
An important mitigation measure is "preserving and enhancing carbon sinks". This refers to the management of Earth's natural carbon sinks in a way that preserves or increases their capability to remove CO2 from the atmosphere and to store it durably. Scientists also call this process carbon sequestration. In the context of climate change mitigation, the IPCC defines a sink as "Any process, activity or mechanism which removes a greenhouse gas, an aerosol or a precursor of a greenhouse gas from the atmosphere". Globally, the two most important carbon sinks are vegetation and the ocean. To enhance the ability of ecosystems to sequester carbon, changes are necessary in agriculture and forestry. Examples are preventing deforestation and restoring natural ecosystems by reforestation.

Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century. There are concerns about over-reliance on these technologies and about their environmental impacts. But ecosystem restoration and reduced conversion are among the mitigation tools that can yield the most emissions reductions before 2030. Land-based mitigation options are referred to as "AFOLU mitigation options" in the 2022 IPCC report on mitigation; the abbreviation stands for "agriculture, forestry and other land use". The report described the economic mitigation potential from relevant activities around forests and ecosystems as follows: "the conservation, improved management, and restoration of forests and other ecosystems (coastal wetlands, peatlands, savannas and grasslands)". A high mitigation potential is found for reducing deforestation in tropical regions. The economic potential of these activities has been estimated to be 4.2 to 7.4 gigatonnes of carbon dioxide equivalent (GtCO2-eq) per year.
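The figures in the following subsections are quoted sometimes in tonnes of carbon (GtC) and sometimes in tonnes of carbon dioxide. The two are related by the ratio of molar masses (44/12, roughly 3.67), as the short sketch below shows; the example value anticipates the forest-restoration estimate given later in this section.

    # Convert a mass of carbon into the corresponding mass of CO2.
    # CO2 has a molar mass of ~44 g/mol, of which ~12 g/mol is carbon,
    # so 1 GtC corresponds to roughly 44/12 = 3.67 Gt CO2.
    def gtc_to_gtco2(gtc: float) -> float:
        return gtc * 44.0 / 12.0

    print(round(gtc_to_gtco2(205)))   # about 750 Gt CO2 for 205 GtC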
Forests

Conservation
The Stern Review on the economics of climate change stated in 2007 that curbing deforestation was a highly cost-effective way of reducing greenhouse gas emissions. About 95% of deforestation occurs in the tropics, where clearing of land for agriculture is one of the main causes. One forest conservation strategy is to transfer rights over land from public ownership to its indigenous inhabitants. Land concessions often go to powerful extractive companies. Conservation strategies that exclude and even evict humans, called fortress conservation, often lead to more exploitation of the land, because the native inhabitants then turn to work for extractive companies to survive.

Proforestation is promoting forests to capture their full ecological potential. This is a mitigation strategy because secondary forests that have regrown on abandoned farmland are found to have less biodiversity than the original old-growth forests, and original forests store 60% more carbon than these new forests. Strategies include rewilding and establishing wildlife corridors.

Afforestation and reforestation
Afforestation is the establishment of trees where there was previously no tree cover. Scenarios for new plantations covering up to 4000 million hectares (Mha), an area of roughly 6300 x 6300 km, suggest cumulative carbon storage of more than 900 GtC (2300 Gt CO2) until 2100. But such plantations are not a viable alternative to aggressive emissions reduction, because they would need to be so large that they would eliminate most natural ecosystems or reduce food production. One example is the Trillion Tree Campaign. Preserving biodiversity is also important; for example, not all grasslands are suitable for conversion into forests, and grasslands can even turn from carbon sinks into carbon sources.

Reforestation is the restocking of existing depleted forests or of places where there were recently forests. Reforestation could save at least 1 GtCO2 per year, at an estimated cost of $5–15 per tonne of carbon dioxide (tCO2). Restoring all degraded forests all over the world could capture about 205 GtC (750 Gt CO2). With increased intensive agriculture and urbanization, there is an increase in the amount of abandoned farmland. By some estimates, for every acre of original old-growth forest cut down, more than 50 acres of new secondary forests are growing. In some countries, promoting regrowth on abandoned farmland could offset years of emissions.

Planting new trees can be expensive and a risky investment. For example, about 80 percent of planted trees in the Sahel die within two years. Reforestation has higher carbon storage potential than afforestation. Even long-deforested areas still contain an "underground forest" of living roots and tree stumps. Helping native species sprout naturally is cheaper than planting new trees, and the resulting trees are more likely to survive. This can include pruning and coppicing to accelerate growth, which also provides woodfuel, otherwise a major source of deforestation. Such practices, called farmer-managed natural regeneration, are centuries old, but the biggest obstacle to implementation is ownership of the trees by the state. The state often sells timber rights to businesses, which leads to locals uprooting seedlings because they see them as a liability. Legal aid for locals and changes to property law, such as in Mali and Niger, have led to significant changes, which scientists describe as the largest positive environmental transformation in Africa.
It is possible to discern from space the border between Niger and the more barren land in Nigeria, where the law has not changed.

Soils
There are many measures to increase soil carbon, which makes this option complex and hard to measure and account for. One advantage is that there are fewer trade-offs for these measures than for BECCS or afforestation, for example. Globally, protecting healthy soils and restoring the soil carbon sponge could remove 7.6 billion tonnes of carbon dioxide from the atmosphere annually, more than the annual emissions of the US. Trees capture CO2 while growing above ground and exude larger amounts of carbon below ground, contributing to the building of a soil carbon sponge. Carbon formed above ground is released as CO2 immediately when wood is burned. If dead wood remains untouched, only some of the carbon returns to the atmosphere as decomposition proceeds.

Farming can deplete soil carbon and render soil incapable of supporting life. However, conservation farming can protect carbon in soils and repair damage over time. The farming practice of cover crops is a form of carbon farming. Methods that enhance carbon sequestration in soil include no-till farming, residue mulching and crop rotation. Scientists have described the best management practices for European soils to increase soil organic carbon: conversion of arable land to grassland, straw incorporation, reduced tillage, straw incorporation combined with reduced tillage, ley cropping systems and cover crops.

Another mitigation option is the production of biochar and its storage in soils. Biochar is the solid material that remains after the pyrolysis of biomass. Biochar production releases half of the carbon from the biomass, which is either released into the atmosphere or captured with CCS, and retains the other half in the stable biochar, which can endure in soil for thousands of years. Biochar may increase the soil fertility of acidic soils and increase agricultural productivity. During the production of biochar, heat is released which may be used as bioenergy.

Wetlands
Wetland restoration is an important mitigation measure. It has moderate to great mitigation potential on a limited land area, with low trade-offs and costs. Wetlands perform two important functions in relation to climate change: they can sequester carbon, converting carbon dioxide to solid plant material through photosynthesis, and they store and regulate water. Wetlands sequester about 45 million tonnes of carbon per year globally. Some wetlands are a significant source of methane emissions, and some also emit nitrous oxide.

Peatland globally covers just 3% of the land's surface, but it stores up to 550 gigatonnes (Gt) of carbon. This represents 42% of all soil carbon and exceeds the carbon stored in all other vegetation types, including the world's forests. The threats to peatlands include draining the areas for agriculture and cutting down trees for lumber, as the trees help hold and fix the peatland. Additionally, peat is often sold for compost. It is possible to restore degraded peatlands by blocking drainage channels and allowing natural vegetation to recover.

Mangroves, salt marshes and seagrasses make up the majority of the ocean's vegetated habitats. They equal only 0.05% of the plant biomass on land, but they store carbon 40 times faster than tropical forests. Bottom trawling, dredging for coastal development and fertilizer runoff have damaged coastal habitats.
Notably, 85% of oyster reefs globally have been removed in the last two centuries. Oyster reefs clean the water and help other species thrive, which increases biomass in that area. In addition, oyster reefs mitigate the effects of climate change by reducing the force of waves from hurricanes and by reducing the erosion from rising sea levels. Restoration of coastal wetlands is thought to be more cost-effective than restoration of inland wetlands.

Deep ocean
These options focus on the carbon which ocean reservoirs can store. They include ocean fertilization, ocean alkalinity enhancement and enhanced weathering. The IPCC found in 2022 that ocean-based mitigation options currently have only limited deployment potential, but it assessed that their future mitigation potential is large. It found that, in total, ocean-based methods could remove 1–100 Gt of CO2 per year, at costs in the order of US$40–500 per tonne of CO2. Most of these options could also help to reduce ocean acidification, the drop in pH value caused by increased atmospheric CO2 concentrations.

Blue carbon management is another type of ocean-based biological carbon dioxide removal (CDR). It can involve land-based as well as ocean-based measures. The term usually refers to the role that tidal marshes, mangroves and seagrasses can play in carbon sequestration. Some of these efforts can also take place in deep ocean waters, where the vast majority of ocean carbon is held. These ecosystems can contribute to climate change mitigation and also to ecosystem-based adaptation. Conversely, when blue carbon ecosystems are degraded or lost, they release carbon back to the atmosphere. There is increasing interest in developing blue carbon potential. Scientists have found that in some cases these types of ecosystems remove far more carbon per unit area than terrestrial forests. However, the long-term effectiveness of blue carbon as a carbon dioxide removal solution remains under discussion.

Enhanced weathering
Enhanced weathering could remove 2–4 Gt of CO2 per year. This process aims to accelerate natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces, which speeds up chemical reactions between rocks, water and air. It removes carbon dioxide from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. Cost estimates are in the range of US$50–200 per tonne of CO2.

Other methods to capture and store CO2
In addition to traditional land-based methods to remove carbon dioxide (CO2) from the air, other technologies are under development. These could reduce CO2 emissions and lower existing atmospheric CO2 levels. Carbon capture and storage (CCS) is a method to mitigate climate change by capturing CO2 from large point sources, such as cement factories or biomass power plants, and then storing it away safely instead of releasing it into the atmosphere. The IPCC estimates that the costs of halting global warming would double without CCS.

Bioenergy with carbon capture and storage (BECCS) expands on the potential of CCS and aims to lower atmospheric CO2 levels. This process uses biomass grown for bioenergy. The biomass yields energy in useful forms such as electricity, heat or biofuels through combustion, fermentation or pyrolysis, and the process captures the CO2 that was extracted from the atmosphere when the biomass grew. It then stores the CO2 underground or applies it to land as biochar, effectively removing it from the atmosphere.
This makes BECCS a negative emissions technology (NET). Scientists estimated the potential range of negative emissions from BECCS in 2018 as 0–22 Gt per year. BECCS facilities were capturing approximately 2 million tonnes of CO2 per year. The cost and availability of biomass limit wide deployment of BECCS. BECCS currently forms a big part of achieving climate targets beyond 2050 in modelling, such as in the Integrated Assessment Models (IAMs) associated with the IPCC process, but many scientists are sceptical due to the risk of loss of biodiversity.

Direct air capture is a process of capturing CO2 directly from the ambient air, in contrast to CCS, which captures carbon from point sources. It generates a concentrated stream of CO2 for sequestration, utilisation or the production of carbon-neutral fuel and windgas. Artificial processes vary, and there are concerns about the long-term effects of some of these processes.

Mitigation by sector

Buildings
The building sector accounts for 23% of global energy-related emissions. About half of the energy is used for space and water heating. Building insulation can reduce the primary energy demand significantly. Heat pump loads may also provide a flexible resource that can participate in demand response to integrate variable renewable resources into the grid. Solar water heating uses thermal energy directly. Sufficiency measures include moving to smaller houses when the needs of households change, mixed use of spaces and the collective use of devices. Planners and civil engineers can construct new buildings using passive solar building design, low-energy building or zero-energy building techniques. In addition, it is possible to design buildings that are more energy-efficient to cool by using lighter-coloured, more reflective materials in the development of urban areas.

Heat pumps efficiently heat buildings and cool them by air conditioning. A modern heat pump typically transports around three to five times more thermal energy than the electrical energy it consumes; the exact amount depends on the coefficient of performance and the outside temperature. Refrigeration and air conditioning account for about 10% of global emissions caused by fossil fuel-based energy production and the use of fluorinated gases. Alternative cooling systems, such as passive cooling building design and passive daytime radiative cooling surfaces, can reduce air conditioning use. Suburbs and cities in hot and arid climates can significantly reduce energy consumption from cooling with daytime radiative cooling.

Energy consumption for cooling is likely to rise significantly due to increasing heat and the growing availability of devices in poorer countries. Of the 2.8 billion people living in the hottest parts of the world, only 8% currently have air conditioners, compared with 90% of people in the US and Japan. Adoption of air conditioners typically increases in warmer areas once annual household income exceeds about $10,000. By combining energy efficiency improvements and decarbonising electricity for air conditioning with the transition away from super-polluting refrigerants, the world could avoid cumulative greenhouse gas emissions of up to 210–460 GtCO2-eq over the next four decades. A shift to renewable energy in the cooling sector comes with two advantages: solar energy production peaks around mid-day, which corresponds with the load required for cooling, and cooling has a large potential for load management in the electric grid.

Urban planning
Cities emitted 28 GtCO2-eq of combined CO2 and CH4 emissions in 2020.
This was from producing and consuming goods and services. Climate-smart urban planning aims to reduce sprawl and so reduce the distance travelled, which lowers emissions from transportation. Switching away from cars by improving walkability and cycling infrastructure is beneficial to a country's economy as a whole. Urban forestry, lakes and other blue and green infrastructure can reduce emissions directly and indirectly by reducing energy demand for cooling. Methane emissions from municipal solid waste can be reduced by segregation, composting and recycling.

Transport
Transportation accounts for 15% of emissions worldwide. Increased use of public transport, low-carbon freight transport and cycling are important components of transport decarbonisation. Electric vehicles and environmentally friendly rail help to reduce the consumption of fossil fuels. In most cases, electric trains are more efficient than air transport and truck transport. Other efficiency measures include improved public transport, smart mobility, carsharing and electric hybrids. Fossil fuels for passenger cars can be included in emissions trading. Furthermore, moving away from a car-dominated transport system towards a low-carbon, advanced public transport system is important. Heavyweight, large personal vehicles such as cars require a lot of energy to move and take up much urban space. Several alternative modes of transport are available to replace them. The European Union has made smart mobility part of its European Green Deal, and in smart cities smart mobility is also important.

The World Bank is helping lower-income countries buy electric buses. Their purchase price is higher than that of diesel buses, but lower running costs and health improvements due to cleaner air can offset this higher price. Between one quarter and three quarters of cars on the road by 2050 are forecast to be electric vehicles. Hydrogen may be a solution for long-distance heavy freight trucks if batteries alone are too heavy.

Shipping
In the shipping industry, the use of liquefied natural gas (LNG) as a marine bunker fuel is driven by emissions regulations. Ship operators must switch from heavy fuel oil to more expensive oil-based fuels, implement costly flue gas treatment technologies or switch to LNG engines. Methane slip, when gas leaks unburned through the engine, lowers the advantages of LNG. Maersk, the world's biggest container shipping line and vessel operator, warns of stranded assets when investing in transitional fuels like LNG. The company lists green ammonia as one of the preferred fuel types of the future and has announced that its first carbon-neutral vessel, running on carbon-neutral methanol, will be on the water by 2023. Cruise operators are trialling partially hydrogen-powered ships. Hybrid and all-electric ferries are suitable for short distances; Norway's goal is an all-electric fleet by 2025.

Air transport
Jet airliners contribute to climate change by emitting carbon dioxide, nitrogen oxides, contrails and particulates. Their radiative forcing is estimated at 1.3–1.4 times that of CO2 alone, excluding induced cirrus cloud. In 2018, global commercial operations generated 2.4% of all CO2 emissions. The aviation industry has become more fuel-efficient, but overall emissions have risen as the volume of air travel has increased. By 2020, aviation emissions were 70% higher than in 2005, and they could grow by 300% by 2050. It is possible to reduce aviation's environmental footprint by better fuel economy in aircraft.
Optimising flight routes to lower the non-CO2 effects on climate from nitrogen oxides, particulates or contrails can also help. Aviation biofuel, carbon emission trading and carbon offsetting, which are part of the 191-nation ICAO's Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA), can lower emissions. Short-haul flight bans, train connections, personal choices and taxation on flights can lead to fewer flights. Hybrid electric, fully electric or hydrogen-powered aircraft may replace fossil fuel-powered aircraft. Most projections expect emissions from aviation to rise at least until 2040. They currently amount to 180 Mt of CO2, or 11% of transport emissions. Aviation biofuel and hydrogen can only cover a small proportion of flights in the coming years. Experts expect hybrid-driven aircraft to start commercial regional scheduled flights after 2030, and battery-powered aircraft are likely to enter the market after 2035. Under CORSIA, flight operators can purchase carbon offsets to cover their emissions above 2019 levels; CORSIA will be compulsory from 2027.

Agriculture, forestry and land use
Almost 20% of greenhouse gas emissions come from the agriculture and forestry sector. To significantly reduce these emissions, annual investments in the agriculture sector need to increase to $260 billion by 2030. The potential benefits from these investments are estimated at about $4.3 trillion by 2030, offering a substantial economic return of 16-to-1.

Mitigation measures in the food system can be divided into four categories: demand-side changes, ecosystem protections, mitigation on farms, and mitigation in supply chains. On the demand side, limiting food waste is an effective way to reduce food emissions. Changes to diets less reliant on animal products, such as plant-based diets, are also effective. With 21% of global methane emissions, cattle are a major driver of global warming. When rainforests are cut and the land is converted for grazing, the impact is even higher: in Brazil, producing 1 kg of beef can result in the emission of up to 335 kg CO2-eq. Other livestock, manure management and rice cultivation also emit greenhouse gases, in addition to fossil fuel combustion in agriculture.

Important mitigation options for reducing the greenhouse gas emissions from livestock include genetic selection, introduction of methanotrophic bacteria into the rumen, vaccines, feeds, diet modification and grazing management. Other options are diet changes towards ruminant-free alternatives, such as milk substitutes and meat analogues. Non-ruminant livestock, such as poultry, emit far fewer greenhouse gases. It is possible to cut methane emissions in rice cultivation by improved water management, combining dry seeding and one drawdown, or executing a sequence of wetting and drying. This results in emission reductions of up to 90% compared to full flooding, and can even increase yields.

Industry
Industry is the largest emitter of greenhouse gases when direct and indirect emissions are included. Electrification can reduce emissions from industry, and green hydrogen can play a major role in energy-intensive industries for which electricity is not an option. Further mitigation options involve the steel and cement industries, which can switch to less polluting production processes. Products can be made with less material to reduce emission intensity, and industrial processes can be made more efficient. Finally, circular economy measures reduce the need for new materials.
This also saves on emissions that would have been released from the mining of collecting of those materials. The decarbonisation of cement production requires new technologies, and therefore investment in innovation. Bioconcrete is one possibility to reduce emissions. But no technology for mitigation is yet mature. So CCS will be necessary at least in the short-term. Another sector with a significant carbon footprint is the steel sector, which is responsible for about 7% of global emissions. Emissions can be reduced by using electric arc furnaces to melt and recycle scrap steel. To produce virgin steel without emissions, blast furnaces could be replaced by hydrogen direct reduced iron and electric arc furnaces. Alternatively, carbon capture and storage solutions can be used. Coal, gas and oil production often come with significant methane leakage. In the early 2020s some governments recognized the scale of the problem and introduced regulations. Methane leaks at oil and gas wells and processing plants are cost-effective to fix in countries which can easily trade gas internationally. There are leaks in countries where gas is cheap; such as Iran, Russia, and Turkmenistan. Nearly all this can be stopped by replacing old components and preventing routine flaring. Coalbed methane may continue leaking even after the mine has been closed. But it can be captured by drainage and/or ventilation systems. Fossil fuel firms do not always have financial incentives to tackle methane leakage. Co-benefits Co-benefits of climate change mitigation, also often referred to as ancillary benefits, were firstly dominated in the scientific literature by studies that describe how lower GHG emissions lead to better air quality and consequently impact human health positively. The scope of co-benefits research expanded to its economic, social, ecological and political implications. Positive secondary effects that occur from climate mitigation and adaptation measures have been mentioned in research since the 1990s. The IPCC first mentioned the role of co-benefits in 2001, followed by its fourth and fifth assessment cycle stressing improved working environment, reduced waste, health benefits and reduced capital expenditures. In the early 2000s the OECD was further fostering its efforts in promoting ancillary benefits. The IPCC pointed out in 2007: "Co-benefits of GHG mitigation can be an important decision criteria in analyses carried out by policy-makers, but they are often neglected" and added that the co-benefits are "not quantified, monetised or even identified by businesses and decision-makers". Appropriate consideration of co-benefits can greatly "influence policy decisions concerning the timing and level of mitigation action", and there can be "significant advantages to the national economy and technical innovation". An analysis of climate action in the UK found that public health benefits are a major component of the total benefits derived from climate action. Employment and economic development Co-benefits can positively impact employment, industrial development, states' energy independence and energy self-consumption. The deployment of renewable energies can foster job opportunities. Depending on the country and deployment scenario, replacing coal power plants with renewable energy can more than double the number of jobs per average MW capacity. Investments in renewable energies, especially in solar- and wind energy, can boost the value of production. 
Countries which rely on energy imports can enhance their energy independence and ensure supply security by deploying renewables. National energy generation from renewables lowers the demand for fossil fuel imports, which increases annual economic savings. The European Commission forecasts a shortage of 180,000 skilled workers in hydrogen production and 66,000 in solar photovoltaic power by 2030.

Energy security
A higher share of renewables can additionally lead to more energy security. Socioeconomic co-benefits such as energy access in rural areas and improved rural livelihoods have also been analysed. Rural areas which are not fully electrified can benefit from the deployment of renewable energies. Solar-powered mini-grids can be economically viable and cost-competitive, and can reduce the number of power cuts. Energy reliability has additional social implications: stable electricity improves the quality of education. The International Energy Agency (IEA) spelled out the "multiple benefits approach" of energy efficiency, while the International Renewable Energy Agency (IRENA) operationalised the list of co-benefits of the renewable energy sector.

Health and well-being
The health benefits from climate change mitigation are significant. Potential measures can not only mitigate future health impacts from climate change but also improve health directly. Climate change mitigation is interconnected with various health co-benefits, such as those from reduced air pollution. Air pollution generated by fossil fuel combustion is both a major driver of global warming and the cause of a large number of annual deaths; some estimates put the excess deaths during 2018 in the millions. A 2023 study estimated that fossil fuels kill over 5 million people each year, as of 2019, by causing diseases such as heart attack, stroke and chronic obstructive pulmonary disease. Particulate air pollution kills by far the most, followed by ground-level ozone. Mitigation policies can also promote healthier diets with less red meat, more active lifestyles, and increased exposure to green urban spaces. Access to urban green spaces provides benefits to mental health as well. The increased use of green and blue infrastructure can reduce the urban heat island effect, which reduces heat stress on people.

Climate change adaptation
Some mitigation measures have co-benefits in the area of climate change adaptation. This is for example the case for many nature-based solutions. Examples in the urban context include urban green and blue infrastructure, which provide mitigation as well as adaptation benefits. This can be in the form of urban forests and street trees, green roofs and walls, urban agriculture and so forth. The mitigation is achieved through the conservation and expansion of carbon sinks and the reduced energy use of buildings. Adaptation benefits come, for example, through reduced heat stress and flooding risk.

Negative side effects
Mitigation measures can also have negative side effects and risks. In agriculture and forestry, mitigation measures can affect biodiversity and ecosystem functioning. In renewable energy, mining for metals and minerals can increase threats to conservation areas. There is some research into ways to recycle solar panels and electronic waste, which would create a source for materials so there is no need to mine them. Scholars have found that discussions about risks and negative side effects of mitigation measures can lead to deadlock or the feeling that there are insuperable barriers to taking action.
Costs and funding Several factors affect mitigation cost estimates. One is the baseline. This is a reference scenario that the alternative mitigation scenario is compared with. Others are the way costs are modelled, and assumptions about future government policy. Cost estimates for mitigation for specific regions depend on the quantity of emissions allowed for that region in future, as well as the timing of interventions. Mitigation costs will vary according to how and when emissions are cut. Early, well-planned action will minimize the costs. Globally, the benefits of keeping warming under 2 °C exceed the costs. Economists estimate the cost of climate change mitigation at between 1% and 2% of GDP. While this is a large sum, it is still far less than the subsidies governments provide to the ailing fossil fuel industry. The International Monetary Fund estimated this at more than $5 trillion per year. Another estimate says that financial flows for climate mitigation and adaptation are going to be over $800 billion per year. These financial requirements are predicted to exceed $4 trillion per year by 2030. Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs. The economic repercussions of mitigation vary widely across regions and households, depending on policy design and level of international cooperation. Delayed global cooperation increases policy costs across regions, especially in those that are relatively carbon intensive at present. Pathways with uniform carbon values show higher mitigation costs in more carbon-intensive regions, in fossil-fuels exporting regions and in poorer regions. Aggregate quantifications expressed in GDP or monetary terms undervalue the economic effects on households in poorer countries. The actual effects on welfare and well-being are comparatively larger. Cost–benefit analysis may be unsuitable for analysing climate change mitigation as a whole. But it is still useful for analysing the difference between a 1.5 °C target and 2 °C. One way of estimating the cost of reducing emissions is by considering the likely costs of potential technological and output changes. Policymakers can compare the marginal abatement costs of different methods to assess the cost and amount of possible abatement over time. The marginal abatement costs of the various measures will differ by country, by sector, and over time. Eco-tariffs on only imports contribute to reduced global export competitiveness and to deindustrialization. Avoided costs of climate change effects It is possible to avoid some of the costs of the effects of climate change by limiting climate change. According to the Stern Review, inaction can be as high as the equivalent of losing at least 5% of global gross domestic product (GDP) each year, now and forever. This can be up to 20% of GDP or more when including a wider range of risks and impacts. But mitigating climate change will only cost about 2% of GDP. Also it may not be a good idea from a financial perspective to delay significant reductions in greenhouse gas emissions. Mitigation solutions are often evaluated in terms of costs and greenhouse gas reduction potentials. This fails to take into account the direct effects on human well-being. Distributing emissions abatement costs Mitigation at the speed and scale required to limit warming to 2 °C or below implies deep economic and structural changes. These raise multiple types of distributional concerns across regions, income classes and sectors. 
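The comparison of marginal abatement costs mentioned above can be made concrete with a toy calculation: given a set of measures, each with a cost per tonne avoided and an abatement potential, sorting them by cost shows how much abatement is available below a given carbon price. The measures and all numbers below are invented for illustration only.

    # Toy marginal abatement cost (MAC) comparison. Costs are in US$ per tonne
    # CO2-eq avoided, potentials in Mt CO2-eq per year; all values are invented.
    measures = [
        ("Building insulation",          -20, 150),   # negative cost = net savings
        ("Onshore wind",                  10, 400),
        ("Industrial electrification",    60, 250),
        ("Direct air capture",           300, 100),
    ]
    price_ceiling = 100   # consider only measures cheaper than this carbon price

    cumulative = 0
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        if cost <= price_ceiling:
            cumulative += potential
            print(f"{name:28s} {cost:5d} $/t   cumulative {cumulative} Mt/yr")

In this toy example, roughly 800 Mt of annual abatement would be available at or below a carbon price of $100 per tonne; real marginal abatement cost curves differ by country, sector and year, as noted above.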
There have been different proposals on how to allocate responsibility for cutting emissions. These include egalitarianism, basic needs according to a minimum level of consumption, proportionality and the polluter-pays principle. A specific proposal is "equal per capita entitlements". This approach has two categories: in the first, emissions are allocated according to national population; in the second, emissions are allocated in a way that attempts to account for historical or cumulative emissions.

Funding
In order to reconcile economic development with mitigating carbon emissions, developing countries need particular support, both financial and technical. The IPCC found that accelerated support would also tackle inequities in financial and economic vulnerability to climate change. One way to achieve this is the Kyoto Protocol's Clean Development Mechanism (CDM).

Policies

National policies
Climate change mitigation policies can have a large and complex impact on the socio-economic status of individuals and countries. This impact can be both positive and negative. It is important to design policies well and make them inclusive; otherwise, climate change mitigation measures can impose higher financial costs on poor households.

An evaluation was conducted on 1,500 climate policy interventions made between 1998 and 2022. The interventions took place in 41 countries and across 6 continents, which together contributed 81% of the world's total emissions as of 2019. The evaluation found 63 successful interventions that resulted in significant emission reductions; the total release averted by these interventions was between 0.6 and 1.8 billion metric tonnes. The study focused on interventions with at least 4.5% emission reductions, but the researchers noted that meeting the reductions required by the Paris Agreement would require reductions of 23 billion metric tonnes per year. Generally, carbon pricing was found to be most effective in developed countries, while regulation was most effective in developing countries. Complementary policy mixes benefited from synergies and were mostly found to be more effective than the implementation of isolated policies.

The OECD recognises 48 distinct climate mitigation policies suitable for implementation at the national level. Broadly, these can be categorised into three types: market-based instruments, non-market-based instruments and other policies. Other policies include establishing an independent climate advisory body. Non-market-based policies include implementing or tightening regulatory standards, which set technology or performance standards; they can be effective in addressing the market failure of informational barriers. Among market-based policies, the carbon price has been found to be the most effective (at least for developed economies), and has its own section below. Additional market-based policy instruments for climate change mitigation include:
Emissions taxes: These often require domestic emitters to pay a fixed fee or tax for every tonne of CO2 emissions they release into the atmosphere. Methane emissions from fossil fuel extraction are also occasionally taxed, but methane and nitrous oxide from agriculture are typically not subject to tax.
Removing unhelpful subsidies: Many countries provide subsidies for activities that affect emissions. For example, significant fossil fuel subsidies are present in many countries. Phasing out fossil fuel subsidies is crucial to address the climate crisis.
It must however be done carefully to avoid protests and making poor people poorer. Creating helpful subsidies: Creating subsidies and financial incentives. One example is energy subsidies to support clean generation which is not yet commercially viable such as tidal power. Tradable permits: A permit system can limit emissions. Carbon pricing Imposing additional costs on greenhouse gas emissions can make fossil fuels less competitive and accelerate investments into low-carbon sources of energy. A growing number of countries raise a fixed carbon tax or participate in dynamic carbon emission trading (ETS) systems. In 2021, more than 21% of global greenhouse gas emissions were covered by a carbon price. This was a big increase from earlier due to the introduction of the Chinese national carbon trading scheme. Trading schemes offer the possibility to limit emission allowances to certain reduction targets. However, an oversupply of allowances keeps most ETS at low price levels around $10 with a low impact. This includes the Chinese ETS which started with $7/t in 2021. One exception is the European Union Emission Trading Scheme where prices began to rise in 2018. They reached about €80/t in 2022. This results in additional costs of about €0.04/KWh for coal and €0.02/KWh for gas combustion for electricity, depending on the emission intensity. Industries which have high energy requirements and high emissions often pay only very low energy taxes, or even none at all. While this is often part of national schemes, carbon offsets and credits can be part of a voluntary market as well such as on the international market. Notably, the company Blue Carbon of the UAE has bought ownership over an area equivalent to the United Kingdom to be preserved in return for carbon credits. International agreements Almost all countries are parties to the United Nations Framework Convention on Climate Change (UNFCCC). The ultimate objective of the UNFCCC is to stabilize atmospheric concentrations of greenhouse gases at a level that would prevent dangerous human interference with the climate system. Although not designed for this purpose, the Montreal Protocol has benefited climate change mitigation efforts. The Montreal Protocol is an international treaty that has successfully reduced emissions of ozone-depleting substances such as CFCs. These are also greenhouse gases. Paris Agreement History Historically efforts to deal with climate change have taken place at a multinational level. They involve attempts to reach a consensus decision at the United Nations, under the United Nations Framework Convention on Climate Change (UNFCCC). This is the dominant approach historically of engaging as many international governments as possible in taking action on a worldwide public issue. The Montreal Protocol in 1987 is a precedent that this approach can work. But some critics say the top-down framework of only utilizing the UNFCCC consensus approach is ineffective. They put forward counter-proposals of bottom-up governance. At this same time this would lessen the emphasis on the UNFCCC. The Kyoto Protocol to the UNFCCC adopted in 1997 set out legally binding emission reduction commitments for the "Annex 1" countries. The Protocol defined three international policy instruments ("Flexibility Mechanisms") which could be used by the Annex 1 countries to meet their emission reduction commitments. 
According to Bashmakov, use of these instruments could significantly reduce the costs for Annex 1 countries in meeting their emission reduction commitments. The Paris Agreement reached in 2015 succeeded the Kyoto Protocol which expired in 2020. Countries that ratified the Kyoto protocol committed to reduce their emissions of carbon dioxide and five other greenhouse gases, or engage in carbon emissions trading if they maintain or increase emissions of these gases. In 2015, the UNFCCC's "structured expert dialogue" came to the conclusion that, "in some regions and vulnerable ecosystems, high risks are projected even for warming above 1.5 °C". Together with the strong diplomatic voice of the poorest countries and the island nations in the Pacific, this expert finding was the driving force leading to the decision of the 2015 Paris Climate Conference to lay down this 1.5 °C long-term target on top of the existing 2 °C goal. Society and culture Commitments to divest More than 1000 organizations with investments worth US$8 trillion have made commitments to fossil fuel divestment. Socially responsible investing funds allow investors to invest in funds that meet high environmental, social and corporate governance (ESG) standards. Barriers There are individual, institutional and market barriers to achieving climate change mitigation. They differ for all the different mitigation options, regions and societies. Difficulties with accounting for carbon dioxide removal can act as economic barriers. This would apply to BECCS (bioenergy with carbon capture and storage). The strategies that companies follow can act as a barrier. But they can also accelerate decarbonisation. In order to decarbonise societies the state needs to play a predominant role. This is because it requires a massive coordination effort. This strong government role can only work well if there is social cohesion, political stability and trust. For land-based mitigation options, finance is a major barrier. Other barriers are cultural values, governance, accountability and institutional capacity. Developing countries face further barriers to mitigation. The cost of capital increased in the early 2020s. A lack of available capital and finance is common in developing countries. Together with the absence of regulatory standards, this barrier supports the proliferation of inefficient equipment. There are also financial and capacity barrier in many of these countries. One study estimates that only 0.12% of all funding for climate-related research goes on the social science of climate change mitigation. Vastly more funding goes on natural science studies of climate change. Considerable sums also go on studies of the impact of climate change and adaptation to it. Impacts of the COVID-19 pandemic The COVID-19 pandemic led some governments to shift their focus away from climate action, at least temporarily. This obstacle to environmental policy efforts may have contributed to slowed investment in green energy technologies. The economic slowdown resulting from COVID-19 added to this effect. In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions. The direct impact of pandemic policies had a negligible long-term impact on climate change. Examples by country United States China China has committed to peak emissions by 2030 and reach net zero by 2060. 
Warming cannot be limited to 1.5 °C if any coal plants in China operate without carbon capture after 2045. The Chinese national carbon trading scheme started in 2021.

European Union
The European Commission estimates that an additional €477 billion in annual investment is needed for the European Union to meet its Fit-for-55 decarbonization goals. In the European Union, government-driven policies and the European Green Deal have helped position greentech, as one example, as a vital area for venture capital investment. By 2023, venture capital in the EU's greentech sector equaled that of the United States, reflecting a concerted effort to drive innovation and mitigate climate change through targeted financial support. The European Green Deal has fostered policies that contributed to a 30% rise in venture capital for greentech companies in the EU from 2021 to 2023, despite a downturn in other sectors during the same period. While overall venture capital investment in the EU remains about six times lower than in the United States, the greentech sector has closed this gap significantly, attracting substantial funding. Key areas benefitting from increased investment are energy storage, circular economy initiatives, and agricultural technology. This is supported by the EU's ambitious goal to reduce greenhouse gas emissions by at least 55% by 2030.

See also
Carbon budget
Carbon offsets and credits
Carbon price
Climate movement
Climate change denial
Tipping points in the climate system
Gas exchange
Gas exchange is the physical process by which gases move passively by diffusion across a surface. For example, this surface might be the air/water interface of a water body, the surface of a gas bubble in a liquid, a gas-permeable membrane, or a biological membrane that forms the boundary between an organism and its extracellular environment. Gases are constantly consumed and produced by cellular and metabolic reactions in most living things, so an efficient system for gas exchange between, ultimately, the interior of the cell(s) and the external environment is required. Small, particularly unicellular organisms, such as bacteria and protozoa, have a high surface-area to volume ratio. In these creatures the gas exchange membrane is typically the cell membrane. Some small multicellular organisms, such as flatworms, are also able to perform sufficient gas exchange across the skin or cuticle that surrounds their bodies. However, in most larger organisms, which have small surface-area to volume ratios, specialised structures with convoluted surfaces such as gills, pulmonary alveoli and spongy mesophylls provide the large area needed for effective gas exchange. These convoluted surfaces may sometimes be internalised into the body of the organism. This is the case with the alveoli, which form the inner surface of the mammalian lung, the spongy mesophyll, which is found inside the leaves of some kinds of plant, or the gills of those molluscs that have them, which are found in the mantle cavity. In aerobic organisms, gas exchange is particularly important for respiration, which involves the uptake of oxygen and release of carbon dioxide. Conversely, in oxygenic photosynthetic organisms such as most land plants, uptake of carbon dioxide and release of both oxygen and water vapour are the main gas-exchange processes occurring during the day. Other gas-exchange processes are important in less familiar organisms: e.g. carbon dioxide, methane and hydrogen are exchanged across the cell membrane of methanogenic archaea. In nitrogen fixation by diazotrophic bacteria, and denitrification by heterotrophic bacteria (such as Paracoccus denitrificans and various pseudomonads), nitrogen gas is exchanged with the environment, being taken up by the former and released into it by the latter, while giant tube worms rely on bacteria to oxidize hydrogen sulfide extracted from their deep sea environment, using dissolved oxygen in the water as an electron acceptor. Diffusion only takes place with a concentration gradient. Gases will flow from a high concentration to a low concentration. A high oxygen concentration in the alveoli and low oxygen concentration in the capillaries causes oxygen to move into the capillaries. A high carbon dioxide concentration in the capillaries and low carbon dioxide concentration in the alveoli causes carbon dioxide to move into the alveoli. Physical principles of gas-exchange Diffusion and surface area The exchange of gases occurs as a result of diffusion down a concentration gradient. Gas molecules move from a region in which they are at high concentration to one in which they are at low concentration. 
Diffusion is a passive process, meaning that no energy is required to power the transport, and it follows Fick's law: J = −D (dφ/dx). In relation to a typical biological system, where two compartments ('inside' and 'outside') are separated by a membrane barrier, and where a gas is allowed to spontaneously diffuse down its concentration gradient, the terms have the following meanings. J is the flux, the amount of gas diffusing per unit area of membrane per unit time. Note that this is already scaled for the area of the membrane. D is the diffusion coefficient, which will differ from gas to gas, and from membrane to membrane, according to the size of the gas molecule in question, and the nature of the membrane itself (particularly its viscosity, temperature and hydrophobicity). φ is the concentration of the gas. x is the position across the thickness of the membrane. dφ/dx is therefore the concentration gradient across the membrane. If the two compartments are individually well-mixed, then this simplifies to the difference in concentration of the gas between the inside and outside compartments divided by the thickness of the membrane. The negative sign indicates that the diffusion is always in the direction that - over time - will destroy the concentration gradient, i.e. the gas moves from high concentration to low concentration until eventually the inside and outside compartments reach equilibrium. Gases must first dissolve in a liquid in order to diffuse across a membrane, so all biological gas exchange systems require a moist environment. In general, the higher the concentration gradient across the gas-exchanging surface, the faster the rate of diffusion across it. Conversely, the thinner the gas-exchanging surface (for the same concentration difference), the faster the gases will diffuse across it. In the equation above, J is the flux expressed per unit area, so increasing the area will make no difference to its value. However, an increase in the available surface area will increase the amount of gas that can diffuse in a given time. This is because the amount of gas diffusing per unit time (dq/dt) is the product of J and the area of the gas-exchanging surface, A: dq/dt = J × A. Single-celled organisms such as bacteria and amoebae do not have specialised gas exchange surfaces, because they can take advantage of the high surface area they have relative to their volume. The amount of gas an organism produces (or requires) in a given time will be in rough proportion to the volume of its cytoplasm. The volume of a unicellular organism is very small; thus, it produces (and requires) a relatively small amount of gas in a given time. In comparison to this small volume, the surface area of its cell membrane is very large, and adequate for its gas-exchange needs without further modification. However, as an organism increases in size, its surface area and volume do not scale in the same way. Consider an imaginary organism that is a cube of side-length L. Its volume increases with the cube (L³) of its length, but its external surface area increases only with the square (L²) of its length. This means the external surface rapidly becomes inadequate for the rapidly increasing gas-exchange needs of a larger volume of cytoplasm.
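As a rough numerical illustration of Fick's law and of the surface-area argument above, the short Python sketch below computes the flux across a well-mixed membrane and the total gas transferred per unit time, and then shows how the surface-area-to-volume ratio of a cubic "organism" falls as its side length grows. All parameter values (diffusion coefficient, concentrations, thickness, areas) are arbitrary illustrative assumptions, not measured physiological data.

```python
# Illustrative sketch of Fick's law, J = -D * d(phi)/dx, with invented numbers.

def fick_flux(D, phi_out, phi_in, thickness):
    """Flux (amount per unit area per unit time) across a well-mixed membrane.

    x runs from outside to inside, so a positive result means net flow inward,
    down the concentration gradient.
    """
    return -D * (phi_in - phi_out) / thickness

def transfer_rate(flux, area):
    """Total amount of gas crossing the membrane per unit time, dq/dt = J * A."""
    return flux * area

# Assumed (illustrative) values: D in cm^2/s, concentrations in mol/cm^3, thickness in cm.
D = 2e-5
phi_outside, phi_inside = 8e-6, 2e-6
membrane_thickness = 1e-4

J = fick_flux(D, phi_outside, phi_inside, membrane_thickness)

# Doubling the area leaves J unchanged but doubles the amount transferred per unit time.
for area_cm2 in (1.0, 2.0):
    print(f"area={area_cm2} cm^2  flux={J:.3e}  dq/dt={transfer_rate(J, area_cm2):.3e}")

# Surface-area-to-volume scaling for a cubic "organism" of side L (arbitrary units).
for L in (1, 10, 100):
    surface, volume = 6 * L**2, L**3
    print(f"L={L:>3}  surface/volume = {surface / volume:.3f}")
```

The last loop makes the scaling argument explicit: surface area per unit volume shrinks in proportion to 1/L, which is why larger organisms need folded or internalised exchange surfaces.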
Additionally, the thickness of the surface that gases must cross (dx in Fick's law) can also be larger in larger organisms: in the case of a single-celled organism, a typical cell membrane is only 10 nm thick; but in larger organisms such as roundworms (Nematoda) the equivalent exchange surface - the cuticle - is substantially thicker at 0.5 μm. Interaction with circulatory systems In multicellular organisms, therefore, specialised respiratory organs such as gills or lungs are often used to provide the additional surface area for the required rate of gas exchange with the external environment. However, the distances between the gas exchanger and the deeper tissues are often too great for diffusion to meet the gaseous requirements of these tissues. The gas exchangers are therefore frequently coupled to gas-distributing circulatory systems, which transport the gases evenly to all the body tissues regardless of their distance from the gas exchanger. Some multicellular organisms such as flatworms (Platyhelminthes) are relatively large but very thin, allowing their outer body surface to act as a gas exchange surface without the need for a specialised gas exchange organ. Flatworms therefore lack gills or lungs, and also lack a circulatory system. Other multicellular organisms such as sponges (Porifera) have an inherently high surface area, because they are very porous and/or branched. Sponges do not require a circulatory system or specialised gas exchange organs, because their feeding strategy involves one-way pumping of water through their porous bodies using flagellated collar cells. Each cell of the sponge's body is therefore exposed to a constant flow of fresh oxygenated water. They can therefore rely on diffusion across their cell membranes to carry out the gas exchange needed for respiration. In organisms that have circulatory systems associated with their specialized gas-exchange surfaces, a great variety of systems are used for the interaction between the two. In a countercurrent flow system, air (or, more usually, the water containing dissolved air) is drawn in the opposite direction to the flow of blood in the gas exchanger. A countercurrent system such as this maintains a steep concentration gradient along the length of the gas-exchange surface (see lower diagram in Fig. 2). This is the situation seen in the gills of fish and many other aquatic creatures. The gas-containing environmental water is drawn unidirectionally across the gas-exchange surface, with the blood-flow in the gill capillaries beneath flowing in the opposite direction. Although this theoretically allows almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood. Alternative arrangements are the cross-current systems found in birds and the dead-end air-filled sac systems found in the lungs of mammals. In a cocurrent flow system, the blood and gas (or the fluid containing the gas) move in the same direction through the gas exchanger. This means the magnitude of the gradient is variable along the length of the gas-exchange surface, and the exchange will eventually stop when an equilibrium has been reached (see upper diagram in Fig. 2). Cocurrent flow gas exchange systems are not known to be used in nature. Mammals The gas exchanger in mammals is internalized to form lungs, as it is in most of the larger land animals.
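Before turning to the mammalian lung in detail, here is a toy numerical comparison of the countercurrent and cocurrent arrangements described above. It steps a crude, discretised two-stream exchanger towards steady state and reports how much of the incoming gas ends up in the blood in each case; the segment count, per-segment transfer fraction and inlet concentrations are arbitrary assumptions chosen only to show the qualitative contrast, not a model of any real gill or lung.

```python
# Toy discretised gas exchanger: in each of N segments the two streams relax
# towards each other's local concentration by a fraction k. Illustrative only;
# N, k and the inlet values are arbitrary assumptions.

N = 50            # segments along the exchanger
k = 0.1           # fraction of the local difference exchanged per segment
WATER_IN = 1.0    # gas concentration entering in the water (arbitrary units)
BLOOD_IN = 0.0    # gas concentration entering in the blood

def march(inlet, other, forward):
    """Concentration profile of one stream, given the other stream's profile."""
    order = range(N) if forward else range(N - 1, -1, -1)
    conc, profile = inlet, [0.0] * N
    for i in order:
        conc -= k * (conc - other[i])   # move towards the other stream locally
        profile[i] = conc
    return profile, conc                # conc = outlet concentration

def blood_outlet(countercurrent, sweeps=500):
    water, blood = [WATER_IN] * N, [BLOOD_IN] * N
    out = BLOOD_IN
    for _ in range(sweeps):             # fixed-point iteration to steady state
        water, _ = march(WATER_IN, blood, forward=True)
        blood, out = march(BLOOD_IN, water, forward=not countercurrent)
    return out

for label, flag in (("cocurrent", False), ("countercurrent", True)):
    print(f"{label:>15}: blood leaves with ~{blood_outlet(flag):.2f} "
          f"of the water's inlet concentration")
```

With these assumed settings the cocurrent arrangement stalls near the halfway point, because the two streams equilibrate with each other, while the countercurrent arrangement keeps a gradient along the whole exchanger and transfers considerably more gas.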
Gas exchange occurs in microscopic dead-end air-filled sacs called alveoli, where a very thin membrane (called the blood-air barrier) separates the blood in the alveolar capillaries (in the walls of the alveoli) from the alveolar air in the sacs. Exchange membrane The membrane across which gas exchange takes place in the alveoli (i.e. the blood-air barrier) is extremely thin (in humans, on average, 2.2 μm thick). It consists of the alveolar epithelial cells, their basement membranes and the endothelial cells of the pulmonary capillaries (Fig. 4). The large surface area of the membrane comes from the folding of the membrane into about 300 million alveoli, with diameters of approximately 75–300 μm each. This provides an extremely large surface area (approximately 145 m²) across which gas exchange can occur. Alveolar air Air is brought to the alveoli in small doses (called the tidal volume) by breathing in (inhalation) and out (exhalation) through the respiratory airways, a set of relatively narrow and moderately long tubes which start at the nose or mouth and end in the alveoli of the lungs in the chest. Air moves in and out through the same set of tubes, in which the flow is in one direction during inhalation, and in the opposite direction during exhalation. During each inhalation, at rest, approximately 500 ml of fresh air flows in through the nose. It is warmed and moistened as it flows through the nose and pharynx. By the time it reaches the trachea, the inhaled air's temperature is 37 °C and it is saturated with water vapor. On arrival in the alveoli it is diluted and thoroughly mixed with the approximately 2.5–3.0 liters of air that remained in the alveoli after the last exhalation. This relatively large volume of air that is semi-permanently present in the alveoli throughout the breathing cycle is known as the functional residual capacity (FRC). At the beginning of inhalation, the airways are filled with unchanged alveolar air, left over from the last exhalation. This is the dead space volume, which is usually about 150 ml. It is the first air to re-enter the alveoli during inhalation. Only after the dead space air has returned to the alveoli does the remainder of the tidal volume (500 ml − 150 ml = 350 ml) enter the alveoli. The entry of such a small volume of fresh air with each inhalation ensures that the composition of the FRC hardly changes during the breathing cycle (Fig. 5). The alveolar partial pressure of oxygen remains very close to 13–14 kPa (100 mmHg), and the partial pressure of carbon dioxide varies minimally around 5.3 kPa (40 mmHg) throughout the breathing cycle (of inhalation and exhalation). The corresponding partial pressures of oxygen and carbon dioxide in the ambient (dry) air at sea level are 21 kPa (160 mmHg) and 0.04 kPa (0.3 mmHg) respectively. This alveolar air, which constitutes the FRC, completely surrounds the blood in the alveolar capillaries (Fig. 6). Gas exchange in mammals occurs between this alveolar air (which differs significantly from fresh air) and the blood in the alveolar capillaries. The gases on either side of the gas exchange membrane equilibrate by simple diffusion. This ensures that the partial pressures of oxygen and carbon dioxide in the blood leaving the alveolar capillaries, and ultimately circulating throughout the body, are the same as those in the FRC.
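A quick back-of-the-envelope check, using only the volumes quoted above, shows why the composition of the functional residual capacity changes so little with each breath. The short sketch below simply restates that arithmetic, taking the tidal volume as 500 ml, the dead space as 150 ml, and the FRC as roughly 3 litres (the upper end of the 2.5–3.0 litre range given above).

```python
# Arithmetic from the figures quoted above (all volumes in millilitres).
tidal_volume = 500     # air inhaled per breath at rest
dead_space = 150       # airway air that re-enters the alveoli unchanged
frc = 3000             # functional residual capacity, taken here as ~3 L

fresh_air_per_breath = tidal_volume - dead_space
fraction_replaced = fresh_air_per_breath / (frc + fresh_air_per_breath)

print(f"Fresh air reaching the alveoli per breath: {fresh_air_per_breath} ml")
print(f"Fraction of alveolar gas replaced per breath: {fraction_replaced:.1%}")
```

Only about a tenth of the alveolar gas is exchanged per breath under these assumptions, which is why the alveolar partial pressures stay so stable across the breathing cycle.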
The marked difference between the composition of the alveolar air and that of the ambient air can be maintained because the functional residual capacity is contained in dead-end sacs connected to the outside air by long, narrow tubes (the airways: nose, pharynx, larynx, trachea, bronchi and their branches and sub-branches down to the bronchioles). This anatomy, and the fact that the lungs are not emptied and re-inflated with each breath, provides mammals with a "portable atmosphere", whose composition differs significantly from the present-day ambient air. The composition of the air in the FRC is carefully monitored by measuring the partial pressures of oxygen and carbon dioxide in the arterial blood. If either gas pressure deviates from normal, reflexes are elicited that change the rate and depth of breathing in such a way that normality is restored within seconds or minutes. Pulmonary circulation All the blood returning from the body tissues to the right side of the heart flows through the alveolar capillaries before being pumped around the body again. On its passage through the lungs, the blood comes into close contact with the alveolar air, separated from it by a very thin diffusion membrane which is only, on average, about 2 μm thick. The gas pressures in the blood will therefore rapidly equilibrate with those in the alveoli, ensuring that the arterial blood that circulates to all the tissues throughout the body has an oxygen tension of 13–14 kPa (100 mmHg), and a carbon dioxide tension of 5.3 kPa (40 mmHg). These arterial partial pressures of oxygen and carbon dioxide are homeostatically controlled. A rise in the arterial carbon dioxide tension and, to a lesser extent, a fall in the arterial oxygen tension will reflexly cause deeper and faster breathing until the blood gas tensions return to normal. The converse happens when the carbon dioxide tension falls, or, again to a lesser extent, the oxygen tension rises: the rate and depth of breathing are reduced until blood gas normality is restored. Since the blood arriving in the alveolar capillaries has an oxygen tension of, on average, 6 kPa (45 mmHg), while that of the alveolar air is 13 kPa (100 mmHg), there will be a net diffusion of oxygen into the capillary blood, changing the composition of the 3 liters of alveolar air slightly. Similarly, since the blood arriving in the alveolar capillaries has a carbon dioxide tension of about 6 kPa (45 mmHg), whereas that of the alveolar air is 5.3 kPa (40 mmHg), there is a net movement of carbon dioxide out of the capillaries into the alveoli. The changes brought about by these net flows of individual gases into and out of the functional residual capacity necessitate the replacement of about 15% of the alveolar air with ambient air every 5 seconds or so. This is very tightly controlled by the continuous monitoring of the arterial blood gas tensions (which accurately reflect partial pressures of the respiratory gases in the alveolar air) by the aortic bodies, the carotid bodies, and the blood gas and pH sensor on the anterior surface of the medulla oblongata in the brain. There are also oxygen and carbon dioxide sensors in the lungs, but they primarily determine the diameters of the bronchioles and pulmonary capillaries, and are therefore responsible for directing the flow of air and blood to different parts of the lungs. It is only as a result of accurately maintaining the composition of the 3 liters of alveolar air that with each breath some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air.
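The reflex control described above is, in engineering terms, a negative-feedback loop. The sketch below is a deliberately crude toy, not a physiological model: it assumes a single well-mixed alveolar CO2 compartment and a proportional controller that nudges ventilation whenever arterial PCO2 drifts away from a 5.3 kPa set point. All rate constants and gains are invented purely for illustration.

```python
# Toy negative-feedback model of ventilatory control of arterial PCO2.
# One well-mixed compartment; every constant here is an illustrative assumption.

SET_POINT = 5.3          # target arterial PCO2 (kPa)
GAIN = 2.0               # extra ventilation per kPa of error (L/min per kPa), assumed
BASE_VENTILATION = 5.0   # resting alveolar ventilation (L/min), assumed
CO2_PRODUCTION = 0.265   # metabolic CO2 source term (kPa*L/min), chosen so the
                         # loop settles exactly at the 5.3 kPa set point

def simulate(minutes=30, dt=0.1, disturbance=1.5):
    pco2 = SET_POINT + disturbance            # start just after a disturbance
    for step in range(int(minutes / dt)):
        error = pco2 - SET_POINT
        ventilation = max(0.0, BASE_VENTILATION + GAIN * error)
        # CO2 rises with metabolic production and is washed out in proportion
        # to ventilation and the current PCO2 (a bare-bones mass balance).
        pco2 += (CO2_PRODUCTION - ventilation * pco2 / 100.0) * dt
        if step % int(5 / dt) == 0:
            print(f"t={step * dt:5.1f} min  PCO2={pco2:.2f} kPa  "
                  f"ventilation={ventilation:.1f} L/min")

simulate()
```

Run as written, the disturbed PCO2 drives ventilation up, which washes out CO2 until the value settles back towards the set point, mirroring in caricature the behaviour described in the next paragraph.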
If more carbon dioxide than usual has been lost by a short period of hyperventilation, respiration will be slowed down or halted until the alveolar carbon dioxide partial pressure has returned to 5.3 kPa (40 mmHg). It is therefore strictly speaking untrue that the primary function of the respiratory system is to rid the body of carbon dioxide "waste". In fact, the total concentration of carbon dioxide in arterial blood is about 26 mM (or 58 ml per 100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml per 100 ml blood). This large concentration of carbon dioxide plays a pivotal role in the determination and maintenance of the pH of the extracellular fluids. The carbon dioxide that is breathed out with each breath could probably more correctly be seen as a byproduct of the body's extracellular fluid carbon dioxide and pH homeostats. If these homeostats are compromised, then a respiratory acidosis or a respiratory alkalosis will occur. In the long run these can be compensated by renal adjustments to the H+ and HCO3− concentrations in the plasma; but since this takes time, the hyperventilation syndrome can, for instance, occur when agitation or anxiety cause a person to breathe fast and deeply, blowing off too much CO2 from the blood into the outside air and precipitating a set of distressing symptoms which result from an excessively high pH of the extracellular fluids. Oxygen has a very low solubility in water, and is therefore carried in the blood loosely combined with hemoglobin. The oxygen is held on the hemoglobin by four ferrous iron-containing heme groups per hemoglobin molecule. When all the heme groups carry one O2 molecule each, the blood is said to be "saturated" with oxygen, and no further increase in the partial pressure of oxygen will meaningfully increase the oxygen concentration of the blood. Most of the carbon dioxide in the blood is carried as HCO3− ions in the plasma. However, the conversion of dissolved CO2 into HCO3− (through the addition of water) is too slow for the rate at which the blood circulates through the tissues on the one hand, and through the alveolar capillaries on the other. The reaction is therefore catalyzed by carbonic anhydrase, an enzyme inside the red blood cells. The reaction can go in either direction depending on the prevailing partial pressure of carbon dioxide. A small amount of carbon dioxide is carried on the protein portion of the hemoglobin molecules as carbamino groups. The total concentration of carbon dioxide (in the form of bicarbonate ions, dissolved CO2, and carbamino groups) in arterial blood (i.e. after it has equilibrated with the alveolar air) is about 26 mM (or 58 ml/100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml/100 ml blood). Other vertebrates Fish The dissolved oxygen content in fresh water is approximately 8–10 milliliters per liter compared to that of air, which is 210 milliliters per liter. Water is 800 times more dense than air and 100 times more viscous. Therefore, oxygen has a diffusion rate in air 10,000 times greater than in water. The use of sac-like lungs to remove oxygen from water would therefore not be efficient enough to sustain life. Rather than using lungs, gaseous exchange takes place across the surface of highly vascularized gills. Gills are specialised organs containing filaments, which further divide into lamellae. The lamellae contain capillaries that provide a large surface area and short diffusion distances, as their walls are extremely thin.
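Using only the round figures just quoted for the fish section, the sketch below compares how much water versus air would have to pass over a respiratory surface to supply the same volume of oxygen. The oxygen demand figure is purely illustrative, and the calculation optimistically assumes every millilitre of oxygen in the medium could be extracted, so the numbers indicate scale only.

```python
# Oxygen availability per litre of medium, using the round figures quoted above.
o2_per_litre = {"fresh water": 9.0, "air": 210.0}   # ml O2 per litre (water mid-range of 8-10)

demand_ml_per_min = 50.0   # purely illustrative oxygen demand of a small animal (ml O2/min)

for medium, content in o2_per_litre.items():
    litres_needed = demand_ml_per_min / content      # if every ml of O2 could be extracted
    print(f"{medium:>12}: {content:5.0f} ml O2/L -> at least "
          f"{litres_needed:5.1f} L of medium per minute")

ratio = o2_per_litre["air"] / o2_per_litre["fresh water"]
print(f"Roughly {ratio:.0f} times the volume of water must be processed for the same oxygen,")
print("before even considering that water is far denser and more viscous to pump.")
```

This is the quantitative reason, sketched crudely, why tidal sac-like lungs are a poor design for water breathing and why fish instead pass water one way over countercurrent gills.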
Gill rakers are found within the exchange system in order to filter out food, and keep the gills clean. Gills use a countercurrent flow system that increases the efficiency of oxygen-uptake (and waste gas loss). Oxygenated water is drawn in through the mouth and passes over the gills in one direction while blood flows through the lamellae in the opposite direction. This countercurrent maintains steep concentration gradients along the entire length of each capillary (see the diagram in the "Interaction with circulatory systems" section above). Oxygen is able to continually diffuse down its gradient into the blood, and the carbon dioxide down its gradient into the water. The deoxygenated water will eventually pass out through the operculum (gill cover). Although countercurrent exchange systems theoretically allow an almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood. Amphibians Amphibians have three main organs involved in gas exchange: the lungs, the skin, and the gills, which can be used singly or in a variety of different combinations. The relative importance of these structures differs according to the age, the environment and species of the amphibian. The skin of amphibians and their larvae are highly vascularised, leading to relatively efficient gas exchange when the skin is moist. The larvae of amphibians, such as the pre-metamorphosis tadpole stage of frogs, also have external gills. The gills are absorbed into the body during metamorphosis, after which the lungs will then take over. The lungs are usually simpler than in the other land vertebrates, with few internal septa and larger alveoli; however, toads, which spend more time on land, have a larger alveolar surface with more developed lungs. To increase the rate of gas exchange by diffusion, amphibians maintain the concentration gradient across the respiratory surface using a process called buccal pumping. The lower floor of the mouth is moved in a "pumping" manner, which can be observed by the naked eye. Reptiles All reptiles breathe using lungs. In squamates (the lizards and snakes) ventilation is driven by the axial musculature, but this musculature is also used during movement, so some squamates rely on buccal pumping to maintain gas exchange efficiency. Due to the rigidity of turtle and tortoise shells, significant expansion and contraction of the chest is difficult. Turtles and tortoises depend on muscle layers attached to their shells, which wrap around their lungs to fill and empty them. Some aquatic turtles can also pump water into a highly vascularised mouth or cloaca to achieve gas-exchange. Crocodiles have a structure similar to the mammalian diaphragm - the diaphragmaticus - but this muscle helps create a unidirectional flow of air through the lungs rather than a tidal flow: this is more similar to the air-flow seen in birds than that seen in mammals. During inhalation, the diaphragmaticus pulls the liver back, inflating the lungs into the space this creates. Air flows into the lungs from the bronchus during inhalation, but during exhalation, air flows out of the lungs into the bronchus by a different route: this one-way movement of gas is achieved by aerodynamic valves in the airways. Birds Birds have lungs but no diaphragm. They rely mostly on air sacs for ventilation. 
These air sacs do not play a direct role in gas exchange, but help to move air unidirectionally across the gas exchange surfaces in the lungs. During inhalation, fresh air is taken from the trachea down into the posterior air sacs and into the parabronchi which lead from the posterior air sacs into the lung. The air that enters the lungs joins the air which is already in the lungs, and is drawn forward across the gas exchanger into anterior air sacs. During exhalation, the posterior air sacs force air into the same parabronchi of the lungs, flowing in the same direction as during inhalation, allowing continuous gas exchange irrespective of the breathing cycle. Air exiting the lungs during exhalation joins the air being expelled from the anterior air sacs (both consisting of "spent air" that has passed through the gas exchanger) entering the trachea to be exhaled (Fig. 10). Selective bronchoconstriction at the various bronchial branch points ensures that the air does not ebb and flow through the bronchi during inhalation and exhalation, as it does in mammals, but follows the paths described above. The unidirectional airflow through the parabronchi exchanges respiratory gases with a crosscurrent blood flow (Fig. 9). The partial pressure of O2 in the parabronchioles declines along their length as O2 diffuses into the blood. The capillaries leaving the exchanger near the entrance of airflow take up more O2 than capillaries leaving near the exit end of the parabronchi. When the contents of all capillaries mix, the final partial pressure of O2 of the mixed pulmonary venous blood is higher than that of the exhaled air, but lower than that of the inhaled air. Plants Gas exchange in plants is dominated by the roles of carbon dioxide, oxygen and water vapor. Carbon dioxide is the only carbon source for autotrophic growth by photosynthesis, and when a plant is actively photosynthesising in the light, it will be taking up carbon dioxide, and losing water vapor and oxygen. At night, plants respire, and gas exchange partly reverses: water vapor is still lost (but to a smaller extent), but oxygen is now taken up and carbon dioxide released. Plant gas exchange occurs mostly through the leaves. Gas exchange between a leaf and the atmosphere occurs simultaneously through two pathways: 1) epidermal cells and cuticular waxes (usually referred to as the 'cuticle') which are always present at each leaf surface, and 2) stomata, which typically control the majority of the exchange. Gases enter the photosynthetic tissue of the leaf through dissolution onto the moist surface of the palisade and spongy mesophyll cells. The spongy mesophyll cells are loosely packed, allowing for an increased surface area, and consequently an increased rate of gas-exchange. Uptake of carbon dioxide necessarily results in some loss of water vapor, because both molecules enter and leave by the same stomata, so plants experience a gas exchange dilemma: gaining enough carbon dioxide without losing too much water. Therefore, water loss from other parts of the leaf is minimised by the waxy cuticle on the leaf's epidermis. The size of a stoma is regulated by the opening and closing of its two guard cells: the turgidity of these cells determines the state of the stomatal opening, and this itself is regulated by water stress. Plants showing crassulacean acid metabolism are drought-tolerant xerophytes and perform almost all their gas-exchange at night, because it is only during the night that these plants open their stomata.
By opening the stomata only at night, the water vapor loss associated with carbon dioxide uptake is minimised. However, this comes at the cost of slow growth: the plant has to store the carbon dioxide in the form of malic acid for use during the day, and it cannot store unlimited amounts. Gas exchange measurements are important tools in plant science: this typically involves sealing the plant (or part of a plant) in a chamber and measuring changes in the concentration of carbon dioxide and water vapour with an infrared gas analyzer. If the environmental conditions (humidity, carbon dioxide concentration, light and temperature) are fully controlled, the measurements of carbon dioxide uptake and water release reveal important information about the assimilation and transpiration rates. The intercellular carbon dioxide concentration reveals important information about the photosynthetic condition of the plants. Simpler methods can be used in specific circumstances: hydrogencarbonate indicator can be used to monitor the consumption of carbon dioxide in a solution containing a single plant leaf at different levels of light intensity, and oxygen generation by the pondweed Elodea can be measured by simply collecting the gas in a submerged test-tube containing a small piece of the plant. Invertebrates The mechanism of gas exchange in invertebrates depends on their size, feeding strategy, and habitat (aquatic or terrestrial). The sponges (Porifera) are sessile creatures, meaning they are unable to move on their own and normally remain attached to their substrate. They obtain nutrients through the flow of water across their cells, and they exchange gases by simple diffusion across their cell membranes. Pores called ostia draw water into the sponge and the water is subsequently circulated through the sponge by cells called choanocytes, which have hair-like structures that move the water through the sponge. The cnidarians include corals, sea anemones, jellyfish and hydras. These animals are always found in aquatic environments, ranging from fresh water to salt water. They do not have any dedicated respiratory organs; instead, every cell in their body can absorb oxygen from the surrounding water, and release waste gases to it. One key disadvantage of this feature is that cnidarians can die in environments where water is stagnant, as they deplete the water of its oxygen supply. Corals often form symbioses with other organisms, particularly photosynthetic dinoflagellates. In this symbiosis, the coral provides shelter and the other organism provides nutrients to the coral, including oxygen. The roundworms (Nematoda), flatworms (Platyhelminthes), and many other small invertebrate animals living in aquatic or otherwise wet habitats do not have a dedicated gas-exchange surface or circulatory system. They instead rely on diffusion of oxygen and carbon dioxide directly across their cuticle. The cuticle is the semi-permeable outermost layer of their bodies. Other aquatic invertebrates, such as most molluscs (Mollusca) and larger crustaceans (Crustacea) such as lobsters, have gills analogous to those of fish, which operate in a similar way. Unlike the invertebrate groups mentioned so far, insects are usually terrestrial, and exchange gases across a moist surface in direct contact with the atmosphere, rather than in contact with surrounding water. The insect's exoskeleton is impermeable to gases, including water vapor, so they have a more specialised gas exchange system, requiring gases to be directly transported to the tissues via a complex network of tubes.
This respiratory system is separated from their circulatory system. Gases enter and leave the body through openings called spiracles, located laterally along the thorax and abdomen. Similar to plants, insects are able to control the opening and closing of these spiracles, but instead of relying on turgor pressure, they rely on muscle contractions. These contractions result in an insect's abdomen being pumped in and out. The spiracles are connected to tubes called tracheae, which branch repeatedly and ramify into the insect's body. These branches terminate in specialised tracheole cells which provide a thin, moist surface for efficient gas exchange directly with cells. The other main group of terrestrial arthropods, the arachnids (spiders, scorpions, mites, and their relatives), typically perform gas exchange with a book lung.
Evolutionarily stable strategy
An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy (or set of strategies) which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science. In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change). History Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it. Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author. The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour. Uses of ESS: The ESS was a major element used to analyze evolution in Richard Dawkins' bestselling 1976 book The Selfish Gene. The ESS was first used in the social sciences by Robert Axelrod in his 1984 book The Evolution of Cooperation. Since then, it has been widely used in the social sciences, including anthropology, economics, philosophy, and political science. In the social sciences, the primary interest is not in an ESS as the end of biological evolution, but as an end point in cultural evolution or individual learning. In evolutionary psychology, ESS is used primarily as a model for human biological evolution. Motivation The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). 
These assumptions are then used to explain why players choose Nash equilibrium strategies. Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives. Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes. Nash equilibrium An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if, for both players and for any strategy T: E(S,S) ≥ E(T,S). In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS. Maynard Smith and Price specify two conditions for a strategy S to be an ESS: for all T≠S, either E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T. There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that, for all T≠S: E(S,S) ≥ E(T,S), and E(S,T) > E(T,T). In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second. In words: the payoff to the first player when both players play strategy S is higher than (or equal to) the payoff the first player would receive by switching to another strategy T while the second player keeps playing S; and the payoff to the first player when only the opponent switches to T is higher than the first player's payoff when both players switch to T. This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.
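To make the two Maynard Smith–Price conditions concrete before turning to the examples, here is a small Python sketch that checks whether a pure strategy is an ESS of a symmetric two-player game given its payoff matrix. The payoff values used in the demonstration are invented for illustration and are not the matrices of any particular named game; the hawk-dove result at the end assumes the standard parameterization with benefit V and cost C.

```python
# Check the Maynard Smith / Price ESS conditions for a pure strategy S in a
# symmetric game. payoff[a][b] = E(a, b): the payoff to a player using strategy
# a against an opponent using strategy b. Demonstration payoffs are invented.

def is_ess(payoff, s):
    for t in payoff:
        if t == s:
            continue
        if payoff[s][s] > payoff[t][s]:
            continue                                   # condition 1: strict Nash against T
        if payoff[s][s] == payoff[t][s] and payoff[s][t] > payoff[t][t]:
            continue                                   # condition 2: S does better against T
        return False
    return True

# Invented symmetric payoff matrix with two strategies: A is neutral against
# itself versus B, but B does better in a B-population, so only B is an ESS.
example = {
    "A": {"A": 2, "B": 1},
    "B": {"A": 2, "B": 2},
}

for strategy in example:
    print(strategy, "is an ESS:", is_ess(example, strategy))

# For the hawk-dove game with benefit V < cost C (values assumed here), neither
# pure strategy is an ESS; the standard result is that the mixed strategy
# "play Hawk with probability V/C" is evolutionarily stable.
V, C = 2.0, 4.0
print("Hawk-dove mixed ESS (assumed V=2, C=4): play Hawk with probability", V / C)
```

The checker simply walks every alternative strategy T and applies the two conditions in turn, which is all the formal definition requires for pure strategies.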
Examples of differences between Nash equilibria and ESSes In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS. Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B). Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than does D. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS. Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the Game of chicken. There are two pure strategy Nash equilibria in this game (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay are ESSes. There is a third Nash equilibrium, a mixed strategy which is an ESS for this game (see Hawk-dove game and Best response for explanation). This last example points to an important difference between Nash equilibria and ESS. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESS are defined in terms of strategies themselves. The equilibria defined by ESS must always be symmetric, and thus have fewer equilibrium points. Vs. evolutionarily stable state In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations. In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade. Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory. In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical system, or evolutionary game theory. This is now called convergent stability. B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS. Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic. Stochastic ESS In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist. 
In an infinite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p, as illustrated by the evolution of bet-hedging. Prisoner's dilemma A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better, each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans. Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and Defect with Defect. If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect, so the mutant's share of the population will be kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them. If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate, and in favour of Tit-for-Tat. This is because cooperating yields a lower payoff than defecting when the opponent defects. This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives. Human behavior The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies. Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.
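Returning to the iterated prisoner's dilemma discussed above, the following toy simulation pits Tit for Tat against Always Defect over repeated rounds and then compares their average payoffs in a population that is mostly Tit for Tat. The payoff values (3, 5, 1, 0) are conventional illustrative choices, and the round count and population share are arbitrary assumptions; this is a sketch of the argument, not a model from the ESS literature.

```python
# Toy iterated prisoner's dilemma: Tit for Tat vs Always Defect.
# Payoffs (illustrative, conventional): mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):          # copy the opponent's previous move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a / rounds, score_b / rounds   # average payoff per round

tft_vs_tft, _ = play(tit_for_tat, tit_for_tat)
tft_vs_ad, ad_vs_tft = play(tit_for_tat, always_defect)
ad_vs_ad, _ = play(always_defect, always_defect)

# Average payoff of each strategy in a population that is mostly Tit for Tat.
share_tft = 0.95                            # assumed share of Tit-for-Tat players
tft_payoff = share_tft * tft_vs_tft + (1 - share_tft) * tft_vs_ad
ad_payoff = share_tft * ad_vs_tft + (1 - share_tft) * ad_vs_ad
print(f"Mostly-TFT population: TFT earns {tft_payoff:.2f}, Always Defect earns {ad_payoff:.2f}")
```

Under these assumptions the rare Always Defect mutant earns far less than the resident Tit for Tat players, which is the intuition behind the claim above that its share of the population stays small.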
Nitrogen cycle
The nitrogen cycle is the biogeochemical cycle by which nitrogen is converted into multiple chemical forms as it circulates among atmospheric, terrestrial, and marine ecosystems. The conversion of nitrogen can be carried out through both biological and physical processes. Important processes in the nitrogen cycle include fixation, ammonification, nitrification, and denitrification. The majority of Earth's atmosphere (78%) is atmospheric nitrogen, making it the largest source of nitrogen. However, atmospheric nitrogen has limited availability for biological use, leading to a scarcity of usable nitrogen in many types of ecosystems. The nitrogen cycle is of particular interest to ecologists because nitrogen availability can affect the rate of key ecosystem processes, including primary production and decomposition. Human activities such as fossil fuel combustion, use of artificial nitrogen fertilizers, and release of nitrogen in wastewater have dramatically altered the global nitrogen cycle. Human modification of the global nitrogen cycle can negatively affect the natural environment and also human health. Processes Nitrogen is present in the environment in a wide variety of chemical forms including organic nitrogen, ammonium, nitrite, nitrate, nitrous oxide, nitric oxide (NO) and inorganic nitrogen gas (N2). Organic nitrogen may be in the form of a living organism, humus or the intermediate products of organic matter decomposition. The processes in the nitrogen cycle transform nitrogen from one form to another. Many of those processes are carried out by microbes, either in their effort to harvest energy or to accumulate nitrogen in a form needed for their growth. For example, the nitrogenous wastes in animal urine are broken down by nitrifying bacteria in the soil to be used by plants. The diagram alongside shows how these processes fit together to form the nitrogen cycle. Nitrogen fixation The conversion of nitrogen gas into nitrates and nitrites through atmospheric, industrial and biological processes is called nitrogen fixation. Atmospheric nitrogen must be processed, or "fixed", into a usable form to be taken up by plants. Between 5 and 10 billion kg per year are fixed by lightning strikes, but most fixation is done by free-living or symbiotic bacteria known as diazotrophs. These bacteria have the nitrogenase enzyme that combines gaseous nitrogen with hydrogen to produce ammonia, which is converted by the bacteria into other organic compounds. Most biological nitrogen fixation occurs by the activity of molybdenum (Mo)-nitrogenase, found in a wide variety of bacteria and some Archaea. Mo-nitrogenase is a complex two-component enzyme that has multiple metal-containing prosthetic groups. An example of a free-living nitrogen-fixing bacterium is Azotobacter. Symbiotic nitrogen-fixing bacteria such as Rhizobium usually live in the root nodules of legumes (such as peas, alfalfa, and locust trees). Here they form a mutualistic relationship with the plant, producing ammonia in exchange for carbohydrates. Because of this relationship, legumes will often increase the nitrogen content of nitrogen-poor soils. A few non-legumes can also form such symbioses. Today, about 30% of the total fixed nitrogen is produced industrially using the Haber-Bosch process, which uses high temperatures and pressures to convert nitrogen gas and a hydrogen source (natural gas or petroleum) into ammonia. Assimilation Plants can absorb nitrate or ammonium from the soil by their root hairs.
If nitrate is absorbed, it is first reduced to nitrite ions and then ammonium ions for incorporation into amino acids, nucleic acids, and chlorophyll. In plants that have a symbiotic relationship with rhizobia, some nitrogen is assimilated in the form of ammonium ions directly from the nodules. It is now known that there is a more complex cycling of amino acids between Rhizobia bacteroids and plants. The plant provides amino acids to the bacteroids so ammonia assimilation is not required, and the bacteroids pass amino acids (with the newly fixed nitrogen) back to the plant, thus forming an interdependent relationship. While many animals, fungi, and other heterotrophic organisms obtain nitrogen by ingestion of amino acids, nucleotides, and other small organic molecules, other heterotrophs (including many bacteria) are able to utilize inorganic compounds, such as ammonium, as sole N sources. Utilization of various N sources is carefully regulated in all organisms. Ammonification When a plant or animal dies or an animal expels waste, the initial form of nitrogen is organic. Bacteria or fungi convert the organic nitrogen within the remains back into ammonium, a process called ammonification or mineralization. Enzymes involved are: GS, Gln synthetase (cytosolic & plastidic); GOGAT, Glu 2-oxoglutarate aminotransferase (ferredoxin- & NADH-dependent); and GDH, Glu dehydrogenase, which plays a minor role in ammonium assimilation but is important in amino acid catabolism. Nitrification The conversion of ammonium to nitrate is performed primarily by soil-living bacteria and other nitrifying bacteria. In the primary stage of nitrification, the oxidation of ammonium is performed by bacteria such as Nitrosomonas species, which convert ammonia to nitrites. Other bacterial species, such as Nitrobacter, are responsible for the oxidation of the nitrites into nitrates. It is important for the ammonia to be converted to nitrates or nitrites because ammonia gas is toxic to plants. Due to their very high solubility and because soils are largely unable to retain anions, nitrates can enter groundwater. Elevated nitrate in groundwater is a concern for drinking water use because nitrate can interfere with blood-oxygen levels in infants and cause methemoglobinemia or blue-baby syndrome. Where groundwater recharges stream flow, nitrate-enriched groundwater can contribute to eutrophication, a process that leads to high algal populations and growth, especially of blue-green algae. While not directly toxic to fish life in the way ammonia is, nitrate can have indirect effects on fish if it contributes to this eutrophication. Nitrogen has contributed to severe eutrophication problems in some water bodies. Since 2006, the application of nitrogen fertilizer has been increasingly controlled in Britain and the United States. This is occurring along the same lines as control of phosphorus fertilizer, restriction of which is normally considered essential to the recovery of eutrophied waterbodies. Denitrification Denitrification is the reduction of nitrates back into nitrogen gas, completing the nitrogen cycle. This process is performed by bacterial species such as Pseudomonas and Paracoccus, under anaerobic conditions. They use the nitrate as an electron acceptor in the place of oxygen during respiration. These facultatively (meaning optionally) anaerobic bacteria can also live in aerobic conditions. Denitrification happens in anaerobic conditions, e.g. in waterlogged soils.
The denitrifying bacteria use nitrates in the soil to carry out respiration and consequently produce nitrogen gas, which is inert and unavailable to plants. Denitrification occurs in free-living microorganisms as well as obligate symbionts of anaerobic ciliates. Dissimilatory nitrate reduction to ammonium Dissimilatory nitrate reduction to ammonium (DNRA), or nitrate/nitrite ammonification, is an anaerobic respiration process. Microbes which undertake DNRA oxidise organic matter and use nitrate as an electron acceptor, reducing it to nitrite, then ammonium. Both denitrifying and nitrate ammonification bacteria will be competing for nitrate in the environment, although DNRA acts to conserve bioavailable nitrogen as soluble ammonium rather than producing dinitrogen gas. Anaerobic ammonia oxidation The ANaerobic AMMonia OXidation process is also known as the ANAMMOX process, an abbreviation coined by joining the first syllables of each of these three words. This biological process is a redox comproportionation reaction, in which ammonia (the reducing agent giving electrons) and nitrite (the oxidizing agent accepting electrons) transfer three electrons and are converted into one molecule of diatomic nitrogen gas and two water molecules. This process makes up a major proportion of nitrogen conversion in the oceans. The stoichiometrically balanced equation for the ANAMMOX reaction can be written as follows, with ammonia represented by its conjugate acid, the ammonium ion: NH4+ + NO2− → N2 + 2 H2O (ΔG° < 0). This is an exergonic process (here also an exothermic reaction) releasing energy, as indicated by the negative value of ΔG°, the difference in Gibbs free energy between the products of reaction and the reagents. Other processes Though nitrogen fixation is the primary source of plant-available nitrogen in most ecosystems, in areas with nitrogen-rich bedrock, the breakdown of this rock also serves as a nitrogen source. Nitrate reduction is also part of the iron cycle: under anoxic conditions, Fe(II) can donate an electron to nitrate and is oxidized to Fe(III), while the nitrate is reduced to nitrite, nitrous oxide, dinitrogen or ammonium, depending on the conditions and microbial species involved. The fecal plumes of cetaceans also act as a junction in the marine nitrogen cycle, concentrating nitrogen in the epipelagic zones of ocean environments before its dispersion through various marine layers, ultimately enhancing oceanic primary productivity. Marine nitrogen cycle The nitrogen cycle is an important process in the ocean as well. While the overall cycle is similar, there are different players and modes of transfer for nitrogen in the ocean. Nitrogen enters the water through precipitation, runoff, or as N2 from the atmosphere. Nitrogen cannot be utilized by phytoplankton as N2, so it must undergo nitrogen fixation, which is performed predominantly by cyanobacteria. Without supplies of fixed nitrogen entering the marine cycle, the fixed nitrogen would be used up in about 2000 years. Phytoplankton need nitrogen in biologically available forms for the initial synthesis of organic matter. Ammonia and urea are released into the water by excretion from plankton. Nitrogen sources are removed from the euphotic zone by the downward movement of the organic matter. This can occur from sinking of phytoplankton, vertical mixing, or sinking of waste of vertical migrators. The sinking results in ammonia being introduced at lower depths below the euphotic zone.
Bacteria are able to convert ammonia to nitrite and nitrate, but they are inhibited by light, so this must occur below the euphotic zone. Ammonification or mineralization is performed by bacteria to convert organic nitrogen to ammonia. Nitrification can then occur to convert the ammonium to nitrite and nitrate. Nitrate can be returned to the euphotic zone by vertical mixing and upwelling, where it can be taken up by phytoplankton to continue the cycle. N2 can be returned to the atmosphere through denitrification. Ammonium is thought to be the preferred source of fixed nitrogen for phytoplankton because its assimilation does not involve a redox reaction and therefore requires little energy. Nitrate requires a redox reaction for assimilation but is more abundant, so most phytoplankton have adapted to have the enzymes necessary to undertake this reduction (nitrate reductase). There are a few notable and well-known exceptions, including most Prochlorococcus and some Synechococcus, that can only take up nitrogen as ammonium. The nutrients in the ocean are not uniformly distributed. Areas of upwelling provide supplies of nitrogen from below the euphotic zone. Coastal zones provide nitrogen from runoff, and upwelling occurs readily along the coast. However, the rate at which nitrogen can be taken up by phytoplankton is decreased in oligotrophic waters year-round and in temperate waters in summer, resulting in lower primary production. The distribution of the different forms of nitrogen varies throughout the oceans as well. Nitrate is depleted in near-surface water except in upwelling regions. Coastal upwelling regions usually have high nitrate and chlorophyll levels as a result of the increased production. However, there are regions of high surface nitrate but low chlorophyll that are referred to as HNLC (high nitrogen, low chlorophyll) regions. The best explanation for HNLC regions relates to iron scarcity in the ocean, which may play an important part in ocean dynamics and nutrient cycles. The input of iron varies by region and is delivered to the ocean by dust (from dust storms) and leached out of rocks. Iron is under consideration as the true limiting element to ecosystem productivity in the ocean. Ammonium and nitrite show a maximum concentration at 50–80 m (the lower end of the euphotic zone), with decreasing concentration below that depth. This distribution can be accounted for by the fact that nitrite and ammonium are intermediate species. They are both rapidly produced and consumed through the water column. The amount of ammonium in the ocean is about 3 orders of magnitude less than nitrate. Between ammonium, nitrite, and nitrate, nitrite has the fastest turnover rate. It can be produced during nitrate assimilation, nitrification, and denitrification; however, it is immediately consumed again. New vs. regenerated nitrogen Nitrogen entering the euphotic zone is referred to as new nitrogen because it is newly arrived from outside the productive layer. The new nitrogen can come from below the euphotic zone or from outside sources. Outside sources are upwelling from deep water and nitrogen fixation. If the organic matter is eaten, respired, delivered to the water as ammonia, and re-incorporated into organic matter by phytoplankton, it is considered recycled/regenerated production. New production is an important component of the marine environment. One reason is that only continual input of new nitrogen can determine the total capacity of the ocean to produce a sustainable fish harvest.
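Oceanographers often summarise the split between new and regenerated production described above with a single number, the f-ratio: new production divided by total production. The term and the example uptake values below are supplementary illustration rather than material from the text.

```python
# Illustrative only: the "f-ratio" commonly used in oceanography to summarise
# the split described above (new vs. regenerated production). The uptake
# numbers are arbitrary example values, not data from the text.

def f_ratio(new_production, regenerated_production):
    """New production (e.g. nitrate/N2-supported) over total production."""
    total = new_production + regenerated_production
    return new_production / total

# Example: 30 units of nitrate-supported uptake vs 70 units of ammonium-supported uptake
print(round(f_ratio(30.0, 70.0), 2))  # 0.3 -> 30% of production is exportable "new" production
```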
Harvesting fish from regenerated nitrogen areas will lead to a decrease in nitrogen and therefore a decrease in primary production. This will have a negative effect on the system. However, if fish are harvested from areas of new nitrogen, the nitrogen will be replenished. Future acidification As illustrated by the diagram on the right, additional carbon dioxide absorbed by the ocean reacts with water to form carbonic acid, which breaks down into bicarbonate and hydrogen ions (gray arrow); this reduces bioavailable carbonate and decreases ocean pH (black arrow). This is likely to enhance nitrogen fixation by diazotrophs (gray arrow), which utilize hydrogen ions to convert nitrogen into bioavailable forms such as ammonia and ammonium ions. However, as pH decreases and more ammonia is converted to ammonium ions (gray arrow), there is less oxidation of ammonia to nitrite (NO2−), resulting in an overall decrease in nitrification and denitrification (black arrows). This in turn would lead to a further build-up of fixed nitrogen in the ocean, with the potential consequence of eutrophication. Gray arrows represent an increase while black arrows represent a decrease in the associated process. Human influences on the nitrogen cycle As a result of extensive cultivation of legumes (particularly soy, alfalfa, and clover), growing use of the Haber–Bosch process in the production of chemical fertilizers, and pollution emitted by vehicles and industrial plants, human beings have more than doubled the annual transfer of nitrogen into biologically available forms. In addition, humans have significantly contributed to the transfer of nitrogen trace gases from Earth to the atmosphere and from the land to aquatic systems. Human alterations to the global nitrogen cycle are most intense in developed countries and in Asia, where vehicle emissions and industrial agriculture are highest. Generation of reactive nitrogen (Nr) has increased more than 10-fold in the past century due to global industrialisation. This form of nitrogen follows a cascade through the biosphere via a variety of mechanisms, and is accumulating as the rate of its generation is greater than the rate of denitrification. Nitrous oxide (N2O) has risen in the atmosphere as a result of agricultural fertilization, biomass burning, cattle and feedlots, and industrial sources. N2O has deleterious effects in the stratosphere, where it breaks down and acts as a catalyst in the destruction of atmospheric ozone. Nitrous oxide is also a greenhouse gas and is currently the third largest contributor to global warming, after carbon dioxide and methane. While not as abundant in the atmosphere as carbon dioxide, it is, for an equivalent mass, nearly 300 times more potent in its ability to warm the planet. Ammonia (NH3) in the atmosphere has tripled as the result of human activities. It is a reactant in the atmosphere, where it acts as an aerosol, decreasing air quality and clinging to water droplets, eventually resulting in nitric acid (HNO3) that produces acid rain. Atmospheric ammonia and nitric acid also damage respiratory systems. The very high temperature of lightning naturally produces small amounts of NOx, NH3, and HNO3, but high-temperature combustion has contributed to a 6- or 7-fold increase in the flux of NOx to the atmosphere. Its production is a function of combustion temperature: the higher the temperature, the more NOx is produced. Fossil fuel combustion is a primary contributor, but so are biofuels and even the burning of hydrogen.
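The point that NOx formation climbs steeply with combustion temperature can be illustrated with a toy Arrhenius-style rate, in the spirit of the thermal (Zeldovich) NOx mechanism. The activation energy below is an assumed round number, not a parameter from the text, so the output should be read only as "a few hundred kelvin matters a lot", not as a prediction.

```python
# Illustrative only: why NOx formation is so sensitive to peak combustion
# temperature. This is a toy Arrhenius-style rate in the spirit of the
# thermal-NOx (Zeldovich) mechanism; the activation energy used here is an
# assumed round number, not a value taken from the text.
import math

R = 8.314       # J mol^-1 K^-1
E_A = 3.0e5     # assumed effective activation energy, J mol^-1

def relative_rate(temp_k, ref_k=1800.0):
    """Rate of thermal NOx formation relative to a reference temperature."""
    return math.exp(-E_A / (R * temp_k)) / math.exp(-E_A / (R * ref_k))

for t in (1800.0, 2000.0, 2200.0):
    print(f"{t:.0f} K: ~{relative_rate(t):.0f}x the 1800 K rate")
# A few hundred kelvin of extra peak temperature multiplies the rate many times over.
```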
However, the rate at which hydrogen is directly injected into the combustion chambers of internal combustion engines can be controlled to prevent the higher combustion temperatures that produce NOx. Ammonia and nitrogen oxides actively alter atmospheric chemistry. They are precursors of tropospheric (lower atmosphere) ozone production, which contributes to smog and acid rain, damages plants and increases nitrogen inputs to ecosystems. Ecosystem processes can increase with nitrogen fertilization, but anthropogenic input can also result in nitrogen saturation, which weakens productivity and can damage the health of plants, animals, fish, and humans. Decreases in biodiversity can also result if higher nitrogen availability increases nitrogen-demanding grasses, causing a degradation of nitrogen-poor, species-diverse heathlands. Consequence of human modification of the nitrogen cycle Impacts on natural systems Increasing levels of nitrogen deposition are shown to have several adverse effects on both terrestrial and aquatic ecosystems. Nitrogen gases and aerosols can be directly toxic to certain plant species, affecting the aboveground physiology and growth of plants near large point sources of nitrogen pollution. Changes to plant species may also occur as nitrogen compound accumulation increases availability in a given ecosystem, eventually changing the species composition, plant diversity, and nitrogen cycling. Ammonia and ammonium – two reduced forms of nitrogen – can be detrimental over time due to increased toxicity toward sensitive species of plants, particularly those that are accustomed to using nitrate as their source of nitrogen, causing poor development of their roots and shoots. Increased nitrogen deposition also leads to soil acidification, which increases base cation leaching in the soil and amounts of aluminum and other potentially toxic metals, along with decreasing the amount of nitrification occurring and increasing plant-derived litter. Due to the ongoing changes caused by high nitrogen deposition, an environment's susceptibility to ecological stress and disturbance – such as pests and pathogens – may increase, thus making it less resilient to situations that otherwise would have little impact on its long-term vitality. Additional risks posed by increased availability of inorganic nitrogen in aquatic ecosystems include water acidification; eutrophication of fresh and saltwater systems; and toxicity issues for animals, including humans. Eutrophication often leads to lower dissolved oxygen levels in the water column, including hypoxic and anoxic conditions, which can cause death of aquatic fauna. Relatively sessile benthos, or bottom-dwelling creatures, are particularly vulnerable because of their lack of mobility, though large fish kills are not uncommon. Oceanic dead zones near the mouth of the Mississippi in the Gulf of Mexico are a well-known example of algal bloom-induced hypoxia. The New York Adirondack Lakes, Catskills, Hudson Highlands, Rensselaer Plateau and parts of Long Island display the impact of nitric acid rain deposition, resulting in the killing of fish and many other aquatic species. Ammonia is highly toxic to fish, and the level of ammonia discharged from wastewater treatment facilities must be closely monitored. Nitrification via aeration before discharge is often desirable to prevent fish deaths. Land application can be an attractive alternative to aeration.
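Ammonia toxicity to fish is usually assessed through the un-ionised NH3 fraction, which rises sharply with pH; the same acid-base speciation underlies the earlier remark that a lower pH shifts ammonia towards ammonium. The sketch below applies the Henderson-Hasselbalch relation with a pKa of about 9.25 for NH4+; both the pKa and the framing in terms of the un-ionised fraction are standard water-quality practice rather than figures from the text.

```python
# Illustrative only: fraction of total ammonia present as un-ionised NH3 (the
# form generally considered most toxic to fish) at a given pH, using the
# Henderson-Hasselbalch relation. The pKa of ~9.25 for NH4+ is a standard
# textbook value and is an assumption here, not a figure from the text.

def fraction_nh3(ph, pka=9.25):
    """Fraction of (NH3 + NH4+) present as un-ionised NH3 at the given pH."""
    ratio = 10 ** (ph - pka)     # [NH3] / [NH4+]
    return ratio / (1.0 + ratio)

for ph in (7.0, 7.8, 8.2):
    print(f"pH {ph}: {fraction_nh3(ph) * 100:.1f}% NH3")
# The NH3 share grows quickly as pH rises, and shrinks as pH falls
# (more nitrogen held as NH4+), which is why pH matters for discharge limits.
```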
Impacts on human health: nitrate accumulation in drinking water Leakage of Nr (reactive nitrogen) from human activities can cause nitrate accumulation in the natural water environment, which can create harmful impacts on human health. Excessive use of N-fertilizer in agriculture has been a significant source of nitrate pollution in groundwater and surface water. Due to its high solubility and low retention by soil, nitrate can easily escape from the subsoil layer to the groundwater, causing nitrate pollution. Some other non-point sources for nitrate pollution in groundwater originate from livestock feeding, animal and human contamination, and municipal and industrial waste. Since groundwater often serves as the primary domestic water supply, nitrate pollution can be extended from groundwater to surface and drinking water during potable water production, especially for small community water supplies, where poorly regulated and unsanitary waters are used. The WHO standard for nitrate in drinking water is 50 mg L−1 for short-term exposure and 3 mg L−1 for chronic effects. Once it enters the human body, nitrate can react with organic compounds through nitrosation reactions in the stomach to form nitrosamines and nitrosamides, which are involved in some types of cancers (e.g., oral cancer and gastric cancer). Impacts on human health: air quality Human activities have also dramatically altered the global nitrogen cycle by producing nitrogenous gases associated with global atmospheric nitrogen pollution. There are multiple sources of atmospheric reactive nitrogen (Nr) fluxes. Agricultural sources of reactive nitrogen can produce atmospheric emission of ammonia, nitrogen oxides and nitrous oxide. Combustion processes in energy production, transportation, and industry can also form new reactive nitrogen via the emission of NOx, an unintentional waste product. When those reactive nitrogen species are released into the lower atmosphere, they can induce the formation of smog, particulate matter, and aerosols, all of which are major contributors to adverse effects on human health from air pollution. In the atmosphere, NO2 can be oxidized to nitric acid, and it can further react with NH3 to form ammonium nitrate, which facilitates the formation of particulate nitrate. Moreover, NH3 can react with other acid gases (sulfuric and hydrochloric acids) to form ammonium-containing particles, which are the precursors of the secondary organic aerosol particles in photochemical smog.
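Nitrate limits are reported either as the nitrate ion (the convention behind the WHO figure quoted above) or as nitrate-nitrogen (NO3-N), and mixing up the two conventions is a common source of error. The sketch below converts between them using standard molar masses and checks a made-up sample value against the 50 mg L−1 short-term guideline.

```python
# Illustrative only: converting between "nitrate as NO3-" (the convention used
# for the WHO guideline quoted above) and "nitrate-nitrogen" (NO3-N), and
# checking a sample against the 50 mg L-1 limit. The sample value is made up.

M_NO3 = 62.0   # g/mol, nitrate ion
M_N = 14.0     # g/mol, nitrogen

def no3_to_no3n(mg_per_l_no3):
    """Convert a concentration expressed as NO3- to the equivalent NO3-N."""
    return mg_per_l_no3 * (M_N / M_NO3)

WHO_SHORT_TERM_LIMIT = 50.0   # mg L-1 as NO3-
sample = 44.3                 # hypothetical measurement, mg L-1 as NO3-
print(f"{sample} mg/L NO3- = {no3_to_no3n(sample):.1f} mg/L NO3-N")
print("Within WHO short-term limit:", sample <= WHO_SHORT_TERM_LIMIT)
```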
Transhumance
Transhumance is a type of pastoralism or nomadism, a seasonal movement of livestock between fixed summer and winter pastures. In montane regions (vertical transhumance), it implies movement between higher pastures in summer and lower valleys in winter. Herders have a permanent home, typically in valleys. Generally only the herds travel, with a certain number of people necessary to tend them, while the main population stays at the base. In horizontal transhumance, by contrast, herds move between seasonal pastures at broadly similar altitudes; this form is more susceptible to being disrupted by climatic, economic, or political change. Traditional or fixed transhumance has occurred throughout the inhabited world, particularly Europe and western Asia. It is often important to pastoralist societies, as the dairy products of transhumance flocks and herds (milk, butter, yogurt and cheese) may form much of the diet of such populations. In many languages there are words for the higher summer pastures, and frequently these words have been used as place names: e.g. hafod in Wales and shieling in Scotland, or alp in Germany, Austria and German-speaking regions of Switzerland. Etymology and definition The word transhumance comes from French and derives from the Latin trans ("across") and humus ("ground"), literally meaning "crossing the land". Transhumance developed on every inhabited continent. Although there are substantial cultural and technological variations, the underlying practices for taking advantage of remote seasonal pastures are broadly similar. Khazanov categorizes nomadic forms of pastoralism into five groups as follows: "pure pastoral nomadism", "semi-nomadic pastoralism", "semi-sedentary pastoralism", "distant-pastures husbandry" and "seasonal transhumance". Eickelman does not make a distinction between transhumant pastoralism and seminomadism, but he clearly distinguishes between nomadic pastoralism and seminomadism. In prehistory There is evidence that transhumance was practised world-wide prior to recorded history: in Europe, isotope studies of livestock bones suggest that certain animals were moved seasonally. The prevalence of various groups of hill people around the world suggests that indigenous knowledge regarding transhumance must have developed and survived over generations to allow for the acquisition of sufficient skills to thrive in mountainous regions. Most drovers are conversant with subsistence agriculture and pastoralism, as well as with forestry and the management of frozen water and fast-flowing streams. Europe Alps Balkans In the Balkans, Albanians, Greek Sarakatsani, Eastern Romance (Romanians, Aromanians, Megleno-Romanians and Istro-Romanians) and Turkish Yörük peoples traditionally spent summer months in the mountains and returned to lower plains in the winter. When the area was part of the Austro-Hungarian and Ottoman Empires, borders between Greece, Albania, Bulgaria and the former Yugoslavia were relatively unobstructed. In summer, some groups went as far north as the Balkan Mountains, and they would spend the winter on warmer plains in the vicinity of the Aegean Sea. The Morlachs or Karavlachs were a population of Eastern Romance shepherds ("ancestors" of the Istro-Romanians) who lived in the Dinaric Alps (the western Balkans in modern use), constantly migrating in search of better pastures for their sheep flocks. But as nation states appeared in the area of the former Ottoman Empire, new state borders were developed that divided the summer and winter habitats of many of the pastoral groups.
These prevented easy movement across borders, particularly at times of war, which have been frequent. Poland In Poland the practice is called redyk: the ceremonial departure of shepherds with their flocks of sheep and shepherd dogs to graze in the mountain pastures (spring redyk), as well as their return from grazing (autumn redyk). In the local mountain dialect, the autumn redyk is called uosod, which comes from the Polish word "uosiadć" (ôsawiedź), meaning to return the sheep to individual farms. There is also a theory that it comes from the word uozchod (ôzchod), which means the separation of the sheep from individual gazdōwek (farms). In this word, there may have been a complete loss of the pronunciation of ch, which in the Podhale dialect (Poland) in positions of this type is pronounced as a barely audible h (so-called sonorous h), similarly to the word schować, which in the highlander dialect is pronounced as sowa. In the Memoir of the Tatra Society from 1876, the way in which this is done is described: "(...) they herd sheep from the entire village to one agreed place, give them to the shepherds one by one, then mix them together and count the number of head in the whole herd (flock), which is what they call "the reading". The reading is done in such a way that one juhas, holding the chaplet in his hand, puts one bead for each ten sheep counted. The second one takes one sheep from the flock and, as he lets it out of the fenced pen, counts: one, two, three, etc. up to ten, and after each ten he calls out: "desat"." The leading of the sheep was preceded by magical procedures that were supposed to protect them from bad fate and from being enchanted. For this purpose, bonfires were lit and the sheep were led through the fire. It began with the lighting of the "holy fire" in the kolyba (shepherd's hut). Custom dictated that from that day on, it was to be kept burning continuously by the head shepherd, the baca. Next, the sheep were led around a small fir or spruce tree stuck in the ground – the so-called mojka, which was supposed to symbolize the health and strength of everyone present on the hala – and they were fumigated with burning herbs and a połazzka brought to the sałasz. This was intended to cleanse them of diseases and prevent misfortune. Then the flock was herded around it three times, which was intended to concentrate the sheep into one group and prevent individual animals from escaping. The baca's task was to draw the sheep along behind him, helping himself with salt, which he sprinkled on the flock. With the help of dogs and whistles, the juhasi (assistant shepherds) encouraged the herd and made sure that the sheep followed the baca. Sheep that fell outside the circle boded ill: it was believed that as many sheep as fell outside the circle would die in the coming season. The stay in the pasture (hala) begins on St. Wojciech's Day (April 23) and ends on Michaelmas Day (September 29). This method of sheep grazing is a relic of transhumant agriculture, which was once very common in the Carpathians (Carpathian transhumant agriculture). In the pastoral culture of Poland, the redyk was perceived as the greatest village festival. Farmers who gave their sheep to a shepherd for the entire season listed them before grazing in the pastures (most often by marking them with notches on a stick or beam), marked them and placed them in a pen made of tynin.
The sheep of all the shepherds were gathered in one place at the foot of the mountains, and then one large herd was driven to the szalas. The entrance to the hala was also particularly emphasized: there was shooting, horn-blowing and shouting all the way. This was intended to drive away evil spirits from the animals and to keep the entire herd together. At the end of the ceremony, there was music and dancing together. The musicians played traditional instruments: gajdas and violins. To the accompaniment of the music, the Sałashniks performed the oldest individual dances – the owiedziok, the owczarza, the kolomajka, the swinszczok and the masztołka. Redyk included many local practices, rituals and celebrations. In modern times it is mainly a part of local traditional entertainment. The modern spring and autumn redyk (sheep drive) has the character of a folkloric spectacle addressed to locals and tourists, but also to the highlanders themselves, who wish to identify with their traditions. A common redyk was sometimes also organised in Czechia, Slovakia and Romania. In Poland, the organiser was the Transhumant Pastoral Foundation. Britain Wales In most parts of Wales, farm workers and sometimes the farmer would spend the summer months at a hillside summer house, or hafod, where the livestock would graze. During the late autumn the farm family and workers would drive the flocks down to the valleys and stay at the main house, or hendref. This system of transhumance has generally not been practised for almost a century; it continued in Snowdonia after it ceased elsewhere in Wales, and remnants of the practice can still be found in rural farming communities in the region to this day. Both "Hafod" and "Hendref" survive in Wales as place names and house names, and in one case as the name of a raw-milk cow's cheese (Hafod). Today, cattle and sheep that summer on many hill farms are still transported to lowland winter pastures, but by truck rather than being driven overland. Scotland In many hilly and mountainous areas of Scotland, agricultural workers spent summer months in bothies or shielings ( or in Scottish Gaelic). Major drovers' roads in the eastern part of Scotland include the Cairnamounth, Elsick Mounth and Causey Mounth. This practice has largely stopped but was practised within living memory in the Hebrides and in the Scottish Highlands. Today much transhumance is carried out by truck, with upland flocks being transported under agistment to lower-lying pasture during winter. England Evidence exists of transhumance being practised in England since at least medieval times, from Cornwall in the south-west through to the north of England. In the Lake District, hill sheep breeds such as the Herdwick and Swaledale are moved between moor and valley in summer and winter. This led to a trait and system known as "hefting", whereby sheep and flock remain in the farmer's allotted area (heaf) of the commons, which is still practised. Ireland In Ireland, transhumance is known as "booleying". Transhumance pastures were known as buaile, a term with several anglicised forms (as in "booleying" itself). These names survive in many place names such as Buaile h'Anraoi in Kilcommon parish, Erris, North Mayo, where the landscape still clearly shows the layout of the rundale system of agriculture.
The livestock, usually cattle, were moved from a permanent lowland village to summer pastures in the mountains. The appearance of "Summerhill" in many place names also bears witness to the practice. This transfer alleviated pressure on the growing crops and provided fresh pasture for the livestock. Mentioned in the Brehon Laws, booleying dates back to the Early Medieval period or even earlier. The practice was widespread in the west of Ireland up until the time of the Second World War. Seasonal migration of workers to Scotland and England for the winter months superseded this ancient system, together with more permanent emigration to the United States. Italy In Southern Italy, the practice of driving herds to hilly pasture in summer was also known in some parts of the southern regions, and it has a long documented history that lasted until the 1950s and 1960s and the advent of alternative road transport. Drovers' roads, or tratturi, up to wide and more than long, permitted the passage and grazing of herds, principally sheep, and attracted regulation by law and the establishment of a mounted police force as far back as the 17th century. The tratturi remain public property and subject to conservation by the law protecting cultural heritage. The Molise region has nominated the tratturi for UNESCO World Heritage status. Spain Transhumance has historically been widespread throughout much of Spain, particularly in the regions of Castile, Leon and Extremadura, where nomadic cattle and sheep herders travel long distances in search of greener pastures in summer and warmer climatic conditions in winter. Spanish transhumance is the origin of numerous related cultures in the Americas, such as the cowboys of the United States and the Gauchos of Argentina, Paraguay and Brazil. A network of droveways, or , crosses the whole peninsula, running mostly south-west to north-east. They have been charted since ancient times, and classified according to width; the standard is between wide, with some (meaning royal droveways) being wide at certain points. The land within the droveways is publicly owned and protected by law. In some high valleys of the Pyrenees and the Cantabrian Mountains, transhumant herding has been the main, or only, economic activity. Regulated passes and pasturage have been distributed among different valleys and communities according to the seasonal range of use and community jurisdiction. Unique social groups associated with the transhumant lifestyle are sometimes identified as a remnant of an older ethnic culture now surviving in isolated minorities, such as the "Pasiegos" in Cantabria, "Agotes" in Navarre, and "Vaqueiros de alzada" in Asturias and León. The Pyrenees Transhumance in the Pyrenees involves the relocation of livestock (cows, sheep, horses) to high mountains during the summer months, because farms in the lowland are too small to support a larger herd all year round. The mountain period starts in late May or early June and ends in early October. Until the 1970s, transhumance was used mainly for dairy cows, and cheese-making was an important activity in the summer months. In some regions, nearly all members of a family decamped to the higher mountains with their cows, living in rudimentary stone cabins for the summer grazing season. That system, which evolved during the Middle Ages, lasted into the 20th century. It declined and broke down under pressure from industrialisation, as people left the countryside for jobs in cities.
However, the importance of transhumance continues to be recognised through its celebration in popular festivals. The Mont Perdu / Monte Perdido region of the Pyrenees has been designated as a UNESCO World Heritage Site by virtue of its association with the transhumance system of agriculture. Scandinavian Peninsula In Scandinavia, transhumance is practised to a certain extent; however, livestock are transported between pastures by motorised vehicles, changing the character of the movement. The Sami people practice transhumance with reindeer by a different system than is described immediately below. The common mountain or forest pasture used for transhumance in summer is called or / . The same term is used for a related mountain cabin, which was used as a summer residence. In summer (usually late June), livestock is moved to a mountain farm, often quite distant from a home farm, to preserve meadows in valleys for producing hay. Livestock is typically tended during the summer by girls and younger women, who also milk and make cheese. Bulls usually remain at the home farm. As autumn approaches and grazing is exhausted, livestock is returned to the farm. In Sweden, this system was predominantly used in Värmland, Dalarna, Härjedalen, Jämtland, Hälsingland, Medelpad and Ångermanland. The practice was common throughout most of Norway, due to its highly mountainous nature and limited areas of lowland for cultivation. While previously many farms had their own seters, it is more usual for several farmers to share a modernised common seter. Most of the old seters have been left to decay or are used as recreational cabins. The name for the common mountain pasture in most Scandinavian languages derives from the Old Norse term . In Norwegian, the term is or ; in Swedish, . The place name appears in Sweden in several forms as and , and as a suffix: -, -, - and -. Those names appear extensively across Sweden with a centre in the Mälaren basin and in Östergötland. The surname "Satter" is derived from these words. In the heartland of the Swedish transhumance region, the most commonly used term is or (the word is also used for small storage houses and the like; it has evolved in English as booth); in modern Standard Swedish, . The oldest mention of in Norway is in Heimskringla, the saga of Olaf II of Norway's travel through Valldal to Lesja. Caucasus and northern Anatolia In the heavily forested Caucasus and Pontic mountain ranges, various peoples still practice transhumance to varying degrees. During the relatively short summer, wind from the Black Sea brings moist air up the steep valleys, which supports fertile grasslands at altitudes up to , and a rich tundra at altitudes up to . Traditionally, villages were divided into two, three or even four distinct settlements (one for each season) at different heights of a mountain slope. Much of this rural life came to an end during the first half of the 20th century, as the Kemalist and later Soviet governments tried to modernise the societies and stress urban development, rather than maintaining rural traditions. In the second half of the 20th century, migration for work from the Pontic mountains to cities in Turkey and western Europe, and from the northern Caucasus to Moscow, dramatically reduced the number of people living in transhumance. 
It is estimated, however, that tens of thousands of rural people still practice these traditions in villages on the northern and southwestern slopes of the Caucasus, in the lesser Caucasus in Armenia, and in the Turkish Black Sea region. Some communities continue to play out ancient migration patterns. For example, the Pontic Greeks visit the area and the monastery Sumela in the summer. Turks from cities in Europe have built a summer retreat on the former yayla grazing land. Transhumance related to sheep farming is still practised in Georgia. The shepherds with their flocks have to cross the high Abano Pass from the mountains of Tusheti to the plains of Kakheti. Up until the dissolution of Soviet Union they intensively used the Kizlyar plains of Northern Dagestan for the same purpose. Asia Afghanistan The central Afghan highlands of Afghanistan, which surround the Koh-i-Baba and continue eastward into the Hindu Kush range, there are very cold winters, and short and cool summers. These highlands have mountain pastures during summer, watered by many small streams and rivers. There are also pastures available during winter in the neighboring warm lowlands, which makes the region ideal for seasonal transhumance. The Afghan Highlands contain about of summer pasture, which is used by both settled communities and nomadic pastoralists like the Pashtun Kuchis. Major pastures in the region include the Nawur pasture in northern Ghazni Province (whose area is about 600 km2 at elevation of up to 3,350 m), and the Shewa pasture and the Little Pamir in eastern Badakhshan Province. The Little Pamir pasture, whose elevation is above , is used by the Afghan Kyrgyz to raise livestock. In Nuristan, the inhabitants live in permanent villages surrounded by arable fields on irrigated terraces. Most of the livestock are goats. They are taken up to a succession of summer pastures each spring by herdsmen while most of the villagers remain behind to irrigate the terraced fields and raise millet, maize, and wheat; work mostly done by the women. In the autumn after the grain and fruit harvest, livestock are brought back to spend the winter stall-fed in stables. India Jammu and Kashmir in India has the world's highest transhumant population as per a survey conducted by a team led by Dr Shahid Iqbal Choudhary, IAS, Secretary to the Government of Jammu and Kashmir, Tribal Affairs Department. The 1st Survey of Transhumance in 2021 captured details of 6,12,000 members of ethnic tribal communities viz Gujjars, Bakkerwals, Gaddis and Sippis. The survey was carried out for development and welfare planning for these communities notified as Scheduled tribes under the Constitution of India. Subsequent to the survey a number of flagship initiatives were launched by the Government for their welfare and development especially in sectors like healthcare, veterinary services, education, livelihood and transportation support for migration. Transhumance in Jammu and Kashmir is mostly vertical while some families in the plains of the Jammu, Samba and Kathua districts also practice lateral or horizontal transhumance. More than 85% of the migratory transhumant population moves within the Union Territory of Jammu and Kashmir while the remaining 15% undertakes inter-state movement to the neighbouring Punjab State and to the Ladakh Union territory. Gujjars – a migratory tribe – also sparsely inhabit several areas in parts of Punjab, Himachal Pradesh and Uttarakhand. 
The Gujjar-Bakkerwal tribe represents the highest transhumant population in the world and accounts for nearly 98% of the transhumant population in Jammu and Kashmir. The Bhotiya communities of Uttarakhand historically practiced transhumance. They would spend the winter months at low altitude settlements in the Himalayan foothills, gathering resources to trade in Tibet over the summer. In the summer, they would move up to high-altitude settlements along various river valleys. Some people would remain at these settlements to cultivate farms; some would head to trade marts, crossing high mountain passes into western Tibet, while some others would practice nomadic pastoralism. This historic way of life came to an abrupt halt due to the closure of the Sino-Indian border following the Sino-Indian War of 1962. In the decades following this war, transhumance as a way of life rapidly declined among the Bhotiya people. Iran The Bakhtiari tribe of Iran still practised this way of life in the mid-20th century. All along the Zagros Mountains from Azerbaijan to the Arabian Sea, pastoral tribes move back and forth with their herds annually according to the seasons, between their permanent homes in the valley and one in the foothills. The Qashqai (Kashkai) are a Turkic tribe of southern Iran, who in the mid-20th century still practised transhumance. The tribe was said to have settled in ancient times in the province of Fars, near the Persian Gulf, and by the mid-20th century lived beyond the Makran mountains. In their yearly migrations for fresh pastures, the Kashkai drove their livestock from south to north, where they lived in summer quarters, known as , in the high mountains from April to October. They traditionally grazed their flocks on the slopes of the Kuh-e-Dinar, a group of mountains from , part of the Zagros chain. In autumn the Kashkai broke camp, leaving the highlands to winter in warmer regions near Firuzabad, Kazerun, Jerrè, Farashband, on the banks of the Mond River. Their winter quarters were known as . The migration was organised and controlled by the Kashkai Chief. The tribes avoided villages and towns, such as Shiraz and Isfahan, because their large flocks, numbering seven million head, could cause serious damage. In the 1950s, the Kashkai tribes were estimated to number 400,000 people in total. There have been many social changes since that time. Lebanon Examples of fixed transhumance are found in the North Governorate of Lebanon. Towns and villages located in the Qadisha valley are at an average altitude of . Some settlements, like Ehden and Kfarsghab, are used during summer periods from the beginning of June until mid-October. Inhabitants move in October to coastal towns situated at an average of above sea level. The transhumance is motivated by agricultural activities (historically by the mulberry silkworm culture). The main crops in the coastal towns are olive, grape and citrus. For the mountain towns, the crops are summer fruits, mainly apples and pears. Other examples of transhumance exist in Lebanon. Kyrgyzstan In Kyrgyzstan, transhumance practices, which never ceased during the Soviet period, have undergone a resurgence in the difficult economic times following independence in 1991. Transhumance is integral to Kyrgyz national culture. The people use a wool felt tent, known as the yurt or , while living on these summer pastures. It is symbolised on their national flag. Those shepherds prize a fermented drink made from mare's milk, known as . 
A tool used in its production is the namesake for Bishkek, the country's capital city. South and East Asia Transhumance practices are found in temperate areas, above ≈ in the Himalaya–Hindu Kush area (referred to below as Himalaya); and the cold semi-arid zone north of the Himalaya, through the Tibetan Plateau and northern China to the Eurasian Steppe. Mongolia, China, Kazakhstan, Kyrgyzstan, Bhutan, India, Nepal and Pakistan all have vestigial transhumance cultures. The Bamar people of Myanmar were transhumance prior to their arrival to the region. In Mongolia, transhumance is used to avoid livestock losses during harsh winters, known as zuds. For regions of the Himalaya, transhumance still provides mainstay for several near-subsistence economiesfor example, that of Zanskar in northwest India, Van Gujjars and Bakarwals of Jammu and Kashmir in India, Kham Magar in western Nepal and Gaddis of Bharmaur region of Himachal Pradesh. In some cases, the distances travelled by the people with their livestock may be great enough to qualify as nomadic pastoralism. Oceania Australia In Australia, which has a large station (i.e., ranch) culture, stockmen provide the labour to move the herds to seasonal pastures. Transhumant grazing is an important aspect of the cultural heritage of the Australian Alps, an area of which has been included on the Australian National Heritage List. Colonists started using this region for summer grazing in the 1830s, when pasture lower down was poor. The practice continued during the 19th and 20th centuries, helping make pastoralism in Australia viable. Transhumant grazing created a distinctive way of life that is an important part of Australia's pioneering history and culture. There are features in the area that are reminders of transhumant grazing, including abandoned stockman's huts, stock yards and stock routes. Africa North Africa The Berber people of North Africa were traditionally farmers, living in mountains relatively close to the Mediterranean coast, or oasis dwellers. However, the Tuareg and Zenaga of the southern Sahara practice nomadic transhumance. Other groups, such as the Chaouis, practised fixed transhumance. Horn of Africa In rural areas, the Somali and Afar of Northeast Africa also traditionally practise nomadic transhumance. Their pastoralism is centred on camel husbandry, with additional sheep and goat herding. The classic, "fixed" transhumance is practiced in the Ethiopian Highlands. During the cropping season the lands around the villages are not accessible for grazing. For instance, farmers with livestock in Dogu'a Tembien organise annual transhumance, particularly towards remote and vast grazing grounds, deep in valleys (where the grass grows early due to temperature) or mountain tops. Livestock will stay there overnight (transhumance) with children and a few adults keeping them. For instance, the cattle of Addi Geza'iti are brought every rainy season to the gorge of River Tsaliet that holds dense vegetation. The cattle keepers establish enclosures for the cattle and places for them to sleep, often in rock shelters. The cattle stay there until harvesting time, when they are needed for threshing, and when the stubble becomes available for grazing. Many cattle of Haddinnet and also Ayninbirkekin in Dogu'a Tembien are brought to the foot of the escarpment at Ab'aro. Cattle stay on there on wide rangelands. Some cattle keepers move far down to open woodland and establish their camp in large caves in sandstone. 
East Africa The Pokot community are semi-nomadic pastoralists who are predominantly found in northwestern Kenya and Amudat district of Uganda. The community practices nomadic transhumance, with seasonal movement occurring between grasslands of Kenya (North Pokot sub-county) and Uganda (Amudat, Nakapiripirit and Moroto districts) (George Magak Oguna, 2014). The Maasai are semi-nomadic people located primarily in Kenya and northern Tanzania who have transhumance cultures that revolve around their cattle. Nigeria Fulani is the Hausa word for the pastoral peoples of Nigeria belonging to the Fulbe migratory ethnic group. The Fulani rear the majority of Nigeria's cattle, traditionally estimated at 83% pastoral, 17% village cattle and 0.3% peri-urban). Cattle fulfil multiple roles in agro-pastoralist communities, providing meat, milk and draught power while sales of stock generate income and provide insurance against disasters. They also play a key role in status and prestige and for cementing social relationships such as kinship and marriage. For pastoralists, cattle represent the major household asset. Pastoralism, as a livelihood, is coming under increased pressure across Africa, due to changing social, economic, political and environmental conditions. Prior to the 1950s, a symbiotic relationship existed between pastoralists, crop farmers and their environment with pastoralists practising transhumance. During the dry season, pastoralists migrated to the southern parts of the Guinea savannah zone, where there was ample pasture and a lower density of crop farmers. In the wet season, these areas faced high challenge from African animal trypanosomiasis transmitted by tsetse flies, so pastoralists would migrate to visit farmlands within the northern Sudan savannah zone, supplying dairy products to the local farming community. Reciprocally, the farming community supplied pastoralists with grain, and after the harvest, cattle were permitted to graze on crop residues in fields leaving behind valuable manure. Angola In Southern Angola, several peoples, chiefly the Ovambo and part of the Nyaneka-Khumbi, have cultures that are entirely organised according to the practice of transhumance. Lesotho The traditional economy of the Basotho in Lesotho is based on rearing cattle. They practise a seasonal migration between valley and high plateaus of the Maloti (basalt mountains of Lesotho). Pressure on pasture land has increased due to increases in population, as well as construction of large storage dams in these mountains to provide water to South Africa's arid industrial heartland. Growing pressure on pastures is contributing to degradation of sensitive grasslands and could contribute to sedimentation in man-made lakes. The traditional transhumance pattern has become modified. South Africa In South Africa the transhumance lifestyle of the Nama clan of the Khoikhoi continues in the Richtersveld, a montane-desert located close to the Atlantic coast in the northwestern area of the country. In this area, people move seasonally (three or four times per annum) with their herds of sheep and goats. Transhumance is based on small family units, which use the same camps each year. A portable, dome tent, called a (Afrikaans for "mat house") or (meaning "rush house" in Nama) is a feature of Khoikhoi culture. These dwellings are used in their seasonal camps in the Richtersveld. It consists of a frame traditionally covered with rush mats. In the 21st century, the people sometimes use a variety of manufactured materials. 
In recognition of its significance, the Richtersveld has been designated as a UNESCO World Heritage Site. North America In the southern Appalachians of the United States in the 19th and early 20th centuries, settlers often pastured livestock, especially sheep, on grassy bald mountain tops where wild oats predominate. Historians have speculated that these "balds" are remnants of ancient bison grazing lands (which were possibly maintained by early native peoples of North America). In the absence of transhumance, these balds have been becoming covered by forest since the late 20th century. It is unclear whether efforts will be made to preserve these historic managed ecosystems. Transhumance, in most cases relying on use of public land, continues to be an important ranching practice in the western United States. In the northern areas, this tradition was based on moving herds to higher ground with the greening of highland pastures in spring and summer. These uplands are part of large public lands, often under the jurisdiction of the United States Forest Service. In the winter, herds use lowland steppe or desert, also often government land under the jurisdiction of the Bureau of Land Management. In California and Texas, a greater proportion of the range is held as private land, due to differing historical development of these areas. The general pattern is that in summer, ranch families, hired shepherds, or hired cowboys travel to the mountains and stay in a line camp during the summer. They may also visit the upland ranch regularly, using trailers to transport horses for use in the high country. Traditionally in the American West, shepherds spent most of the year with a sheep herd, searching for the best forage in each season. This type of shepherding peaked in the late nineteenth century. Cattle and sheep herds are generally based on private land, although this may be a small part of the total range when all seasons are included. Some farmers who raised sheep recruited Basque shepherds to care for the herds, including managing migration between grazing lands. Workers from Peru, Chile (often Native Americans), and Mongolia have now taken shepherd roles; the Basque have bought their own ranches or moved to urban jobs. Shepherds take the sheep into the mountains in the summer (documented in the 2009 film Sweetgrass) and out on the desert in the winter, at times using crop stubble and pasture on private land when it is available. There are a number of different forms of transhumance in the United States: The Navajo began practicing transhumance in the 1850s, after they were forced out of their traditional homeland in the San Juan River valley. They maintain many sheep. In California, the home ranch tends to have more private land, largely because of the legacy of the Spanish land grant system. For this reason, extensive acreages of Mediterranean oak woodlands and grasslands are stewarded by ranches whose economy depends on summer range on government land under the jurisdiction of the U.S. Forest Service. South America South American transhumance partially relies on "cowboy" counterparts, the of Argentina, Uruguay, Paraguay and (with the spelling "") southern Brazil, the of Venezuela, and the of Chile. Transhumance is currently practised at least in Argentina, Chile, Peru and Bolivia, as well as in the Brazilian Pantanal. It mainly involves movement of cattle in the Pantanal and in parts of Argentina. In the Altiplano, communities of indigenous people depend on raising camelids, especially llamas. 
Herds of goats are managed by transhumance in North Neuquén and South Mendoza, while sheep are more used in the Patagonian plains. Criollos and indigenous peoples use transhumant practices in areas of South America. See also Altitudinal migration Kuchis Rarámuri Sarakatsani Seasonal human migration Yaylak Sources Jones, Schuyler. "Transhumance Reconsidered". Journal of the Royal Anthropological Institute, London, 2005. Costello, Eugene & Svensson, Eva (eds.). Historical Archaeologies Of Transhumance Across Europe Routledge, London, 2018. Jones, Schuyler. Men of Influence: Social Control & Dispute Settlement in Waigal Valley, Afghanistan. Seminar Press, London & New York, 1974. References External links U.S. Department of Agriculture Discussion on Asia U.S. Department of Agriculture Discussion on Africa Transhumance and 'The Waiting Zone' in North Africa Limited traditional transhumance in Australia Pastoralism Short mention of transhumance in North America Swiss land registry of alpine pastures (German) La transhumancia in Madrid Spain The transhumance from Schnals Valley (Italy) to Ötz Valley (Austria) Interview with Lionel Martorell, one of the last transhumant pastors in Eastern Spain
Gene structure
Gene structure is the organisation of specialised sequence elements within a gene. Genes contain most of the information necessary for living cells to survive and reproduce. In most organisms, genes are made of DNA, where the particular DNA sequence determines the function of the gene. A gene is transcribed (copied) from DNA into RNA, which can either be non-coding (ncRNA) with a direct function, or an intermediate messenger (mRNA) that is then translated into protein. Each of these steps is controlled by specific sequence elements, or regions, within the gene. Every gene, therefore, requires multiple sequence elements to be functional. This includes the sequence that actually encodes the functional protein or ncRNA, as well as multiple regulatory sequence regions. These regions may be as short as a few base pairs, up to many thousands of base pairs long. Much of gene structure is broadly similar between eukaryotes and prokaryotes. These common elements largely result from the shared ancestry of cellular life in organisms over 2 billion years ago. Key differences in gene structure between eukaryotes and prokaryotes reflect their divergent transcription and translation machinery. Understanding gene structure is the foundation of understanding gene annotation, expression, and function. Common features The structures of both eukaryotic and prokaryotic genes involve several nested sequence elements. Each element has a specific function in the multi-step process of gene expression. The sequences and lengths of these elements vary, but the same general functions are present in most genes. Although DNA is a double-stranded molecule, typically only one of the strands encodes information that the RNA polymerase reads to produce protein-coding mRNA or non-coding RNA. This 'sense' or 'coding' strand, runs in the 5' to 3' direction where the numbers refer to the carbon atoms of the backbone's ribose sugar. The open reading frame (ORF) of a gene is therefore usually represented as an arrow indicating the direction in which the sense strand is read. Regulatory sequences are located at the extremities of genes. These sequence regions can either be next to the transcribed region (the promoter) or separated by many kilobases (enhancers and silencers). The promoter is located at the 5' end of the gene and is composed of a core promoter sequence and a proximal promoter sequence. The core promoter marks the start site for transcription by binding RNA polymerase and other proteins necessary for copying DNA to RNA. The proximal promoter region binds transcription factors that modify the affinity of the core promoter for RNA polymerase. Genes may be regulated by multiple enhancer and silencer sequences that further modify the activity of promoters by binding activator or repressor proteins. Enhancers and silencers may be distantly located from the gene, many thousands of base pairs away. The binding of different transcription factors, therefore, regulates the rate of transcription initiation at different times and in different cells. Regulatory elements can overlap one another, with a section of DNA able to interact with many competing activators and repressors as well as RNA polymerase. For example, some repressor proteins can bind to the core promoter to prevent polymerase binding. For genes with multiple regulatory sequences, the rate of transcription is the product of all of the elements combined. 
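The closing statement above, that for a gene with several regulatory sequences the transcription rate behaves like the product of the contributions of all the bound elements, can be made concrete with a toy calculation. The element names and fold-effects below are hypothetical placeholders, not values from the text.

```python
# Illustrative only: a toy model of the idea above that, for a gene with
# several regulatory sequences, the transcription rate behaves like the
# product of the contributions of the bound factors. Element names and
# fold-effects are hypothetical placeholders.

BASAL_RATE = 1.0  # arbitrary units of transcription initiation

def transcription_rate(bound_elements):
    """Multiply the basal rate by the fold-effect of every bound element."""
    rate = BASAL_RATE
    for name, fold_effect in bound_elements:
        rate *= fold_effect  # >1 for activators/enhancers, <1 for repressors/silencers
    return rate

elements = [
    ("proximal_promoter_TF", 4.0),      # hypothetical activator: 4-fold increase
    ("distal_enhancer", 10.0),          # hypothetical enhancer: 10-fold increase
    ("silencer_bound_repressor", 0.2),  # hypothetical repressor: 5-fold decrease
]
print(transcription_rate(elements))  # 1.0 * 4 * 10 * 0.2 = 8.0 (arbitrary units)
```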
Binding of activators and repressors to multiple regulatory sequences has a cooperative effect on transcription initiation. Although all organisms use both transcriptional activators and repressors, eukaryotic genes are said to be 'default off', whereas prokaryotic genes are 'default on'. The core promoter of eukaryotic genes typically requires additional activation by promoter elements for expression to occur. The core promoter of prokaryotic genes, conversely, is sufficient for strong expression and is regulated by repressors. An additional layer of regulation occurs for protein-coding genes after the mRNA has been processed to prepare it for translation to protein. Only the region between the start and stop codons encodes the final protein product. The flanking untranslated regions (UTRs) contain further regulatory sequences. The 3' UTR contains a terminator sequence, which marks the endpoint for transcription and releases the RNA polymerase. The 5' UTR binds the ribosome, which translates the protein-coding region into a string of amino acids that fold to form the final protein product. In the case of genes for non-coding RNAs, the RNA is not translated but instead folds to be directly functional. Eukaryotes The structure of eukaryotic genes includes features not found in prokaryotes. Most of these relate to post-transcriptional modification of pre-mRNAs to produce mature mRNA ready for translation into protein. Eukaryotic genes typically have more regulatory elements to control gene expression compared to prokaryotes. This is particularly true in multicellular eukaryotes such as humans, where gene expression varies widely among different tissues. A key feature of the structure of eukaryotic genes is that their transcripts are typically subdivided into exon and intron regions. Exon regions are retained in the final mature mRNA molecule, while intron regions are spliced out (excised) during post-transcriptional processing. Indeed, the intron regions of a gene can be considerably longer than the exon regions. Once spliced together, the exons form a single continuous protein-coding region, and the splice boundaries are not detectable. Eukaryotic post-transcriptional processing also adds a 5' cap to the start of the mRNA and a poly-adenosine tail to the end of the mRNA. These additions stabilise the mRNA and direct its transport from the nucleus to the cytoplasm, although neither of these features is directly encoded in the structure of a gene. Prokaryotes The overall organisation of prokaryotic genes is markedly different from that of the eukaryotes. The most obvious difference is that prokaryotic ORFs are often grouped into a polycistronic operon under the control of a shared set of regulatory sequences. These ORFs are all transcribed onto the same mRNA and so are co-regulated and often serve related functions. Each ORF typically has its own ribosome binding site (RBS), so that ribosomes simultaneously translate ORFs on the same mRNA. Some operons also display translational coupling, where the translation rates of multiple ORFs within an operon are linked. This can occur when the ribosome remains attached at the end of an ORF and simply translocates along to the next without the need for a new RBS. Translational coupling is also observed when translation of an ORF affects the accessibility of the next RBS through changes in RNA secondary structure.
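A minimal sketch of the polycistronic arrangement just described, with each ORF carrying its own ribosome binding site and some ORFs translationally coupled to the one upstream. The gene names, coordinates and coupling flags are invented for illustration.

```python
# Illustrative only: a minimal representation of the polycistronic operon
# organisation described above, in which one mRNA carries several ORFs, each
# with its own ribosome binding site (RBS), and some ORFs are translationally
# coupled to the one before them. Gene names and coordinates are invented.

from dataclasses import dataclass

@dataclass
class Cistron:
    name: str
    rbs_start: int             # position of this ORF's ribosome binding site on the mRNA
    orf_start: int             # start codon position
    orf_end: int               # end of the stop codon
    coupled_to_previous: bool  # True if ribosomes re-initiate from the upstream ORF

operon_mrna = [
    Cistron("geneA", rbs_start=12, orf_start=25, orf_end=625, coupled_to_previous=False),
    Cistron("geneB", rbs_start=640, orf_start=652, orf_end=1450, coupled_to_previous=True),
    Cistron("geneC", rbs_start=1462, orf_start=1474, orf_end=2050, coupled_to_previous=False),
]

for c in operon_mrna:
    mode = "coupled to upstream ORF" if c.coupled_to_previous else "independent initiation at its RBS"
    print(f"{c.name}: ORF {c.orf_start}-{c.orf_end}, {mode}")
```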
Having multiple ORFs on a single mRNA is only possible in prokaryotes because their transcription and translation take place at the same time and in the same subcellular location. The operator sequence next to the promoter is the main regulatory element in prokaryotes. Repressor proteins bound to the operator sequence physically obstruct the RNA polymerase enzyme, preventing transcription. Riboswitches are another important regulatory sequence commonly present in prokaryotic UTRs. These sequences switch between alternative secondary structures in the RNA depending on the concentration of key metabolites. The secondary structures then either block or reveal important sequence regions such as RBSs. Introns are extremely rare in prokaryotes and therefore do not play a significant role in prokaryotic gene regulation. External links GSDS – Gene Structure Display Server
De-extinction
De-extinction (also known as resurrection biology, or species revivalism) is the process of generating an organism that either resembles or is an extinct species. There are several ways to carry out the process of de-extinction. Cloning is the most widely proposed method, although genome editing and selective breeding have also been considered. Similar techniques have been applied to certain endangered species, in the hope of boosting their genetic diversity. The only method of the three that would provide an animal with the same genetic identity is cloning. There are benefits and drawbacks to the process of de-extinction ranging from technological advancements to ethical issues. Methods Cloning Cloning is a commonly suggested method for the potential restoration of an extinct species. It can be done by extracting the nucleus from a preserved cell from the extinct species and inserting it into an enucleated egg cell from that species' nearest living relative. The egg can then be implanted into a surrogate host from the extinct species' nearest living relative. This method can only be used when a preserved cell is available, meaning it would be most feasible for recently extinct species. Cloning has been used by scientists since the 1950s. One of the best-known clones is Dolly the sheep. Dolly was born in the mid-1990s and lived normally until the abrupt midlife onset of health complications resembling premature aging that led to her death. Other known cloned animal species include domestic cats, dogs, pigs, and horses. Genome editing Genome editing has been rapidly advancing with the help of the CRISPR/Cas systems, particularly CRISPR/Cas9. The CRISPR/Cas9 system was originally discovered as part of the bacterial immune system. Viral DNA that was injected into the bacterium became incorporated into the bacterial chromosome at specific regions. These regions are called clustered regularly interspaced short palindromic repeats, otherwise known as CRISPR. Since the viral DNA is within the chromosome, it gets transcribed into RNA. Once this occurs, Cas9 binds to the RNA, recognizes the foreign insert, and cleaves it. This discovery was crucial because the Cas protein can be used as a pair of molecular scissors in the genome editing process. By using cells from a species closely related to the extinct species, genome editing can play a role in the de-extinction process. Germ cells may be edited directly, so that the egg and sperm produced by the extant parent species will produce offspring of the extinct species, or somatic cells may be edited and transferred via somatic cell nuclear transfer. The result is an animal which is not completely the extinct species, but rather a hybrid of the extinct species and the closely related, non-extinct species. Because it is possible to sequence and assemble the genome of extinct organisms from highly degraded tissues, this technique enables scientists to pursue de-extinction in a wider array of species, including those for which no well-preserved remains exist. However, the more degraded and old the tissue from the extinct species is, the more fragmented the resulting DNA will be, making genome assembly more challenging. Back-breeding Back breeding is a form of selective breeding. Whereas conventional selective breeding selects animals for a trait that advances the species, back breeding selects animals for an ancestral characteristic that is no longer seen frequently throughout the species.
This method can recreate the traits of an extinct species, but the genome will differ from the original species. Back breeding, however, is contingent on the ancestral trait of the species still being in the population in any frequency. Back breeding is also a form of artificial selection by the deliberate selective breeding of domestic animals, in an attempt to achieve an animal breed with a phenotype that resembles a wild type ancestor, usually one that has gone extinct. Iterative evolution A natural process of de-extinction is iterative evolution. This occurs when a species becomes extinct, but then after some time a different species evolves into an almost identical creature. For example, the Aldabra rail was a flightless bird that lived on the island of Aldabra. It had evolved some time in the past from the flighted white-throated rail, but became extinct about 136,000 years ago due to an unknown event that caused sea levels to rise. About 100,000 years ago, sea levels dropped and the island reappeared, with no fauna. The white-throated rail recolonized the island, but soon evolved into a flightless species physically identical to the extinct species. Herbarium specimens for de-extincting plants Not all extinct plants have herbarium specimens that contain seeds. Of those that do, there is ongoing discussion on how to coax barely alive embryos back to life. Advantages of de-extinction The technologies being developed for de-extinction could lead to large advances in various fields: An advance in genetic technologies that are used to improve the cloning process for de-extinction could be used to prevent endangered species from becoming extinct. By studying revived previously extinct animals, cures to diseases could be discovered. Revived species may support conservation initiatives by acting as "flagship species" to generate public enthusiasm and funds for conserving entire ecosystems. Prioritising de-extinction could lead to the improvement of current conservation strategies. Conservation measures would initially be necessary in order to reintroduce a species into the ecosystem, until the revived population can sustain itself in the wild. Reintroduction of an extinct species could also help improve ecosystems that had been destroyed by human development. It may also be argued that reviving species driven to extinction by humans is an ethical obligation. Disadvantages of de-extinction The reintroduction of extinct species could have a negative impact on extant species and their ecosystem. The extinct species' ecological niche may have been filled in its former habitat, making it an invasive species. This could lead to the extinction of other species due to competition for food or other competitive exclusion. It could lead to the extinction of prey species if they have more predators in an environment that had few predators before the reintroduction of an extinct species. If a species has been extinct for a long period of time the environment they are introduced to could be wildly different from the one that they can survive in. The changes in the environment due to human development could mean that the species may not survive if reintroduced into that ecosystem. A species could also become extinct again after de-extinction if the reasons for its extinction are still a threat. The woolly mammoth might be hunted by poachers just like elephants for their ivory and could go extinct again if this were to happen. 
Or, if a species is reintroduced into an environment with diseases to which it has no immunity, the reintroduced species could be wiped out by a disease that current species can survive. De-extinction is a very expensive process. Bringing back one species can cost millions of dollars. The money for de-extinction would most likely come from current conservation efforts. These efforts could be weakened if funding is taken from conservation and put into de-extinction. This would mean that critically endangered species would start to go extinct faster because the resources needed to maintain their populations are no longer available. Also, since cloning techniques cannot perfectly replicate a species as it existed in the wild, the reintroduction of the species may not bring about positive environmental effects. They may not have the same role in the food chain that they did before and therefore cannot restore damaged ecosystems. Current candidate species for de-extinction Woolly mammoth The existence of preserved soft tissue remains and DNA from woolly mammoths (Mammuthus primigenius) has led to the idea that the species could be recreated by scientific means. Two methods have been proposed to achieve this: The first would be to use the cloning process; however, even the most intact mammoth samples have had little usable DNA because of their conditions of preservation. There is not enough DNA intact to guide the production of an embryo. The second method would involve artificially inseminating an elephant egg cell with preserved sperm of the mammoth. The resulting offspring would be a hybrid of the mammoth and its closest living relative, the Asian elephant. After several generations of cross-breeding these hybrids, an almost pure woolly mammoth could be produced. However, sperm cells of modern mammals typically remain viable for no more than about 15 years after deep-freezing, which could hinder this method. Whether the hybrid embryo would be carried through the two-year gestation is unknown; in one case, an Asian elephant and an African elephant produced a live calf named Motty, but it died of defects at less than two weeks old. In 2008, a Japanese team found usable DNA in the brains of mice that had been frozen for 16 years. They hope to use similar methods to find usable mammoth DNA. In 2011, Japanese scientists announced plans to clone mammoths within six years. In March 2014, the Russian Association of Medical Anthropologists reported that blood recovered from a frozen mammoth carcass in 2013 would now provide a good opportunity for cloning the woolly mammoth. Another way to create a living woolly mammoth would be to introduce genes from the mammoth genome into the genome of its closest living relative, the Asian elephant, to create hybridized animals with the notable adaptations the mammoth had for living in a much colder environment than modern-day elephants. This is currently being done by a team led by Harvard geneticist George Church. The team has introduced into the elephant genome the genes that gave the woolly mammoth its cold-resistant blood, longer hair, and extra layer of fat. According to geneticist Hendrik Poinar, a revived woolly mammoth or mammoth-elephant hybrid may find suitable habitat in the tundra and taiga forest ecozones. George Church has hypothesized that bringing back the extinct woolly mammoth would have positive effects on the environment, such as the potential to reverse some of the damage caused by global warming.
He and his fellow researchers predict that mammoths would eat the dead grass allowing the sun to reach the spring grass; their weight would allow them to break through dense, insulating snow in order to let cold air reach the soil; and their characteristic of felling trees would increase the absorption of sunlight. In an editorial condemning de-extinction, Scientific American pointed out that the technologies involved could have secondary applications, specifically to help species on the verge of extinction regain their genetic diversity. Pyrenean ibex The Pyrenean ibex (Capra pyrenaica pyrenaica) was a subspecies of Iberian ibex that lived on the Iberian Peninsula. While it was abundant through medieval times, over-hunting in the 19th and 20th centuries led to its demise. In 1999, only a single female named Celia was left alive in Ordesa National Park. Scientists captured her, took a tissue sample from her ear, collared her, then released her back into the wild, where she lived until she was found dead in 2000, having been crushed by a fallen tree. In 2003, scientists used the tissue sample to attempt to clone Celia and resurrect the extinct subspecies. Despite having successfully transferred nuclei from her cells into domestic goat egg cells and impregnating 208 female goats, only one came to term. The baby ibex that was born had a lung defect, and lived for only seven minutes before suffocating from being incapable of breathing oxygen. Nevertheless, her birth was seen as a triumph and is considered the first de-extinction. In late 2013, scientists announced that they would again attempt to resurrect the Pyrenean ibex. A problem to be faced, in addition to the many challenges of reproduction of a mammal by cloning, is that only females can be produced by cloning the female individual Celia, and no males exist for those females to reproduce with. This could potentially be addressed by breeding female clones with the closely related Southeastern Spanish ibex, and gradually creating a hybrid animal that will eventually bear more resemblance to the Pyrenean ibex than the Southeastern Spanish ibex. Aurochs The aurochs (Bos primigenius) was widespread across Eurasia, North Africa, and the Indian subcontinent during the Pleistocene, but only the European aurochs (B. p. primigenius) survived into historical times. This species is heavily featured in European cave paintings, such as Lascaux and Chauvet cave in France, and was still widespread during the Roman era. Following the fall of the Roman Empire, overhunting of the aurochs by nobility caused its population to dwindle to a single population in the Jaktorów forest in Poland, where the last wild one died in 1627. However, because the aurochs is ancestral to most modern cattle breeds, it is possible for it to be brought back through selective or back breeding. The first attempt at this was by Heinz and Lutz Heck using modern cattle breeds, which resulted in the creation of Heck cattle. This breed has been introduced to nature preserves across Europe; however, it differs strongly from the aurochs in physical characteristics, and some modern attempts claim to try to create an animal that is nearly identical to the aurochs in morphology, behavior, and even genetics. 
There are several projects that aim to create a cattle breed similar to the aurochs through selectively breeding primitive cattle breeds over a course of twenty years to create a self-sufficient bovine grazer in herds of at least 150 animals in rewilded nature areas across Europe, for example the Tauros Programme and the separate Taurus Project. This organization is partnered with the organization Rewilding Europe to help revert some European natural ecosystems to their prehistoric form. A competing project to recreate the aurochs is the Uruz Project by the True Nature Foundation, which aims to recreate the aurochs by a more efficient breeding strategy using genome editing, in order to decrease the number of generations of breeding needed and the ability to quickly eliminate undesired traits from the population of aurochs-like cattle. It is hoped that aurochs-like cattle will reinvigorate European nature by restoring its ecological role as a keystone species, and bring back biodiversity that disappeared following the decline of European megafauna, as well as helping to bring new economic opportunities related to European wildlife viewing. Quagga The quagga (Equus quagga quagga) is a subspecies of the plains zebra that was distinct in that it was striped on its face and upper torso, but its rear abdomen was a solid brown. It was native to South Africa, but was wiped out in the wild due to overhunting for sport, and the last individual died in 1883 in the Amsterdam Zoo. However, since it is technically the same species as the surviving plains zebra, it has been argued that the quagga could be revived through artificial selection. The Quagga Project aims to breed a similar form of zebra by selective breeding of plains zebras. This process is also known as back breeding. It also aims to release these animals onto the western Cape once an animal that fully resembles the quagga is achieved, which could have the benefit of eradicating introduced species of trees such as the Brazilian pepper tree, Tipuana tipu, Acacia saligna, bugweed, camphor tree, stone pine, cluster pine, weeping willow and Acacia mearnsii. Thylacine The thylacine (Thylacinus cynocephalus), commonly known as the Tasmanian tiger, was native to the Australian mainland, Tasmania and New Guinea. It is believed to have become extinct in the 20th century. The thylacine had become extremely rare or extinct on the Australian mainland before British settlement of the continent. The last known thylacine died at the Hobart Zoo, on September 7, 1936. He is believed to have died as the result of neglect—locked out of his sheltered sleeping quarters, he was exposed to a rare occurrence of extreme Tasmanian weather: extreme heat during the day and freezing temperatures at night. Official protection of the species by the Tasmanian government was introduced on July 10, 1936, roughly 59 days before the last known specimen died in captivity. In December 2017, it was announced in Nature Ecology and Evolution that the full nuclear genome of the thylacine had been successfully sequenced, marking the completion of the critical first step toward de-extinction that began in 2008, with the extraction of the DNA samples from the preserved pouch specimen. The thylacine genome was reconstructed by using the genome editing method. The Tasmanian devil was used as a reference for the assembly of the full nuclear genome. Andrew J. 
Pask from the University of Melbourne has stated that the next step toward de-extinction will be to create a functional genome, which will require extensive research and development, estimating that a full attempt to resurrect the species may be possible as early as 2027. In August 2022, the University of Melbourne and Colossal Biosciences announced a partnership to accelerate de-extinction of the thylacine via genetic modification of one of its closest living relatives, the fat-tailed dunnart. In 2024, a 99.9% complete genome of the thylacine was created from a well-preserved skull that is estimated to be 110 years old. Passenger pigeon The passenger pigeon (Ectopistes migratorius) numbered in the billions before being wiped out due to unsustainable commercial hunting and habitat loss during the early 20th century. The non-profit Revive & Restore obtained DNA from the passenger pigeon from museum specimens and skins; however, this DNA is degraded because it is so old. For this reason, simple cloning would not be an effective way to perform de-extinction for this species because parts of the genome would be missing. Instead, Revive & Restore focuses on identifying mutations in the DNA that would cause a phenotypic difference between the extinct passenger pigeon and its closest living relative, the band-tailed pigeon. In doing this, they can determine how to modify the DNA of the band-tailed pigeon so that its traits mimic those of the passenger pigeon. In this sense, the de-extinct passenger pigeon would not be genetically identical to the extinct passenger pigeon, but it would have the same traits. In 2015, the de-extinct passenger pigeon hybrid was forecast to be ready for captive breeding by 2025 and for release into the wild by 2030. Maclear's rat The Maclear's rat (Rattus macleari), also known as the Christmas Island rat, was a large rat endemic to Christmas Island in the Indian Ocean. It is believed Maclear's rat might have been responsible for keeping the population of Christmas Island red crabs in check. It is thought that the accidental introduction of black rats by the Challenger expedition infected the Maclear's rats with a disease (possibly a trypanosome), which resulted in the species' decline. The last recorded sighting was in 1903. In March 2022, researchers discovered the Maclear's rat shared about 95% of its genes with the living brown rat, thus sparking hopes of bringing the species back to life. Although scientists were mostly successful in using CRISPR technology to edit the DNA of the living species to match that of the extinct one, a few key genes were missing, which would mean resurrected rats would not be genetically pure replicas. Dodo The dodo (Raphus cucullatus) was a flightless bird endemic to the island of Mauritius in the Indian Ocean. Due to various factors such as its lack of fear, caused by isolation from significant predators, predation from humans and introduced invasive species such as pigs, dogs, cats, rats, and crab-eating macaques, competition for food with invasive species, habitat loss, and the bird's naturally slow reproduction, the species' numbers declined rapidly. The last widely accepted recorded sighting was in 1662. Since then, the bird has become a symbol for extinction and is often cited as the primary example of man-made extinction.
In January 2023, Colossal Biosciences announced their project to revive the dodo alongside their previously announced projects for reviving the woolly mammoth and thylacine in hopes of restoring biodiversity to Mauritius and changing the dodo's status as a symbol of extinction to de-extinction. Northern white rhinoceros The northern white rhinoceros or northern white rhino (Ceratotherium simum cottoni) is a subspecies of the white rhinoceros endemic to East and Central Africa south of the Sahara. Due to widespread and uncontrollable poaching and civil warfare in their former range, the subspecies' numbers dropped quickly over the course of the late 1900s and early 2000s. Unlike the majority of the potential candidates for de-extinction, the northern white rhinoceros is not extinct, but functionally extinct and is believed to be extinct in the wild with only two known female members left, Najin and Fatu, who reside on the Ol Pejeta Conservancy in Kenya. The BioRescue team, in collaboration with Colossal Biosciences, plans to implant 30 northern white rhinoceros embryos, made from egg cells collected from Najin and Fatu and preserved sperm from dead males, into female southern white rhinoceroses by the end of 2024. Ivory-billed woodpecker The ivory-billed woodpecker (Campephilus principalis) is the largest woodpecker endemic to the United States with a subspecies in Cuba. The species' numbers have declined since the late 1800s due to logging and hunting. Similar to the northern white rhinoceros, the ivory-billed woodpecker is not completely extinct, but functionally extinct with occasional sightings that suggest that 50 or fewer individuals are left. In October 2024, Colossal Biosciences announced their non-profit Colossal Foundation, a foundation dedicated to conservation of extant species with their first projects being the Sumatran rhinoceros, vaquita, red wolf, pink pigeon, northern quoll, and ivory-billed woodpecker. Colossal plans to revive or rediscover the species through genome editing of its closest living relatives and using drones and AI to identify any potential remaining individuals in the wild. Heath hen The heath hen (Tympanuchus cupido cupido) was a subspecies of greater prairie chicken endemic to the heathland barrens of coastal North America. It is even speculated that the pilgrims' first Thanksgiving featured this bird as the main course instead of wild turkey. Due to overhunting caused by its perceived abundance, the population became extinct in mainland North America by 1870, leaving a population of 300 individuals on Martha's Vineyard. Despite conservation efforts, the subspecies became extinct in 1932 following the disappearance and presumed death of Booming Ben, the final known member of the subspecies. In the summer of 2014, the non-profit organisation Revive & Restore held a meeting with the community of Martha's Vineyard to announce their project to revive the heath hen in hopes of restoring and maintaining the sandplain grasslands. As of September 2024, the latest update of the project was on April 8, 2020, in which germ cells were collected from greater prairie chicken eggs at Texas A&M.
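The comparative strategy described above for the passenger pigeon and Maclear's rat, in which the extinct species' sequence is compared against that of its closest living relative and the differing positions are treated as candidate edit sites, can be illustrated with a deliberately simplified Python sketch. The two sequence fragments and the function name below are invented for illustration; the sketch assumes a short, pre-aligned, gap-free fragment, whereas real projects work with whole, highly fragmented genomes and dedicated alignment software.

```python
# Toy illustration of the comparative approach to genome editing for de-extinction:
# given pre-aligned fragments from an extinct species and its closest living relative,
# list the positions where they differ as candidate edit sites.
# Both fragments are invented; real genomes are billions of bases long and full of gaps.

extinct_fragment  = "ATGGCGTTACCTGATCGGA"   # stands in for the extinct species
relative_fragment = "ATGGCGTTGCCTGATCGAA"   # stands in for the living relative

def candidate_edit_sites(extinct, relative):
    """Return (position, relative_base, extinct_base) for every mismatch."""
    assert len(extinct) == len(relative), "sketch assumes a gap-free alignment"
    return [(i, r, e) for i, (e, r) in enumerate(zip(extinct, relative)) if e != r]

sites = candidate_edit_sites(extinct_fragment, relative_fragment)
identity = 1 - len(sites) / len(extinct_fragment)

print(f"sequence identity: {identity:.1%}")
for pos, rel_base, ext_base in sites:
    print(f"position {pos}: change {rel_base} -> {ext_base} in the living relative's genome")
```

On these toy fragments the sketch reports about 89% identity (two mismatches out of 19 positions); figures such as the roughly 95% gene sharing reported for Maclear's rat and the brown rat come from genome-scale versions of the same kind of comparison.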
Future potential candidates for de-extinction A "De-extinction Task Force" was established in April 2014 under the auspices of the Species Survival Commission (SSC) and charged with drafting a set of Guiding Principles on Creating Proxies of Extinct Species for Conservation Benefit to position the IUCN SSC on the rapidly emerging technological feasibility of creating a proxy of an extinct species. Birds Little bush moa – A slender species of moa slightly larger than a turkey that went extinct abruptly, around 500–600 years ago following the arrival and proliferation of the Māori people in New Zealand, as well as the introduction of Polynesian dogs. Scientists at Harvard University assembled the first nearly complete genome of the species from toe bones, thus bringing the species a step closer to being "resurrected". New Zealand politician Trevor Mallard had previously suggested bringing back a medium-sized species of moa. Giant moa – The tallest birds to have ever lived, but not as heavy as the elephant bird. Both the northern and southern species became extinct by 1500 due to overhunting by the Polynesian settlers and Māori in New Zealand. Elephant bird – The heaviest birds to have ever lived, the elephant birds were driven to extinction by the early colonization of Madagascar. Ancient DNA has been obtained from the eggshells but may be too degraded for use in de-extinction. Carolina parakeet – One of the only parrots indigenous to North America, it was driven to extinction by destruction of its habitat, overhunting, competition from introduced honeybees, and persecution for crop damage. Hundreds of specimens with viable DNA still exist in museums around the world, making the species a prime candidate for revival. In 2019, a full genome of the Carolina parakeet was sequenced. Great auk – A flightless bird native to the North Atlantic, similar to the penguin. The great auk went extinct in the 1800s due to overhunting by humans for food. The last two known great auks lived on an island near Iceland and were clubbed to death by sailors. There have been no known sightings since. The great auk has been identified as a good candidate for de-extinction by Revive and Restore, a non-profit organization. Because the great auk is extinct it cannot be cloned, but its DNA can be used to alter the genome of its closest relative, the razorbill, and breed the hybrids to create a species that will be very similar to the original great auks. The plan is to introduce them back into their original habitat, which they would then share with razorbills and puffins, which are also at risk of extinction. This would help restore biodiversity to that part of the ecosystem. Imperial woodpecker – A large, possibly extinct woodpecker endemic to Mexico that has not been seen in over 50 years due to habitat destruction and hunting. Cuban macaw – A colourful macaw that was native to Cuba and Isla de la Juventud. It became extinct in the late 19th century due to overhunting, the pet trade, and habitat loss. Labrador duck – A duck that was native to North America. It became extinct in the late 19th century due to colonisation of its former range combined with an already naturally low population. It is also the first known endemic North American bird species to become extinct following the Columbian Exchange. Huia – A species of Callaeidae that was native to New Zealand. It became extinct in 1907 due to overhunting by both Māori and European settlers, habitat loss, and predation from introduced invasive species.
In 1999, students of Hastings Boys' High School proposed the idea of de-extinction of the huia, the school's emblem, through cloning. The Ngāti Huia tribe approved of the idea and the de-extinction process would have been performed by the University of Otago with $100,000 funding from a California-based internet startup. However, due to the poor state of DNA in the specimens at the Museum of New Zealand Te Papa Tongarewa, a complete huia genome could not be created, making this method of de-extinction unlikely to succeed. Moho – An entire genus of Hawaiian birds that were native to various islands in Hawaii. The genus became extinct in 1987 following the extinction of its final living member, the Kauaʻi ʻōʻō. The reasons for the genus' decline were overhunting for their plumage, habitat loss caused by both colonisation of Hawaii and natural disasters, mosquito-borne diseases, and predation from introduced invasive species. Mammals Caribbean monk seal – A species of monk seal that was native to the Caribbean. It became extinct in 1950 due to poaching and starvation caused by overfishing of its natural prey. Irish elk – The largest deer to have ever lived, formerly inhabiting Eurasia from present-day Ireland to present-day Siberia during the Pleistocene. It became extinct 5–10 thousand years ago due to suspected overhunting. Cave lion – The discovery of two preserved cubs in the Sakha Republic ignited a project to clone the animal. Steppe bison – The discovery of a 9,000-year-old mummified steppe bison could help scientists clone the ancient bison species back, even though the steppe bison would not be the first to be "resurrected". Russian and South Korean scientists are collaborating to clone steppe bison in the future, using DNA preserved from an 8,000-year-old tail and wood bison as surrogates; wood bison have themselves been introduced to Yakutia to fulfill a similar niche. Tarpan – A population of free-ranging horses in Europe that went extinct in 1909. Much like the aurochs, there have been many attempts to breed tarpan-like horses from domestic horses, the first being by the Heck brothers, creating the Heck horse as a result. Though it is not a genetic copy, it is claimed to bear many similarities to the tarpan. Other attempts were made to create tarpan-like horses. A breeder named Harry Hegardt was able to breed a line of horses from American Mustangs. Other breeds of supposedly tarpan-like horse include the Konik and Strobel's horse. Baiji – A freshwater dolphin native to the Yangtze River in China. Unlike most potential candidates for de-extinction, the baiji is not completely extinct, but instead functionally extinct, with a low population in the wild due to entanglement in nets, collisions with boats, and pollution of the Yangtze River; occasional sightings continue, the most recent in 2024. There are plans to help save the species if a living specimen is found. Vaquita – The smallest cetacean to have ever lived, endemic to the upper Gulf of California in Mexico. Similar to the baiji, the vaquita is not completely extinct, but functionally extinct, with an estimated 8 or fewer individuals left due to entanglement in gillnets meant to poach totoabas, a fish whose swim bladder is highly valued on black markets for its perceived medicinal value. In October 2024, Colossal Biosciences launched their Colossal Foundation, a non-profit foundation dedicated to the conservation of extant species, with the vaquita among its first projects.
In addition to using technology to monitor the final remaining individuals, they aim to collect tissue samples from vaquitas in order to revive the species if it does become extinct in the near future. Steller's sea cow – A sirenian that was endemic to the Bering Sea, first described by Georg Wilhelm Steller in 1741. It was hunted to extinction in 1768. Woolly rhinoceros – A species of rhinoceros that was endemic to Northern Eurasia during the Pleistocene. It is believed to have become extinct as a result of both climate change and overhunting by early humans. In November 2023, scientists managed to create a woolly rhinoceros genome from the faeces of cave hyenas; frozen specimens of the species also exist. However, the woolly rhinoceros' closest living relative is the critically endangered Sumatran rhinoceros, with an estimated 80 individuals left in the wild, which presents ethical dilemmas similar to those surrounding the woolly mammoth. Cave bear – A species of bear that was endemic to Eurasia during the Pleistocene. It is estimated to have become extinct 24 thousand years ago due to climate change and suspected competition with early humans. Reptiles Floreana giant tortoise – A subspecies of the Galápagos tortoise that became extinct in 1950. In 2008, mitochondrial DNA from the Floreana tortoise species was found in museum specimens. In theory, a breeding program could be established to "resurrect" a pure Floreana species from living hybrids. Amphibians Gastric-brooding frog – An entire genus of ground frogs that were native to Queensland, Australia. They became extinct in the mid-1980s primarily due to chytridiomycosis. In 2013, scientists in Australia successfully created a living embryo from non-living preserved genetic material, and hope that by using somatic-cell nuclear transfer methods, they can produce an embryo that can survive to the tadpole stage. Insects Xerces blue – A species of butterfly that was native to the Sunset District of San Francisco in the American state of California. It is estimated that the species became extinct in the early 1940s due to urbanization of its former habitat. Similar species to the Xerces blue, such as Glaucopsyche lygdamus and the Palos Verdes blue, have been released into the Xerces blue's former range to fill its ecological role. On April 15, 2024, non-profit organisation Revive & Restore announced the early stages of their plans to potentially revive the species. Plants Paschalococos – A genus of coccoid palm trees that was native to Easter Island, Chile. It is believed to have become extinct around 1650, based on its disappearance from the pollen record. Hyophorbe amaricaulis – A species of palm tree in the family Arecaceae that is native to the island of Mauritius. Unlike the majority of potential candidates, this palm is not completely extinct, but functionally extinct and is believed to be extinct in the wild with only one known specimen left in the Curepipe Botanic Gardens. In 2010, there was an attempt to revive the species through germination in vitro, in which isolated embryos were extracted from seeds and grown in tissue culture, but the resulting seedlings lived for only three months. Successful de-extinctions Judean date palm The Judean date palm is a species of date palm native to Judea that is estimated to have originally become extinct around the 15th century due to climate change and human activity in the region.
In 2005, preserved seeds found in the 1960s excavations of Herod the Great's palace were given to Sarah Sallon by Bar-Ilan University after she came up with the initiative to germinate some ancient seeds. Sallon later challenged her friend, Elaine Solowey of the Center for Sustainable Agriculture at the Arava Institute for Environmental Studies, with the task of germinating the seeds. Solowey managed to revive several of the provided seeds after hydrating them in a common household baby bottle warmer along with ordinary fertiliser and growth hormones. The first plant grown was named after Lamech's father, Methuselah, the longest-lived man in the Bible. In 2012, there were plans to crossbreed the male palm with what was considered its closest living relative, the Hayani date of Egypt, to generate fruit by 2022. However, two female Judean date palms have been sprouted since then. By 2015, Methuselah had produced pollen that was used successfully to pollinate female date palms. In June 2021, one of the female plants, Hannah, produced dates. The harvested fruits are currently being studied to determine their properties and nutritional values. The de-extinct Judean date palms are currently at a kibbutz located in Ketura, Israel. Rastreador Brasileiro The Rastreador Brasileiro (Brazilian Tracker) is a large scent hound from Brazil that was bred in the 1950s to hunt jaguars and wild pigs. It was originally declared extinct and delisted by the Fédération Cynologique Internationale and Confederação Brasileira de Cinofilia in 1973 due to tick-borne diseases and subsequent poisoning from insecticides in an attempt to get rid of the ectoparasites. In the early 2000s, a group named Grupo de Apoio ao Resgate do Rastreador Brasileiro (Brazilian Tracker Rescue Support Group) dedicated to reviving the breed and having it relisted by Confederação Brasileira de Cinofilia began work to locate dogs in Brazil that carried the genetics of the extinct breed in order to breed a purebred Rastreador Brasileiro. In 2013, the breed was declared de-extinct, having been recreated through preservation breeding from descendants of the final original members, and was relisted by the FCI. Unknown Commiphora In 2010, Sarah Sallon of the Arava Institute for Environmental Studies grew a seed found in excavations of a cave in the northern Judean desert in 1986. The specimen, Sheba, reached maturity in 2024 and is believed to be an entirely new species of Commiphora; many believe it may be the tsori, or Judean balsam, a plant said to have healing properties in the Bible. See also Breeding back Preservation breeding Cryoconservation of animal genetic resources Endangered species Functional extinction Endling Holocene extinction List of introduced species Pleistocene Park Pleistocene rewilding Colossal Biosciences Arava Institute for Environmental Studies List of resurrected species References Further reading Pilcher, Helen (2016). Bring Back the King: The New Science of De-extinction. Bloomsbury Press. External links TEDx DeExtinction March 15, 2013 conference sponsored by the Revive and Restore project of the Long Now Foundation, supported by TEDx and hosted by the National Geographic Society, that helped popularize the public understanding of the science of de-extinction. Video proceedings, meeting report, and links to press coverage freely available. De-Extinction: Bringing Extinct Species Back to Life April 2013 article by Carl Zimmer for National Geographic magazine reporting on the 2013 conference.
Cloning Conservation biology Evolution of the biosphere Extinction Science fiction themes
Oasis
In ecology, an oasis (plural: oases) is a fertile area of a desert or semi-desert environment that sustains plant life and provides habitat for animals. Surface water may be present, or water may only be accessible from wells or underground channels created by humans. In geography, an oasis may be a current or past rest stop on a transportation route, or a less-than-verdant location that nonetheless provides access to underground water through deep wells created and maintained by humans. Although they depend on a natural condition, such as the presence of water that may be stored in reservoirs and used for irrigation, most oases, as we know them, are artificial. The word oasis came into English from Latin, from Greek, which in turn is a direct borrowing from Demotic Egyptian. The word for oasis in the latter-attested Coptic language (the descendant of Demotic Egyptian) is wahe or ouahe, which means a "dwelling place". Oasis in Arabic is wāḥa. Description Oases develop in "hydrologically favored" locations that have attributes such as a high water table, seasonal lakes, or blockaded wadis. Oases are made when sources of freshwater, such as underground rivers or aquifers, irrigate the surface naturally or via man-made wells. The presence of water on the surface or underground is necessary and the local or regional management of this essential resource is strategic, but not sufficient to create such areas: continuous human work and know-how (a technical and social culture) are essential to maintain such ecosystems. Some of the possible human contributions to maintaining an oasis include digging and maintaining wells, digging and maintaining canals, and continuously removing opportunistic plants that threaten to gorge themselves on water and fertility needed to maintain human and animal food supplies. Stereotypically, an oasis has a "central pool of open water surrounded by a ring of water-dependent shrubs and trees…which are in turn encircled by an outlying transition zone to desert plants." Rain showers provide subterranean water to sustain natural oases, such as the Tuat. Substrata of impermeable rock and stone can trap water and retain it in pockets, or on long faulting subsurface ridges or volcanic dikes water can collect and percolate to the surface. Any incidence of water is then used by migrating birds, which also pass seeds in their droppings, which will grow at the water's edge, forming an oasis. It can also be used to plant crops. Geography Oases in the Middle East and North Africa cover a comparatively small area, yet they support the livelihood of about 10 million inhabitants. The stark ratio of oasis to desert land in the world means that the oasis ecosystem is "relatively minute, rare and precious." There are 90 "major oases" within the Sahara Desert. Some of their fertility may derive from irrigation systems called foggaras, khettaras, lkhttarts, or a variety of other regional names. In some oasis systems, there is "a geometrical system of raised channels that release controlled amounts of the water into individual plots, soaking the soil." History Oases often have human histories that are measured in millennia. Archeological digs at Ein Gedi in the Dead Sea Valley have found evidence of settlement dating to 6,000 BC. Al-Ahsa on the Arabian Peninsula shows evidence of human residence dating to the Neolithic.
Anthropologically, the oasis is "an area of sedentary life, which associates the city [medina] or village [ksar] with its surrounding feeding source, the palm grove, within a relational and circulatory nomadic system." The location of oases has been of critical importance for trade and transportation routes in desert areas; caravans must travel via oases so that supplies of water and food can be replenished. Thus, political or military control of an oasis has in many cases meant control of trade on a particular route. For example, the oases of Awjila, Ghadames and Kufra, situated in modern-day Libya, have at various times been vital to both north–south and east–west trade in the Sahara Desert. The location of oases also informed the Darb El Arba'īn trade route from Sudan to Egypt, as well as the caravan route from the Niger River to Tangier, Morocco. The Silk Road "traced its course from water hole to water hole, relying on oasis communities such as Turpan in China and Samarkand in Uzbekistan." According to the United Nations, "Oases are at the very heart of the overall development of peri-Saharan countries due to their geographical location and the fact they are preferred migration routes in times of famine or insecurity in the region." Oases in Oman, on the Arabian Peninsula near the Persian Gulf, vary somewhat from the Saharan form. While still located in an arid or semi-arid zone with a date palm overstory, these oases are usually located below plateaus and "watered either by springs or by aflaj, tunnel systems dug into the ground or carved into the rock to tap underground aquifers." This rainwater harvesting system "never developed a serious salinity problem." Palm oases In the drylands of southwestern North America, there is a habitat form called Palm Oasis (alternately Palm Series or Oasis Scrub Woodland) that has the native California fan palm as the overstory species. These Palm Oases can be found in California, Arizona, Baja California, and Sonora. Agroforestry People who live in an oasis must manage land and water use carefully. The most important plant in an oasis is the date palm (Phoenix dactylifera L.), which forms the upper layer. These palm trees provide shade for smaller understory trees like apricots, dates, figs, olives, and peach trees, which form the middle layer. Market-garden vegetables, some cereals (such as sorghum, barley, millet, and wheat), and/or mixed animal fodder, are grown in the bottom layer where there is more moisture. The oasis is integrated into its desert environment through an often close association with nomadic transhumant livestock farming (very often pastoral and sedentary populations are clearly distinguished). The fertility of the oasis soil is restored by "cyclic organic inputs of animal origin." In summary, an oasis palm grove is a highly anthropized and irrigated area that supports a traditionally intensive and polyculture-based agriculture. Responding to environmental constraints, the three strata create what is called the "oasis effect". The three layers and all their interaction points create a variety of combinations of "horizontal wind speed, relative air temperature and relative air humidity." The plantings—through a virtuous cycle of wind reduction, increased shade and evapotranspiration—create a microclimate favorable to crops; "measurements taken in different oases have showed that the potential evapotranspiration of the areas was reduced by 30 to 50 percent within the oasis." 
The keystone date palm trees are "a main income source and staple food for local populations in many countries in which they are cultivated, and have played significant roles in the economy, society, and environment of those countries." Challenges for date palm oasis polycultures include "low rainfall, high temperatures, water resources often high in salt content, and high incidence of pests." Distressed systems Many historic oases have struggled with drought and inadequate maintenance. According to a United Nations report on the future of oases in the Sahara and Sahel, "Increasingly... oases are subject to various pressures, heavily influenced by the effects of climate change, decreasing groundwater levels and a gradual loss of cultural heritage due to a fading historical memory concerning traditional water management techniques. These natural pressures are compounded by demographic pressures and the introduction of modern water pumping techniques that can disrupt traditional resource management schemes, particularly in the North Saharan oases." For example, five historic oases in the Western Desert of Egypt (Kharga, Dakhla, Farafra, Baharyia, and Siwa) once had "flowing spring and wells", but, due to the decline of groundwater heads caused by overuse for land reclamation projects, those water sources are no more and the oases suffer as a result. Morocco has lost two-thirds of its oasis habitat over the last 100 years due to heat, drought, and water scarcity. The Ferkla Oases in Morocco once drew on water from the Ferkla, Sat and Tangarfa Rivers, but these are now dry except for a few days a year. List of places called oases New World dryland systems with oasis-like attributes Huacachina, Peru Quitobaquito, Organ Pipe Cactus National Monument, Arizona Kitowok, Sonora, Mexico Fish Springs National Wildlife Refuge in Utah, United States Havasu Falls, Grand Canyon, Arizona Zzyzx in Mojave National Preserve, California Cuatro Ciénegas basin, Chihuahuan Desert, Mexico Oasis Spring Ecological Reserve, Salton Sea, California Gallery of oases Practical matters A 1920 USGS publication about watering holes in the deserts of California and Arizona offered practical advice for travelers seeking oases. See also – the world's largest irrigation project; developed in Libya to connect cities with fossil water. Lençóis Maranhenses National Park (Brazil) Great Green Wall (disambiguation) Aflaj Irrigation Systems of Oman Palmeral of Elche Fog oasis (South America) References Bibliography External links Lacustrine landforms Waystations
Adaptationism
Adaptationism is a scientific perspective on evolution that focuses on accounting for the products of evolution as collections of adaptive traits, each a product of natural selection with some adaptive rationale or raison d'être. A formal alternative would be to look at the products of evolution as the result of neutral evolution, in terms of structural constraints, or in terms of a mixture of factors including (but not limited to) natural selection. The most obvious justification for an adaptationist perspective is the belief that traits are, in fact, always adaptations built by natural selection for their functional role. This position is called "empirical adaptationism" by Godfrey-Smith. However, Godfrey-Smith also identifies "methodological" and "explanatory" flavors of adaptationism, and argues that all three are found in the evolutionary literature. Although adaptationism has always existed (the view that the features of organisms are wonderfully adapted predates evolutionary thinking) and was sometimes criticized for its "Panglossian" excesses (e.g., by Bateson or Haldane), concerns about the role of adaptationism in scientific research did not become a major issue of debate until evolutionary biologists Stephen Jay Gould and Richard Lewontin penned a famous critique, "The Spandrels of San Marco and the Panglossian Paradigm". According to Gould and Lewontin, evolutionary biologists had a habit of proposing adaptive explanations for any trait by default without considering non-adaptive alternatives, and often by conflating products of adaptation with the process of natural selection. They identified neutral evolution and developmental constraints as potentially important non-adaptive factors and called for alternative research agendas. This critique provoked defenses by Mayr, Reeve and Sherman, and others, who argued that the adaptationist research program was unquestionably highly successful, and that the causal and methodological basis for considering alternatives was weak. The "Spandrels paper" (as it came to be known) also added fuel to the emergence of an alternative "evo-devo" agenda focused on developmental "constraints". Today, molecular evolutionists often cite neutral evolution as the null hypothesis in evolutionary studies, i.e., offering a direct contrast to the adaptationist approach. Constructive neutral evolution has been suggested as a means by which complex systems emerge through neutral transitions, and has been invoked to help understand the origins of a wide variety of features from the spliceosome of eukaryotes to the interdependency and simplification widespread in microbial communities. Today, adaptationism is associated with the "reverse engineering" approach. Richard Dawkins noted in The Blind Watchmaker that evolution, an impersonal process, produces organisms that give the appearance of having been designed for a purpose. This observation justifies looking for the function of traits observed in biological organisms. This reverse engineering is used in disciplines such as psychology and economics to explain the features of human cognition. Reverse engineering can, in particular, help explain cognitive biases as adaptive solutions that assist individuals in decision-making when considering constraints such as the cost of processing information. This approach is valuable in understanding how seemingly irrational behaviors may, in fact, be optimal given the environmental and informational limitations under which human cognition operates.
Overview Criteria to identify a trait as an adaptation Adaptationism is an approach to studying the evolution of form and function. It attempts to frame the existence and persistence of traits, assuming that each of them arose independently and improved the reproductive success of the organism's ancestors. A trait is an adaptation if it fulfils the following criteria: The trait is a variation of an earlier form. The trait is heritable through the transmission of genes. The trait enhances reproductive success. Constraints on the power of evolution Genetic constraints Genetic reality places constraints on the power of random mutation followed by natural selection. With pleiotropy, some genes control multiple traits, so that adaptation of one trait is impeded by effects on other traits that are not necessarily adaptive. Epistasis is the case where the regulation or expression of one gene depends on one or several others; this is true for a good number of genes, though to differing extents. Epistasis muddies the response to selection because selecting for an epistatically based trait means that the selected allele also happens to affect other traits, leading to their coregulation for reasons other than any adaptive quality of their own. As with pleiotropy, traits can therefore reach fixation in a population as a by-product of selection for another. In the context of development the difference between pleiotropy and epistasis is not so clear, but at the genetic level the distinction is clearer. Because such traits are by-products of others, it can be said that they evolved, but not that they necessarily represent adaptations. Polygenic traits are controlled by a number of separate genes. Many traits are polygenic, for example, human height. To drastically change a polygenic trait is likely to require multiple changes. Anatomical constraints Anatomical constraints are features of an organism's anatomy whose modification is prevented or limited in some way. When organisms diverge from a common ancestor and inherit certain characteristics which become modified by natural selection of mutant phenotypes, it is as if some traits are locked in place and are unable to change in certain ways. Textbook examples of anatomical constraints often involve structures that connect parts of the body together through a physical link. These links are hard if not impossible to break because evolution usually requires that anatomy be formed by small consecutive modifications in populations through generations. In his book, Why We Get Sick, Randolph Nesse uses the "blind spot" in the vertebrate eye (caused by the nerve fibers running through the retina) as an example of this. He argues that natural selection has come up with an elaborate work-around, the eyes wobbling back and forth, to correct for this, but vertebrates have not found the solution embodied in cephalopod eyes, where the optic nerve does not interrupt the view. See also: Evolution of the eye. Another example is the cranial nerves in tetrapods. In early vertebrates such as sharks, skates, and rays (collectively Chondrichthyes), the cranial nerves run from the part of the brain that interprets sensory information and radiate out towards the organs that produce those sensations.
In tetrapods, however, and mammals in particular, the nerves take an elaborate winding path through the cranium around structures that evolved after the common ancestor with sharks. Debate with structuralism Adaptationism is sometimes characterized by critics as an unsubstantiated assumption that all or most traits are optimal adaptations. Structuralist critics (most notably Richard Lewontin and Stephen Jay Gould in their "spandrel" paper) contend that the adaptationists have overemphasized the power of natural selection to shape individual traits to an evolutionary optimum. Adaptationists are sometimes accused by their critics of using ad hoc "just-so stories". The critics, in turn, have been accused of misrepresentation (Straw man argumentation), rather than attacking the actual statements of supposed adaptationists. Adaptationist researchers respond by asserting that they, too, follow George Williams' depiction of adaptation as an "onerous concept" that should only be applied in light of strong evidence. This evidence can be generally characterized as the successful prediction of novel phenomena based on the hypothesis that design details of adaptations should fit a complex evolved design to respond to a specific set of selection pressures. In evolutionary psychology, researchers such as Leda Cosmides, John Tooby, and David Buss contend that the bulk of research findings that were uniquely predicted through adaptationist hypothesizing comprise evidence of the methods' validity. Purpose and function There are philosophical issues with the way biologists speak of function, effectively invoking teleology, the purpose of an adaptation. Function To say something has a function is to say something about what it does for the organism. It also says something about its history: how it has come about. A heart pumps blood: that is its function. It also emits sound, which is considered to be an ancillary side-effect, not its function. The heart has a history (which may be well or poorly understood), and that history is about how natural selection formed and maintained the heart as a pump. Every aspect of an organism that has a function has a history. Now, an adaptation must have a functional history: therefore we expect it must have undergone selection caused by relative survival in its habitat. It would be quite wrong to use the word adaptation about a trait which arose as a by-product. Teleology Teleology was introduced into biology by Aristotle to describe the adaptedness of organisms. Biologists have found the implications of purposefulness awkward as they suggest supernatural intention, an aspect of Plato's thinking which Aristotle rejected. A similar term, teleonomy, grew out of cybernetics and self-organising systems and was used by biologists of the 1960s such as Ernst Mayr and George C. Williams as a less loaded alternative. On the one hand, adaptation is obviously purposeful: natural selection chooses what works and eliminates what does not. On the other hand, biologists want to deny conscious purpose in evolution. The dilemma gave rise to a famous joke by the evolutionary biologist Haldane: "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public.'" David Hull commented that Haldane's mistress "has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it 'teleonomy'." 
See also Adaptive evolution in the human genome Beneficial acclimation hypothesis Constructive neutral evolution Evolutionary failure Exaptation Gene-centered view of evolution Neutral theory of molecular evolution Vitalism References Sources External links Information from "Deep Ethology" course website, by Neil Greenberg Tooby & Cosmides comments on Maynard Smith's New York Review of Books piece on Gould et al. Evolutionary biology Modern synthesis (20th century)
Structural linguistics
Structural linguistics, or structuralism, in linguistics, denotes schools or theories in which language is conceived as a self-contained, self-regulating semiotic system whose elements are defined by their relationship to other elements within the system. It is derived from the work of Swiss linguist Ferdinand de Saussure and is part of the overall approach of structuralism. Saussure's Course in General Linguistics, published posthumously in 1916, stressed examining language as a dynamic system of interconnected units. Saussure is also known for introducing several basic dimensions of semiotic analysis that are still important today. Two of these are his key methods of syntagmatic and paradigmatic analysis, which define units syntactically and lexically, respectively, according to their contrast with the other units in the system. Other key features of structuralism are the focus on systematic phenomena, the primacy of an idealized form over actual speech data, the priority of linguistic form over meaning, the marginalization of written language, and the connection of linguistic structure to broader social, behavioral, or cognitive phenomena. Structuralism as a term, however, was not used by Saussure, who called the approach semiology. The term structuralism is derived from sociologist Émile Durkheim's anti-Darwinian modification of Herbert Spencer's organic analogy which draws a parallel between social structures and the organs of an organism which have different functions or purposes. Similar analogies and metaphors were used in the historical-comparative linguistics that Saussure was part of. Saussure himself made a modification of August Schleicher's language–species analogy, based on William Dwight Whitney's critical writings, to turn focus to the internal elements of the language organism, or system. Nonetheless, structural linguistics became mainly associated with Saussure's notion of language as a dual interactive system of symbols and concepts. The term structuralism was adopted to linguistics after Saussure's death by the Prague school linguists Roman Jakobson and Nikolai Trubetzkoy; while the term structural linguistics was coined by Louis Hjelmslev. History Structural linguistics begins with the posthumous publication of Ferdinand de Saussure's Course in General Linguistics in 1916, which his students compiled from his lectures. The book proved to be highly influential, providing the foundation for both modern linguistics and semiotics. Structuralist linguistics is often thought of as giving rise to independent European and American traditions due to ambiguity in the term. It is most commonly thought that structural linguistics stems from Saussure's writings; but these were rejected by an American school of linguistics based on Wilhelm Wundt's structural psychology. Key Features John E. Joseph identifies several defining features of structuralism that emerged in the decade and a half following World War I: Systematic Phenomena and Synchronic Dimension: Structural linguistics focuses on studying language as a system (langue) rather than individual utterances (parole), emphasizing the synchronic dimension. Even attempts to study parole often incorporate elements into the sphere of langue. Primacy of Langue over Parole: Structuralists believe that the virtual system of langue, despite being indirectly observable and reconstructed through parole, is more fundamental and "real" than actual utterances. 
Priority of Form over Meaning: There is a general priority of linguistic form over meaning, continuing the Neogrammarians' tradition, although some exceptions exist, such as in Firth's work. Marginalization of Written Language: Written language is often viewed as a secondary representation of spoken language, though this view varies among different structuralist approaches. Connection to Social, Behavioral, or Cognitive Aspects: Structuralists are ready to link the structure of langue to broader phenomena beyond language, including social, behavioral, and psycho-cognitive aspects. European structuralism In Europe, Saussure influenced: (1) the Geneva School of Albert Sechehaye and Charles Bally, (2) the Prague linguistic circle, (3) the Copenhagen School of Louis Hjelmslev, (4) the Paris School of André Martinet and Algirdas Julien Greimas, and the Dutch school of Simon Dik. Structural linguistics also had an influence on other disciplines of humanities bringing about the movement known as structuralism. 'American structuralism', or American descriptivism Some confusion is caused by the fact that an American school of linguistics of 1910s through 1950s, which was based on structural psychology, (especially Wilhelm Wundt's Völkerpsychologie); and later on behavioural psychology, is sometimes nicknamed 'American structuralism'. This framework was not structuralist in the Saussurean sense that it did not consider language as arising from the interaction of meaning and expression. Instead, it was thought that the civilised human mind is organised into binary branching structures. Advocates of this type of structuralism are identified from their use of 'philosophical grammar' with its convention of placing the object, but not the subject, into the verb phrase; whereby the structure is disconnected from semantics in sharp contrast to Saussurean structuralism. This American school is alternatively called distributionalism, 'American descriptivism', or the 'Bloomfieldian' school – or 'post-Bloomfieldian', following the death of its leader Leonard Bloomfield in 1949. Nevertheless, Wundt's ideas had already been imported from Germany to American humanities by Franz Boas before him, influencing linguists such as Edward Sapir. Bloomfield named his psychological approach descriptive or philosophical–descriptive; as opposed to the historical–comparative study of languages. Structural linguists like Hjelmslev considered his work fragmentary because it eluded a full account of language. The concept of autonomy is also different: while structural linguists consider semiology (the bilateral sign system) separate from physiology, American descriptivists argued for the autonomy of syntax from semantics. All in all, there were unsolvable incompatibilities between the psychological and positivistic orientation of the Bloomfieldian school, and the semiotic orientation of the structuralists proper. In the generative or Chomskyan concept, a purported rejection of 'structuralism' usually refers to Noam Chomsky's opposition to the behaviourism of Bloomfield's 1933 textbook Language; though, coincidentally, he is also opposed to structuralism proper. Basic theories and methods The foundation of structural linguistics is a sign, which in turn has two components: a "signified" is an idea or concept, while the "signifier" is a means of expressing the signified. The "sign", e.g. a word, is thus the combined association of signifier and signified. 
The value of a sign can be defined only by being placed in contrast with other signs. This forms the basis of what later became the paradigmatic dimension of semiotic organization (i.e., terms and inventories of terms that stand in opposition to each other). It contrasts sharply with the idea that linguistic structures can be examined in isolation from meaning, or that the organisation of the conceptual system can exist without a corresponding organisation of the signifying system. Paradigmatic relations hold among sets of units, such as the set distinguished phonologically by variation in their initial sound (cat, bat, hat, mat, fat) or the morphologically distinguished set ran, run, running. The units of a set must have something in common with one another, but they must also contrast; otherwise they could not be distinguished from each other and would collapse into a single unit, which could not constitute a set on its own, since a set always consists of more than one unit. Syntagmatic relations, in contrast, are concerned with how units, once selected from their paradigmatic sets of oppositions, are 'chained' together into structural wholes. Syntagmatic and paradigmatic relations provide the structural linguist with a tool for categorization in phonology, morphology and syntax. Take morphology, for example. The signs cat and cats are associated in the mind, producing an abstract paradigm of the word forms of cat. Comparing this with other paradigms of word forms, we can note that, in English, the plural often consists of little more than adding an -s to the end of the word. Likewise, through paradigmatic and syntagmatic analysis, we can discover the syntax of sentences. For instance, contrasting the syntagmas je dois ("I should") and dois-je? ("Should I?") allows us to realize that in French we only have to invert the units to turn a statement into a question. We thus take syntagmatic evidence (differences in structural configurations) as indicators of paradigmatic relations (e.g., in the present case: questions vs. assertions). The most detailed account of the relationship between a paradigmatic organisation of language as a motivator and classifier for syntagmatic configurations was provided by Louis Hjelmslev in his Prolegomena to a Theory of Language, giving rise to formal linguistics. Hjelmslev's model was subsequently incorporated into systemic functional grammar, functional discourse grammar, and Danish functional grammar. Structural explanation In structuralism, elements of a language are explained in relation to each other. For example, to understand the function of one grammatical case, it must be contrasted with all the other cases and, more widely, with all other grammatical categories of the language. The structural approach in the humanities follows from 19th-century Geist thinking, which is derived from Georg Wilhelm Friedrich Hegel's philosophy. According to such theories, society or language arises as the collective psyche of a community; and this psyche is sometimes described as an 'organism'. In sociology, Émile Durkheim made a humanistic modification of Herbert Spencer's organic analogy. Durkheim, following Spencer's theory, compared society to an organism which has structures (organs) that carry out different functions. For Durkheim, the structural explanation of society is that population growth, acting through organic solidarity (rather than, as Spencer held, through self-interested conduct), leads to increasing complexity and diversity in a community and thereby creates a society.
The structuralist reference became essential when linguistic 'structuralism' was established by the Prague linguistic circle after Saussure's death, following a shift from structural to functional explanation in the social anthropology of Alfred Radcliffe-Brown and Bronisław Malinowski. Saussure himself had actually used a modification of August Schleicher's Darwinian organic analogy in linguistics; his concept of la langue is the social organism or spirit. It needs to be noted that, despite certain similarities, structuralism and functionalism in humanistic linguistics are explicitly anti-Darwinian. This means that linguistic structures are not explained in terms of selection through competition; and that the biological metaphor is not to be taken literally. What is more, Saussure abandoned evolutionary linguistics altogether and, instead, defined synchronic analysis as the study of the language system; and diachronic analysis as the study of language change. With such precaution, structural explanation of language is analogous to structuralism in biology which explains structures in relation with material factors or substance. In Saussure's explanation, structure follows from systemic consequences of the association of meaning and expression. This can be contrasted with functional explanation which explains linguistic structure in relation to the "adaptation" of language to the community's communicative needs. Hjelmslev's elaboration of Saussure's structural explanation is that language arises from the structuring of content and expression. He argues that the nature of language could only be understood via the typological study of linguistic structures. In Hjelmslev's interpretation, there are no physical, psychological or other a priori principles that explain why languages are the way they are. Cross-linguistic similarities on the expression plane depend on a necessity to express meaning; conversely, cross-linguistic similarities on the content plane depend on the necessity to structure meaning potential according to the necessities of expression."The linguist must be equally interested in the similarity and in the difference between languages, two complementary sides of the same thing. The similarity between languages is their very structural principle; the difference between languages is the carrying out of that principle in concreto. Both the similarity and the difference between languages lie, then, in language and in languages themselves, in their internal structure; and no similarity or difference between languages rests on any factor outside language." – Louis Hjelmslev Compositional and combinatorial language According to André Martinet's concept of double articulation, language is a double-levelled or doubly articulated system. In this context, 'articulation' means 'joining'. The first level of articulation involves minimally meaningful units (monemes: words or morphemes), while the second level consists of minimally distinct non-signifying units (phonemes). Owing to double articulation, it is possible to construct all necessary words of a language with a couple dozen phonic units. Meaning is associated with combinations of the non-meaningful units. 
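The claim that a couple of dozen phonic units suffice to build all the words of a language can be illustrated with back-of-the-envelope arithmetic. The sketch below is only illustrative: the 30-phoneme inventory and the maximum word length of five phonemes are assumed round numbers, not figures for any particular language.

```python
# Back-of-the-envelope illustration of double articulation (illustrative numbers
# only): a small phoneme inventory yields vastly more possible short phoneme
# strings than any realistic stock of holistic, unarticulated signals.
phonemes = 30            # assumed inventory size, a typical order of magnitude
max_length = 5           # consider "words" of up to five phonemes

possible_forms = sum(phonemes ** n for n in range(1, max_length + 1))
print(f"{possible_forms:,} distinct forms of length 1-{max_length}")  # 25,137,930
```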
The organisation of language into hierarchical inventories makes highly complex and therefore highly useful language possible: "We might imagine a system of communication in which a special cry would correspond to each given situations and these facts of experience, it will be clear that if such a system were to serve the same purpose as our languages, it would have to comprise so large a number of distinct signs that the memory of man would be incapable of storing it. A few thousand of such units as tête, mal, ai, la, freely combinable, enable us to communicate more things than could be done by millions of unarticulated cries." – André Martinet Louis Hjelmslev's conception includes even more levels: phoneme, morpheme, lexeme, phrase, sentence and discourse. Building on the smallest meaningful and non-meaningful elements, glossemes, it is possible to generate an infinite number of productions: "When we compare the inventories yielded at the various stages of the deduction, their size will usually turn out to decrease as the procedure goes on. If the text is unrestricted, i.e., capable of being prolonged through constant addition of further parts … it will be possible to register an unrestricted number of sentences." – Louis Hjelmslev These notions are a continuation in a humanistic tradition which considers language as a human invention. A similar idea is found in Port-Royal Grammar: "It remains for us to examine the spiritual element of speech ... this marvelous invention of composing from twenty-five or thirty sounds an infinite variety of words, which, although not having any resemblance in themselves to that which passes through our minds, nevertheless do not fail to reveal to others all of the secrets of the mind, and to make intelligible to others who cannot penetrate into the mind all that we conceive and all of the diverse movements of our souls." – Antoine Arnauld Interaction of meaning and form Another way to approach structural explanation is from Saussure's concept of semiology (semiotics). Language is considered as arising from the interaction of form and meaning. Saussure's concept of the bilateral sign (signifier – signified) entails that the conceptual system is distinct from physical reality. For example, the spoken sign 'cat' is an association between the combination of the sounds [k], [æ] and [t] and the concept of a cat, rather than with its referent (an actual cat). Each item in the conceptual inventory is associated with an expression; and these two levels define, organise and restrict each other. Key concepts of the organisation of the phonemic versus the semantic system are those of opposition and distinctiveness. Each phoneme is distinct from other phonemes of the phonological system of a given language. The concepts of distinctiveness and markedness were successfully used by the Prague Linguistic Circle to explain the phonemic organisation of languages, laying a ground for modern phonology as the study of the sound systems of languages, also borrowing from Wilhelm von Humboldt. Likewise, each concept is distinct from all others in the conceptual system, and is defined in opposition with other concepts. Louis Hjelmslev laid the foundation of structural semantics with his idea that the content-level of language has a structure analogous to the level of expression. Structural explanation in the sense of how language shapes our understanding of the world has been widely used by the post-structuralists. 
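The notions of opposition and distinctiveness mentioned above can also be given a toy computational form: if phonemes are represented as bundles of binary features, the features on which two phonemes disagree are exactly what keeps them distinct. The miniature feature inventory below is an assumed illustration, not a phonological analysis of English.

```python
# Toy illustration of opposition/distinctiveness: phonemes as bundles of binary
# features; two phonemes are kept apart by the features on which they disagree.
# The feature set is an assumed miniature, not a complete analysis.
PHONEMES = {
    "p": {"voiced": False, "nasal": False, "labial": True},
    "b": {"voiced": True,  "nasal": False, "labial": True},
    "m": {"voiced": True,  "nasal": True,  "labial": True},
    "t": {"voiced": False, "nasal": False, "labial": False},
}

def opposition(a: str, b: str):
    """Return the features that distinguish phoneme a from phoneme b."""
    fa, fb = PHONEMES[a], PHONEMES[b]
    return {feat for feat in fa if fa[feat] != fb[feat]}

print(opposition("p", "b"))   # {'voiced'}  (a minimal opposition)
print(opposition("p", "m"))   # {'voiced', 'nasal'}
print(opposition("p", "t"))   # {'labial'}
```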
Structural linguist Lucien Tesnière, who invented dependency grammar, considered the relationship between meaning and form as conflicting due to a mathematical difference in how syntactic and semantic structure is organised. He used his concept of antinomy between syntax and semantics to elucidate the concept of a language as a solution to the communication problem. From his perspective, the two-dimensional semantic dependency structure is necessarily forced into one-dimensional (linear) form. This causes the meaningful semantic arrangement to break into a largely arbitrary word ordering. Scientific validity Saussure's model of language emergence, the speech circuit, entails that la langue (language itself) is external to the brain and is received via la parole (language usage). While Saussure mostly employed interactive models, the speech circuit suggests that the brain is shaped by language, but language is not shaped by the brain except to the extent that the interactive association of meaning and form occurs ultimately in the brain. Such ideas roughly correspond to the idea of language that arises from neuroimaging studies. ERP studies have found that language processing is based on the interaction of syntax and semantics rather than on innate grammatical structures. MRI studies have found that the child's brain is shaped differently depending on the structural characteristics of their first language. By contrast, research evidence has failed to support the inverse idea that syntactic structures reflect the way the brain naturally prefers to process syntactic structures. It is argued that Functional Grammar, deriving from Saussure, is compatible with the view of language that arises from brain research and from the cross-linguistic study of linguistic structures. Recent perceptions of structuralism Those working in the generativist tradition often regard structuralist approaches as outdated and superseded. For example, Mitchell Marcus writes that structural linguistics was "fundamentally inadequate to process the full range of natural language". Holland writes that Chomsky had "decisively refuted Saussure". Similar views have been expressed by Jan Koster, Mark Turner, and other advocates of sociobiology. Others however stress the continuing importance of Saussure's thought and structuralist approaches. Gilbert Lazard has dismissed the Chomskyan approach as passé while applauding a return to Saussurean structuralism as the only course by which linguistics can become more scientific. Matthews notes the existence of many "linguists who are structuralists by many of the definitions that have been proposed, but who would themselves vigorously deny that they are anything of the kind", suggesting a persistence of the structuralist paradigm. Effect of structuralist linguistics upon other disciplines In the 1950s Saussure's ideas were appropriated by several prominent figures in continental philosophy, anthropology, and from there were borrowed in literary theory, where they are used to interpret novels and other texts. However, several critics have charged that Saussure's ideas have been misunderstood or deliberately distorted by continental philosophers and literary theorists and are certainly not directly applicable to the textual level, which Saussure himself would have firmly placed within parole and so not amenable to his theoretical constructs. Modern guidebooks of structural (formal and functional) analysis Roland Schäfer, 2016. 
Einführung in die grammatische Beschreibung des Deutschen (2nd ed.). Berlin: Language Science Press. Emma Pavey, 2010. The Structure of Language: An Introduction to Grammatical Analysis. Cambridge University Press. Kees Hengeveld & Lachlan MacKenzie, 2008. Functional Discourse Grammar: A Typologically-Based Theory of Language Structure. Oxford University Press. M.A.K. Halliday, 2004. An Introduction to Functional Grammar. 3rd edition, revised by Christian Matthiessen. London: Hodder Arnold. See also Theory of language Notes References External links Structural linguistics by Nasrullah Mambrol Key theories of Ferdinand de Saussure Key theories of Louis Hjelmslev Key theories of Emile Benveniste Key concepts of A. J. Greimas Institut Ferdinand de Saussure Revue Texto! Prague linguistic circle Structuralism Linguistic theories and hypotheses Systems theory
Isogamy
Isogamy is a form of sexual reproduction that involves gametes of the same morphology (indistinguishable in shape and size), and is found in most unicellular eukaryotes. Because both gametes look alike, they generally cannot be classified as male or female. Instead, organisms that reproduce through isogamy are said to have different mating types, most commonly noted as "+" and "−" strains. Etymology The etymology of isogamy derives from the Greek adjective isos (meaning equal) and the Greek verb gameo (meaning to have sex/to reproduce), eventually meaning "equal reproduction" which refers to a hypothetical initial model of equal contribution of resources by both gametes to a zygote in contrast to a later evolutional stage of anisogamy. The term isogamy was first used in the year 1891. Characteristics of isogamous species Isogamous species often have two mating types (heterothallism), but sometimes can occur between two haploid individuals that are mitotic descendents (homothallism). Some isogamous species have more than two mating types, but the number is usually lower than ten. In some extremely rare cases, such as in some basidiomycete species, a species can have thousands of mating types. Under the strict definition of isogamy, fertilization occurs when two gametes fuse to form a zygote. Sexual reproduction between two cells that does not involve gametes (e.g. conjugation between two mycelia in basidiomycete fungi), is often called isogamy, although it is not technically isogametic reproduction in the strict sense. Evolution It is generally accepted that isogamy is an ancestral state for anisogamy and that isogamy was the first stage in the evolution of sexual reproduction. Isogamous reproduction evolved independently in several lineages of plants and animals to anisogamous species with gametes of male and female types and subsequently to oogamous species in which the female gamete is much larger than the male and has no ability to move. This pattern may have been driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Isogamy is the norm in unicellular eukaryote species, although it is possible that isogamy is evolutionarily stable in multicellular species. Occurrence Almost all unicellular eukaryotes are isogamous. Among multicellular organisms, isogamy is restricted to fungi and eukaryotic algae. Many species of green algae are isogamous. It is typical in the genera Ulva, Hydrodictyon, Tetraspora, Zygnema, Spirogyra, Ulothrix, and Chlamydomonas. Many fungi are also isogamous, including single-celled species such as Saccharomyces cerevisiae and Schizosaccharomyces pombe. In some multicellular fungi, such as basidiomycetes, sexual reproduction takes place between two mycelia, but there is no exchange of gametes. There are no known examples of isogamous metazoans, red algae or land plants. See also Biology Anisogamy Evolution of sexual reproduction Gamete Mating in fungi Meiosis Oogamy Sex Social anthropology Hypergamy Hypogamy Notes References Reproduction Germ cells Charophyta
The Chemical Basis of Morphogenesis
"The Chemical Basis of Morphogenesis" is an article that the English mathematician Alan Turing wrote in 1952. It describes how patterns in nature, such as stripes and spirals, can arise naturally from a homogeneous, uniform state. The theory, which can be called a reaction–diffusion theory of morphogenesis, has become a basic model in theoretical biology. Such patterns have come to be known as Turing patterns. For example, it has been postulated that the protein VEGFC can form Turing patterns to govern the formation of lymphatic vessels in the zebrafish embryo. Reaction–diffusion systems Reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. Patterns such as fronts, spirals, targets, hexagons, stripes and dissipative solitons are found in various types of reaction-diffusion systems in spite of large discrepancies e.g. in the local reaction terms. Such patterns have been dubbed "Turing patterns". Reaction–diffusion processes form one class of explanation for the embryonic development of animal coats and skin pigmentation. Another reason for the interest in reaction-diffusion systems is that although they represent nonlinear partial differential equations, there are often possibilities for an analytical treatment. See also Evolutionary developmental biology Turing pattern Symmetry breaking References 1952 in England 1952 documents Mathematical modeling Parabolic partial differential equations Biological processes Chaos theory Works by English people Alan Turing Mathematics papers 1952 in biology
Discourse analysis
Discourse analysis (DA), or discourse studies, is an approach to the analysis of written, spoken, or sign language, including any significant semiotic event. The objects of discourse analysis (discourse, writing, conversation, communicative event) are variously defined in terms of coherent sequences of sentences, propositions, speech, or turns-at-talk. Contrary to much of traditional linguistics, discourse analysts not only study language use 'beyond the sentence boundary' but also prefer to analyze 'naturally occurring' language use, not invented examples. Text linguistics is a closely related field. The essential difference between discourse analysis and text linguistics is that discourse analysis aims at revealing socio-psychological characteristics of a person/persons rather than text structure. Discourse analysis has been taken up in a variety of disciplines in the humanities and social sciences, including linguistics, education, sociology, anthropology, social work, cognitive psychology, social psychology, area studies, cultural studies, international relations, human geography, environmental science, communication studies, biblical studies, public relations, argumentation studies, and translation studies, each of which is subject to its own assumptions, dimensions of analysis, and methodologies. History Early use of the term The ancient Greeks (among others) had much to say on discourse; however, there is ongoing discussion about whether Austria-born Leo Spitzer's Stilstudien (Style Studies) of 1928 is the earliest example of discourse analysis (DA). Michel Foucault translated it into French. However, the term first came into general use following the publication of a series of papers by Zellig Harris from 1952 reporting on work from which he developed transformational grammar in the late 1930s. Formally equivalent relations among the sentences of a coherent discourse are made explicit by using sentence transformations to put the text in a canonical form. Words and sentences with equivalent information then appear in the same column of an array. This work progressed over the next four decades (see references) into a science of sublanguage analysis (Kittredge & Lehrberger 1982), culminating in a demonstration of the informational structures in texts of a sublanguage of science, that of immunology (Harris et al. 1989), and a fully articulated theory of linguistic informational content (Harris 1991). During this time, however, most linguists ignored such developments in favor of a succession of elaborate theories of sentence-level syntax and semantics. In January 1953, a linguist working for the American Bible Society, James A. Lauriault (alt. Loriot), needed to find answers to some fundamental errors in translating Quechua, in the Cuzco area of Peru. Following Harris's 1952 publications, he worked over the meaning and placement of each word in a collection of Quechua legends with a native speaker of Quechua and was able to formulate discourse rules that transcended the simple sentence structure. He then applied the process to Shipibo, another language of Eastern Peru. He taught the theory at the Summer Institute of Linguistics in Norman, Oklahoma, in the summers of 1956 and 1957 and entered the University of Pennsylvania to study with Harris in the interim year. He tried to publish a paper,Shipibo Paragraph Structure, but it was delayed until 1970 (Loriot & Hollenbach 1970). 
In the meantime, Kenneth Lee Pike, a professor at the University of Michigan, taught the theory, and one of his students, Robert E. Longacre, developed it in his writings. Harris's methodology disclosing the correlation of form with meaning was developed into a system for the computer-aided analysis of natural language by a team led by Naomi Sager at NYU, which has been applied to a number of sublanguage domains, most notably to medical informatics. The software for the Medical Language Processor is publicly available on SourceForge. In the humanities In the late 1960s and 1970s, and without reference to this prior work, a variety of other approaches to a new cross-discipline of DA began to develop in most of the humanities and social sciences concurrently with, and related to, other disciplines. These include semiotics, psycholinguistics, sociolinguistics, and pragmatics. Many of these approaches, especially those influenced by the social sciences, favor a more dynamic study of oral talk-in-interaction. An example is "conversational analysis" (CA), which was influenced by the sociologist Harold Garfinkel, the founder of ethnomethodology. Foucault In Europe, Michel Foucault became one of the key theorists of the subject, especially of discourse, and wrote The Archaeology of Knowledge. In this context, the term 'discourse' no longer refers to formal linguistic aspects, but to institutionalized patterns of knowledge that become manifest in disciplinary structures and operate by the connection of knowledge and power. Since the 1970s, Foucault's works have had an increasing impact, especially on discourse analysis in the field of social sciences. Thus, in modern European social sciences, one can find a wide range of different approaches working with Foucault's definition of discourse and his theoretical concepts. Apart from the original context in France, there has been, since 2005, a broad discussion on socio-scientific discourse analysis in Germany. Here, for example, the sociologist Reiner Keller developed his widely recognized 'sociology of knowledge approach to discourse' (SKAD). Following the sociology of knowledge by Peter L. Berger and Thomas Luckmann, Keller argues that our sense of reality in everyday life and thus the meaning of every object, action and event is the product of a permanent, routinized interaction. In this context, SKAD has been developed as a scientific perspective that is able to understand the processes of 'The Social Construction of Reality' on all levels of social life by combining the prementioned Michel Foucault's theories of discourse and power while also introducing the theory of knowledge by Berger/Luckmann. Whereas the latter primarily focus on the constitution and stabilization of knowledge on the level of interaction, Foucault's perspective concentrates on institutional contexts of the production and integration of knowledge, where the subject mainly appears to be determined by knowledge and power. Therefore, the 'Sociology of Knowledge Approach to Discourse' can also be seen as an approach to deal with the vividly discussed micro–macro problem in sociology. Perspectives The following are some of the specific theoretical perspectives and analytical approaches used in linguistic discourse analysis: Applied linguistics, an interdisciplinary perspective on linguistic analysis Cognitive neuroscience of discourse comprehension Cognitive psychology, studying the production and comprehension of discourse. 
Conversation analysis Critical discourse analysis Discursive psychology Emergent grammar Ethnography of communication Functional grammar Interactional sociolinguistics Mediated stylistics Pragmatics Response based therapy (counselling) Rhetoric Stylistics (linguistics) Sublanguage analysis Tagmemics Text linguistics Variation analysis Although these approaches emphasize different aspects of language use, they all view language as social interaction and are concerned with the social contexts in which discourse is embedded. Often a distinction is made between 'local' structures of discourse (such as relations among sentences, propositions, and turns) and 'global' structures, such as overall topics and the schematic organization of discourses and conversations. For instance, many types of discourse begin with some kind of global 'summary', in titles, headlines, leads, abstracts, and so on. A problem for the discourse analyst is to decide when a particular feature is relevant to the specification required. A question many linguists ask is: "Are there general principles which will determine the relevance or nature of the specification?" Topics of interest Topics of discourse analysis include: The various levels or dimensions of discourse, such as sounds (intonation, etc.), gestures, syntax, the lexicon, style, rhetoric, meanings, speech acts, moves, strategies, turns, and other aspects of interaction Genres of discourse (various types of discourse in politics, the media, education, science, business, etc.) The relations between discourse and the emergence of syntactic structure The relations between text (discourse) and context The relations between discourse and power The relations between discourse and interaction The relations between discourse and cognition and memory Lexical density Prominent academics Marc Angenot Johannes Angermuller Mikhail Bakhtin Roland Barthes Émile Benveniste Jean-Paul Benzécri Jan Blommaert Georges Canguilhem Teun van Dijk Oswald Ducrot Norman Fairclough Michel Foucault Heidi E. Hamilton Roman Jakobson Barbara Johnstone Dominique Maingueneau Sinfree Makoni Damon Mayaffre Michel Pêcheux Jonathan Potter Paul Ricœur Georges-Elia Sarfati Ferdinand de Saussure Deborah Schiffrin Deborah Tannen Margaret Wetherell Ruth Wodak Political discourse Political discourse is the text and talk of professional politicians or political institutions, such as presidents and prime ministers and other members of government, parliament or political parties, both at the local, national and international levels, includes both the speaker and the audience. Political discourse analysis is a field of discourse analysis which focuses on discourse in political forums (such as debates, speeches, and hearings) as the phenomenon of interest. Policy analysis requires discourse analysis to be effective from the post-positivist perspective. Political discourse is the formal exchange of reasoned views as to which of several alternative courses of action should be taken to solve a societal problem. Corporate discourse Corporate discourse can be broadly defined as the language used by corporations. It encompasses a set of messages that a corporation sends out to the world (the general public, the customers and other corporations) and the messages it uses to communicate within its own structures (the employees and other stakeholders). 
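One of the topics listed above, lexical density, admits a crude computational definition: the share of content words among all tokens. The sketch below is a simplified illustration; the small function-word list is an assumed stand-in, and a real analysis would use a part-of-speech tagger to identify content words.

```python
# Crude lexical-density estimate: share of content words among all tokens.
# The function-word list is a small, hypothetical stand-in; real discourse
# analysis would rely on a part-of-speech tagger instead.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
                  "on", "is", "are", "was", "were", "it", "that", "this"}

def lexical_density(text: str) -> float:
    tokens = [t.strip(".,;:!?\"'()").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens) if tokens else 0.0

print(round(lexical_density("The cat sat on the mat and it was happy."), 2))  # 0.4
```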
See also Actor (policy debate) Critical discourse analysis Dialogical analysis Discourse representation theory Frame analysis Communicative action Essex School of discourse analysis Ethnolinguistics Foucauldian discourse analysis Interpersonal communication Linguistic anthropology Narrative analysis Pragmatics Rhetoric Sociolinguistics Statement analysis Stylistics (linguistics) Worldview References External links DiscourseNet. International Association for Discourse Studies The Discourse Attributes Analysis Program and Measures of the Referential Process . Linguistic Society of America: Discourse Analysis, by Deborah Tannen Discourse Analysis by Z. Harris Daniel L. Everett, Documenting Languages: The View from the Brazilian Amazon Statement concerning James Loriot, p. 9 A discourse analysis related international conference You can find some information and events related to Metadiscourse Across Genres by visiting MAG 2017 website Systemic functional linguistics Applied linguistics Sociolinguistics Translation studies Postmodernism Postmodern theory
Continuum mechanics
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century. Continuum mechanics deals with deformable bodies, as opposed to rigid bodies. A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships. Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics. Concept of a continuum The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus. Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties. Major areas An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties. Formulation of models Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. 
The region corresponding to the body's configuration at time is labeled . A particular particle within the body in a particular configuration is characterized by a position vector where are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position in some reference configuration, for example the configuration at the initial time, so that This function needs to have various properties so that the model makes physical sense. needs to be: continuous in time, so that the body changes in a way which is realistic, globally invertible at all times, so that the body cannot intersect itself, orientation-preserving, as transformations which produce mirror reflections are not possible in nature. For the mathematical formulation of the model, is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated. Forces in a continuum A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces. Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces and body forces . Thus, the total force applied to a body or to a portion of the body can be expressed as: Surface forces Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion of conservation of linear momentum and angular momentum (for continuous bodies these laws are called the Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup. The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field that represents this distribution in a particular configuration of the body at a given time . It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector . Any differential area with normal vector of a given internal surface area , bounding a portion of the body, experiences a contact force arising from the contact between both portions of the body on each side of , and it is given by where is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle). 
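The dependence of the stress vector on the surface normal is usually expressed through Cauchy's stress theorem, t = σ·n: the traction is a linear function of the unit normal through the Cauchy stress tensor (which appears further below in the balance laws). A minimal numerical sketch follows; the stress components, units and the chosen cutting plane are arbitrary, for illustration only.

```python
# Illustration of Cauchy's stress theorem, t = sigma . n: the traction on a
# surface element depends linearly on the unit normal n through the (symmetric)
# Cauchy stress tensor sigma. Numbers are arbitrary, for illustration only.
import numpy as np

sigma = np.array([[ 50.0, 10.0,  0.0],      # assumed stress state [MPa]
                  [ 10.0, 20.0,  5.0],
                  [  0.0,  5.0, -30.0]])

n = np.array([1.0, 1.0, 0.0])
n /= np.linalg.norm(n)                       # unit normal of the cutting plane

t = sigma @ n                                # traction (stress) vector on that plane
sigma_n = t @ n                              # normal component
tau = np.linalg.norm(t - sigma_n * n)        # shear component

print("traction:", t, "normal:", round(sigma_n, 2), "shear:", round(tau, 2))
```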
The total contact force on the particular internal surface is then expressed as the sum (surface integral) of the contact forces on all differential surfaces : In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress. Body forces Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) are manifested through the contact forces alone. These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density (per unit of mass), which is a frame-indifferent vector field. In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density of the material, and it is specified in terms of force per unit mass or per unit volume. These two specifications are related through the material density by the equation . Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field. The total body force applied to a continuous body is expressed as Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque about the origin is given by In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals. Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials. 
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by Kinematics: motion and deformation A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 2). The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line. There is continuity during motion or deformation of a continuum body in the sense that: The material points forming a closed curve at any instant will always form a closed curve at any subsequent time. The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within. It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at is considered the reference configuration, . The components of the position vector of a particle, taken with respect to the reference configuration, are called the material or reference coordinates. When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description. Lagrangian description In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at . An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, . This description is normally used in solid mechanics. In the Lagrangian description, the motion of a continuum body is expressed by the mapping function (Figure 2), which is a mapping of the initial configuration onto the current configuration , giving a geometrical correspondence between them, i.e. giving the position vector that a particle , with a position vector in the undeformed or reference configuration , will occupy in the current or deformed configuration at time . The components are called the spatial coordinates. Physical and kinematic properties , i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. . The material derivative of any property of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. 
The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought as the rate at which the property changes when measured by an observer traveling with that group of particles. In the Lagrangian description, the material derivative of is simply the partial derivative with respect to time, and the position vector is held constant as it does not change with time. Thus, we have The instantaneous position is a property of a particle, and its material derivative is the instantaneous flow velocity of the particle. Therefore, the flow velocity field of the continuum is given by Similarly, the acceleration field is given by Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the function and are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third. Eulerian description Continuity allows for the inverse of to trace backwards where the particle currently located at was located in the initial or referenced configuration . In this case the description of motion is made in terms of the spatial coordinates, in which case is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration. The Eulerian description, introduced by d'Alembert, focuses on the current configuration , giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time. Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function which provides a tracing of the particle which now occupies the position in the current configuration to its original position in the initial configuration . A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus, In the Eulerian description, the physical properties are expressed as where the functional form of in the Lagrangian description is not the same as the form of in the Eulerian description. The material derivative of , using the chain rule, is then The first term on the right-hand side of this equation gives the local rate of change of the property occurring at position . The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion). Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position . 
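The material derivative in the Eulerian description, Dφ/Dt = ∂φ/∂t + v·∇φ, can be checked numerically in one dimension for a case where the exact answer is known: a property that is simply carried along by a uniform flow has zero material derivative. The grid spacing, time step, flow speed and Gaussian profile in the sketch below are all assumed values.

```python
# Numerical check of the material derivative in the Eulerian description:
# D(phi)/Dt = d(phi)/dt + v * d(phi)/dx.  For a property that is purely advected
# by a uniform flow, phi(x, t) = f(x - v t), the material derivative vanishes.
import numpy as np

v, dt, dx = 2.0, 1e-4, 1e-3
x = np.arange(0.0, 1.0, dx)

f = lambda s: np.exp(-((s - 0.5) ** 2) / 0.01)     # assumed advected profile
phi_now = f(x - v * 0.1)                            # phi at t = 0.1
phi_next = f(x - v * (0.1 + dt))                    # phi at t = 0.1 + dt

dphi_dt = (phi_next - phi_now) / dt                 # local (Eulerian) rate of change
dphi_dx = np.gradient(phi_now, dx)                  # spatial gradient
material_derivative = dphi_dt + v * dphi_dx         # convective term added

# Small compared with either term on its own (only truncation error remains).
print("max |D(phi)/Dt| ~", np.abs(material_derivative).max())
```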
Displacement field The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector , in the Lagrangian description, or , in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field, In general, the displacement field is expressed in terms of the material coordinates as or in terms of the spatial coordinates as where are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus and the relationship between and is then given by Knowing that then It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in , and the direction cosines become Kronecker deltas, i.e. Thus, we have or in terms of the spatial coordinates as Governing equations Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied. The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes: the physical quantity itself flows through the surface that bounds the volume, there is a source of the physical quantity on the surface of the volume, or/and, there is a source of the physical quantity inside the volume. Let be the body (an open subset of Euclidean space) and let be its surface (the boundary of ). Let the motion of material points in the body be described by the map where is the position of a point in the initial configuration and is the location of the same point in the deformed configuration. The deformation gradient is given by Balance laws Let be a physical quantity that is flowing through the body. Let be sources on the surface of the body and let be sources inside the body. Let be the outward unit normal to the surface . Let be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface is moving be (in the direction ). Then, balance laws can be expressed in the general form The functions , , and can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws. 
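Before the specific Eulerian and Lagrangian forms given next, it may help to see the simplest of these balances, conservation of mass, worked out for a homogeneous deformation: with x = F·X and constant F, the deformation gradient is F itself and the Lagrangian mass balance reduces to ρ0 = ρ J, with J = det F. The sketch below checks this bookkeeping for made-up stretch values and an assumed reference density.

```python
# For the homogeneous motion x = F X with constant F, the deformation gradient is
# F itself, and conservation of mass in Lagrangian form reads rho_0 = rho * J,
# with J = det F.  Stretch values and the reference density are arbitrary.
import numpy as np

F = np.diag([1.2, 0.9, 1.0])        # assumed principal stretches of a homogeneous deformation
J = np.linalg.det(F)                # volume ratio; J > 0 preserves orientation

rho_0 = 1000.0                      # assumed reference mass density [kg/m^3]
rho = rho_0 / J                     # current density implied by mass conservation

print("J =", round(J, 3))                    # 1.08
print("rho =", round(rho, 2), "kg/m^3")      # ~925.93
print("check rho * J == rho_0:", np.isclose(rho * J, rho_0))
```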
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations) In the above equations is the mass density (current), is the material time derivative of , is the particle velocity, is the material time derivative of , is the Cauchy stress tensor, is the body force density, is the internal energy per unit mass, is the material time derivative of , is the heat flux vector, and is an energy source per unit mass. The operators used are defined below. With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as In the above, is the first Piola-Kirchhoff stress tensor, and is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by We can alternatively define the nominal stress tensor which is the transpose of the first Piola-Kirchhoff stress tensor such that Then the balance laws become Operators The operators in the above equations are defined as where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the current configuration. Also, where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the reference configuration. The inner product is defined as Clausius–Duhem inequality The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved. Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density and an internal specific entropy (i.e. entropy per unit mass) in the region of interest. Let be such a region and let be its boundary. Then the second law of thermodynamics states that the rate of increase of in this region is greater than or equal to the sum of that supplied to (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region. Let move with a flow velocity and let particles inside have velocities . Let be the unit outward normal to the surface . Let be the density of matter in the region, be the entropy flux at the surface, and be the entropy source per unit mass. Then the entropy inequality may be written as The scalar entropy flux can be related to the vector flux at the surface by the relation . Under the assumption of incrementally isothermal conditions, we have where is the heat flux vector, is an energy source per unit mass, and is the absolute temperature of a material point at at time . We then have the Clausius–Duhem inequality in integral form: We can show that the entropy inequality may be written in differential form as In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as Validity The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. 
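For reference, the local Eulerian forms of the balance laws and the Clausius–Duhem inequality discussed above, whose displayed versions appear to be missing, are conventionally written as follows. This is a sketch of the textbook forms, assuming zero mass source and the symbols defined in the passage (ρ density, v velocity, σ Cauchy stress, b body force, e internal energy, q heat flux, s energy source, η specific entropy, T absolute temperature):

```latex
% Local (Eulerian) balance of mass, linear momentum, angular momentum and
% energy, with a superposed dot denoting the material time derivative:
\[ \dot{\rho} + \rho \, \nabla\cdot\mathbf{v} = 0 \]
\[ \rho \, \dot{\mathbf{v}} = \nabla\cdot\sigma + \rho \, \mathbf{b} \]
\[ \sigma = \sigma^{\mathsf{T}} \]
\[ \rho \, \dot{e} = \sigma : \nabla\mathbf{v} - \nabla\cdot\mathbf{q} + \rho \, s \]

% Clausius--Duhem inequality in differential (local) form:
\[ \rho \, \dot{\eta} \;\geq\; \rho \, \frac{s}{T}
   - \nabla\cdot\!\left( \frac{\mathbf{q}}{T} \right) \]
```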
More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure. When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogenous. Applications Continuum mechanics Solid mechanics Fluid mechanics Engineering Civil engineering Mechanical engineering Aerospace engineering Biomedical engineering Chemical engineering See also Transport phenomena Bernoulli's principle Cauchy elastic material Configurational mechanics Curvilinear coordinates Equation of state Finite deformation tensors Finite strain theory Hyperelastic material Lagrangian and Eulerian specification of the flow field Movable cellular automaton Peridynamics (a non-local continuum theory leading to integral equations) Stress (physics) Stress measures Tensor calculus Tensor derivative (continuum mechanics) Theory of elasticity Knudsen number Explanatory notes References Citations Works cited General references External links "Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity" by Gilles Leborgne, April 7, 2021: "Part IV Velocity-addition formula and Objectivity" Classical mechanics
Regeneration (ecology)
In ecology, regeneration is the ability of an ecosystem (specifically, the environment and its living population) to renew and recover from damage. It is a kind of biological regeneration. Regeneration refers to ecosystems replenishing what is being eaten, disturbed, or harvested. The principal driver of regeneration is photosynthesis, which transforms solar energy and nutrients into plant biomass. Resilience to minor disturbances is one characteristic feature of healthy ecosystems. Following major (lethal) disturbances, such as a fire or pest outbreak in a forest, an immediate return to the previous dynamic equilibrium is not possible. Instead, pioneer species occupy, compete for space, and establish themselves in the newly opened habitat. This new growth of seedlings and the accompanying community assembly process is known as regeneration in ecology. As ecological succession sets in, a forest will slowly regenerate towards its former state within the succession (climax or any intermediate stage), provided that all outer parameters (climate, soil fertility, availability of nutrients, animal migration paths, air pollution or the absence thereof, etc.) remain unchanged. In certain regions, such as Australia, natural wildfire is a necessary condition for a cyclically stable ecosystem with cyclic regeneration. Artificial disturbances While natural disturbances are usually fully compensated by the rules of ecological succession, human interference can alter the regenerative, homeostatic faculties of an ecosystem to a degree at which self-healing is no longer possible. For regeneration to occur in such cases, active restoration must be attempted. See also Bush regeneration Biocapacity Ecological stability Ecoscaping Forest ecology Net Primary Productivity Pioneer species Regenerative design Regenerative agriculture Soil regeneration References Literature Ecosystems Biological systems Superorganisms Systems ecology
Structural geology
Structural geology is the study of the three-dimensional distribution of rock units with respect to their deformational histories. The primary goal of structural geology is to use measurements of present-day rock geometries to uncover information about the history of deformation (strain) in the rocks, and ultimately, to understand the stress field that resulted in the observed strain and geometries. This understanding of the dynamics of the stress field can be linked to important events in the geologic past; a common goal is to understand the structural evolution of a particular area with respect to regionally widespread patterns of rock deformation (e.g., mountain building, rifting) due to plate tectonics. Use and importance The study of geologic structures has been of prime importance in economic geology, both petroleum geology and mining geology. Folded and faulted rock strata commonly form traps that accumulate and concentrate fluids such as petroleum and natural gas. Similarly, faulted and structurally complex areas are notable as permeable zones for hydrothermal fluids, resulting in concentrated areas of base and precious metal ore deposits. Veins of minerals containing various metals commonly occupy faults and fractures in structurally complex areas. These structurally fractured and faulted zones often occur in association with intrusive igneous rocks. They often also occur around geologic reef complexes and collapse features such as ancient sinkholes. Deposits of gold, silver, copper, lead, zinc, and other metals, are commonly located in structurally complex areas. Structural geology is a critical part of engineering geology, which is concerned with the physical and mechanical properties of natural rocks. Structural fabrics and defects such as faults, folds, foliations and joints are internal weaknesses of rocks which may affect the stability of human engineered structures such as dams, road cuts, open pit mines and underground mines or road tunnels. Geotechnical risk, including earthquake risk can only be investigated by inspecting a combination of structural geology and geomorphology. In addition, areas of karst landscapes which reside atop caverns, potential sinkholes, or other collapse features are of particular importance for these scientists. In addition, areas of steep slopes are potential collapse or landslide hazards. Environmental geologists and hydrogeologists need to apply the tenets of structural geology to understand how geologic sites impact (or are impacted by) groundwater flow and penetration. For instance, a hydrogeologist may need to determine if seepage of toxic substances from waste dumps is occurring in a residential area or if salty water is seeping into an aquifer. Plate tectonics is a theory developed during the 1960s which describes the movement of continents by way of the separation and collision of crustal plates. It is in a sense structural geology on a planet scale, and is used throughout structural geology as a framework to analyze and understand global, regional, and local scale features. Methods Structural geologists use a variety of methods to (first) measure rock geometries, (second) reconstruct their deformational histories, and (third) estimate the stress field that resulted in that deformation. Geometries Primary data sets for structural geology are collected in the field. 
Structural geologists measure a variety of planar features (bedding planes, foliation planes, fold axial planes, fault planes, and joints), and linear features (stretching lineations, in which minerals are ductilely extended; fold axes; and intersection lineations, the trace of a planar feature on another planar surface). Measurement conventions The inclination of a planar structure in geology is measured by strike and dip. The strike is the line of intersection between the planar feature and a horizontal plane, taken according to the right hand convention, and the dip is the magnitude of the inclination, below horizontal, at right angles to strike. For example; striking 25 degrees East of North, dipping 45 degrees Southeast, recorded as N25E,45SE. Alternatively, dip and dip direction may be used as this is absolute. Dip direction is measured in 360 degrees, generally clockwise from North. For example, a dip of 45 degrees towards 115 degrees azimuth, recorded as 45/115. Note that this is the same as above. The term hade is occasionally used and is the deviation of a plane from vertical i.e. (90°-dip). Fold axis plunge is measured in dip and dip direction (strictly, plunge and azimuth of plunge). The orientation of a fold axial plane is measured in strike and dip or dip and dip direction. Lineations are measured in terms of dip and dip direction, if possible. Often lineations occur expressed on a planar surface and can be difficult to measure directly. In this case, the lineation may be measured from the horizontal as a rake or pitch upon the surface. Rake is measured by placing a protractor flat on the planar surface, with the flat edge horizontal and measuring the angle of the lineation clockwise from horizontal. The orientation of the lineation can then be calculated from the rake and strike-dip information of the plane it was measured from, using a stereographic projection. If a fault has lineations formed by movement on the plane, e.g.; slickensides, this is recorded as a lineation, with a rake, and annotated as to the indication of throw on the fault. Generally it is easier to record strike and dip information of planar structures in dip/dip direction format as this will match all the other structural information you may be recording about folds, lineations, etc., although there is an advantage to using different formats that discriminate between planar and linear data. Plane, fabric, fold and deformation conventions The convention for analysing structural geology is to identify the planar structures, often called planar fabrics because this implies a textural formation, the linear structures and, from analysis of these, unravel deformations. Planar structures are named according to their order of formation, with original sedimentary layering the lowest at S0. Often it is impossible to identify S0 in highly deformed rocks, so numbering may be started at an arbitrary number or given a letter (SA, for instance). In cases where there is a bedding-plane foliation caused by burial metamorphism or diagenesis this may be enumerated as S0a. If there are folds, these are numbered as F1, F2, etc. Generally the axial plane foliation or cleavage of a fold is created during folding, and the number convention should match. For example, an F2 fold should have an S2 axial foliation. Deformations are numbered according to their order of formation with the letter D denoting a deformation event. For example, D1, D2, D3. 
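The conversion described above, from a rake measured on a plane (with right-hand-rule strike and dip) to the plunge and trend of the lineation, can be sketched numerically rather than graphically. The function name and the exact rake convention assumed here (0° along strike, 90° directly down dip) are illustrative assumptions, not a standard library routine:

```python
import math

def lineation_from_rake(strike_deg, dip_deg, rake_deg):
    """Convert a rake measured on a plane (right-hand-rule strike/dip)
    into the plunge and trend of the lineation.

    Rake is assumed to be measured within the plane, from the strike
    direction (0 deg) down towards the dip direction (90 deg)."""
    phi = math.radians(strike_deg)   # strike azimuth
    delta = math.radians(dip_deg)    # dip below horizontal
    r = math.radians(rake_deg)       # rake (pitch) on the plane

    # Coordinates: x = north, y = east, z = down. The lineation is a
    # combination of the unit strike vector and the unit down-dip vector.
    n = math.cos(r) * math.cos(phi) - math.sin(r) * math.cos(delta) * math.sin(phi)
    e = math.cos(r) * math.sin(phi) + math.sin(r) * math.cos(delta) * math.cos(phi)
    d = math.sin(r) * math.sin(delta)

    plunge = math.degrees(math.asin(d))
    trend = math.degrees(math.atan2(e, n)) % 360.0
    return plunge, trend

# A slickenside lineation raking 60 deg on the plane from the example above,
# striking N25E and dipping 45 SE (45/115 in dip/dip-direction format):
print(lineation_from_rake(25.0, 45.0, 60.0))
```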
Folds and foliations, because they are formed by deformation events, should correlate with these events. For example, an F2 fold, with an S2 axial plane foliation would be the result of a D2 deformation. Metamorphic events may span multiple deformations. Sometimes it is useful to identify them similarly to the structural features for which they are responsible, e.g.; M2. This may be possible by observing porphyroblast formation in cleavages of known deformation age, by identifying metamorphic mineral assemblages created by different events, or via geochronology. Intersection lineations in rocks, as they are the product of the intersection of two planar structures, are named according to the two planar structures from which they are formed. For instance, the intersection lineation of a S1 cleavage and bedding is the L1-0 intersection lineation (also known as the cleavage-bedding lineation). Stretching lineations may be difficult to quantify, especially in highly stretched ductile rocks where minimal foliation information is preserved. Where possible, when correlated with deformations (as few are formed in folds, and many are not strictly associated with planar foliations), they may be identified similar to planar surfaces and folds, e.g.; L1, L2. For convenience some geologists prefer to annotate them with a subscript S, for example Ls1 to differentiate them from intersection lineations, though this is generally redundant. Stereographic projections Stereographic projection is a method for analyzing the nature and orientation of deformation stresses, lithological units and penetrative fabrics wherein linear and planar features (structural strike and dip readings, typically taken using a compass clinometer) passing through an imagined sphere are plotted on a two-dimensional grid projection, facilitating more holistic analysis of a set of measurements. Stereonet developed by Richard W. Allmendinger is widely used in the structural geology community. Rock macro-structures On a large scale, structural geology is the study of the three-dimensional interaction and relationships of stratigraphic units within terranes of rock or geological regions. This branch of structural geology deals mainly with the orientation, deformation and relationships of stratigraphy (bedding), which may have been faulted, folded or given a foliation by some tectonic event. This is mainly a geometric science, from which cross sections and three-dimensional block models of rocks, regions, terranes and parts of the Earth's crust can be generated. Study of regional structure is important in understanding orogeny, plate tectonics and more specifically in the oil, gas and mineral exploration industries as structures such as faults, folds and unconformities are primary controls on ore mineralisation and oil traps. Modern regional structure is being investigated using seismic tomography and seismic reflection in three dimensions, providing unrivaled images of the Earth's interior, its faults and the deep crust. Further information from geophysics such as gravity and airborne magnetics can provide information on the nature of rocks imaged to be in the deep crust. Rock microstructures Rock microstructure or texture of rocks is studied by structural geologists on a small scale to provide detailed information mainly about metamorphic rocks and some features of sedimentary rocks, most often if they have been folded. 
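To illustrate the stereographic treatment mentioned above, the sketch below computes the pole (normal) to a plane and its lower-hemisphere equal-area (Schmidt net) plotting coordinates. The function names and the unit net radius are assumptions; in practice, software such as Allmendinger's Stereonet would normally be used:

```python
import math

def pole_to_plane(dip_deg, dip_dir_deg):
    """Pole (normal) to a plane given in dip / dip-direction format."""
    plunge = 90.0 - dip_deg
    trend = (dip_dir_deg + 180.0) % 360.0
    return plunge, trend

def equal_area_projection(plunge_deg, trend_deg, net_radius=1.0):
    """Lower-hemisphere equal-area (Schmidt net) coordinates of a line.
    Returns (x, y) with x pointing east and y pointing north."""
    p = math.radians(plunge_deg)
    t = math.radians(trend_deg)
    # Radial distance scaled so a horizontal line plots on the primitive circle.
    r = net_radius * math.sqrt(2.0) * math.sin((math.pi / 2.0 - p) / 2.0)
    return r * math.sin(t), r * math.cos(t)

# Plot-ready coordinates for the pole to a plane dipping 45 deg towards 115:
plunge, trend = pole_to_plane(45.0, 115.0)
print(equal_area_projection(plunge, trend))
```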
Textural study involves measurement and characterisation of foliations, crenulations, metamorphic minerals, and timing relationships between these structural features and mineralogical features. Usually this involves collection of hand specimens, which may be cut to provide petrographic thin sections which are analysed under a petrographic microscope. Microstructural analysis finds application also in multi-scale statistical analysis, aimed to analyze some rock features showing scale invariance. Kinematics Geologists use rock geometry measurements to understand the history of strain in rocks. Strain can take the form of brittle faulting and ductile folding and shearing. Brittle deformation takes place in the shallow crust, and ductile deformation takes place in the deeper crust, where temperatures and pressures are higher. Stress fields By understanding the constitutive relationships between stress and strain in rocks, geologists can translate the observed patterns of rock deformation into a stress field during the geologic past. The following list of features are typically used to determine stress fields from deformational structures. In perfectly brittle rocks, faulting occurs at 30° to the greatest compressional stress. (Byerlee's Law) The greatest compressive stress is normal to fold axial planes. Modeling For economic geology such as petroleum and mineral development, as well as research, modeling of structural geology is becoming increasingly important. 2D and 3D models of structural systems such as anticlines, synclines, fold and thrust belts, and other features can help better understand the evolution of a structure through time. Without modeling or interpretation of the subsurface, geologists are limited to their knowledge of the surface geological mapping. If only reliant on the surface geology, major economic potential could be missed by overlooking the structural and tectonic history of the area. Characterization of the mechanical properties of rock The mechanical properties of rock play a vital role in the structures that form during deformation deep below the earth's crust. The conditions in which a rock is present will result in different structures that geologists observe above ground in the field. The field of structural geology tries to relate the formations that humans see to the changes the rock went through to get to that final structure. Knowing the conditions of deformation that lead to such structures can illuminate the history of the deformation of the rock. Temperature and pressure play a huge role in the deformation of rock. At the conditions under the earth's crust of extreme high temperature and pressure, rocks are ductile. They can bend, fold or break. Other vital conditions that contribute to the formation of structure of rock under the earth are the stress and strain fields. Stress-strain curve Stress is a pressure, defined as a directional force over area. When a rock is subjected to stresses, it changes shape. When the stress is released, the rock may or may not return to its original shape. That change in shape is quantified by strain, the change in length over the original length of the material in one dimension. Stress induces strain which ultimately results in a changed structure. Elastic deformation refers to a reversible deformation. In other words, when stress on the rock is released, the rock returns to its original shape. Reversible, linear, elasticity involves the stretching, compressing, or distortion of atomic bonds. 
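The reversible, linear elasticity described here is modelled with Hooke's law, which the following paragraph introduces symbolically. A minimal numerical sketch, with illustrative values that are assumptions rather than data from the text:

```python
# Illustrative values only: a roughly granite-like elastic modulus and a small
# recoverable strain.
E = 50e9        # elastic modulus, Pa
strain = 2e-4   # dimensionless (change in length / original length)

stress = E * strain                    # Hooke's law: sigma = E * epsilon
energy_density = 0.5 * E * strain**2   # elastic strain energy per unit volume, J/m^3

print(f"stress = {stress / 1e6:.1f} MPa, "
      f"stored elastic energy = {energy_density:.1f} J/m^3")
```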
Because there is no breaking of bonds, the material springs back when the force is released. This type of deformation is modeled using a linear relationship between stress and strain, i.e. a Hookean relationship. Where σ denotes stress, denotes strain, and E is the elastic modulus, which is material dependent. The elastic modulus is, in effect, a measure of the strength of atomic bonds. Plastic deformation refers to non-reversible deformation. The relationship between stress and strain for permanent deformation is nonlinear. Stress has caused permanent change of shape in the material by involving the breaking of bonds. One mechanism of plastic deformation is the movement of dislocations by an applied stress. Because rocks are essentially aggregates of minerals, we can think of them as poly-crystalline materials. Dislocations are a type of crystallographic defect which consists of an extra or missing half plane of atoms in the periodic array of atoms that make up a crystal lattice. Dislocations are present in all real crystallographic materials. Hardness Hardness is difficult to quantify. It is a measure of resistance to deformation, specifically permanent deformation. There is precedent for hardness as a surface quality, a measure of the abrasiveness or surface-scratching resistance of a material. If the material being tested, however, is uniform in composition and structure, then the surface of the material is only a few atomic layers thick, and measurements are of the bulk material. Thus, simple surface measurements yield information about the bulk properties. Ways to measure hardness include: Mohs Scale Dorry abrasion test Deval abrasion test Indentation hardness Indentation hardness is used often in metallurgy and materials science and can be thought of as resistance to penetration by an indenter. Toughness Toughness can be described best by a material's resistance to cracking. During plastic deformation, a material absorbs energy until fracture occurs. The area under the stress-strain curve is the work required to fracture the material. The toughness modulus is defined as: Where is the ultimate tensile strength, and is the strain at failure. The modulus is the maximum amount of energy per unit volume a material can absorb without fracturing. From the equation for modulus, for large toughness, high strength and high ductility are needed. These two properties are usually mutually exclusive. Brittle materials have low toughness because low plastic deformation decreases the strain (low ductility). Ways to measure toughness include: Page impact machine and Charpy impact test. Resilience Resilience is a measure of the elastic energy absorbed of a material under stress. In other words, the external work performed on a material during deformation. The area under the elastic portion of the stress-strain curve is the strain energy absorbed per unit volume. The resilience modulus is defined as: where is the yield strength of the material and E is the elastic modulus of the material. To increase resilience, one needs increased elastic yield strength and decreased modulus of elasticity. See also Crenulation List of rock textures Section restoration Stereographic projection Tectonophysics Vergence (geology) Hydrogeology References Further reading
VUCA
VUCA is an acronym based on the leadership theories of Warren Bennis and Burt Nanus, to describe or to reflect on the volatility, uncertainty, complexity and ambiguity of general conditions and situations. The U.S. Army War College introduced the concept of VUCA in 1987, to describe a more complex multilateral world perceived as resulting from the end of the Cold War. More frequent use and discussion of the term began from 2002. It has subsequently spread to strategic leadership in organizations, from for-profit corporations to education. Meaning The VUCA framework provides a lens through which organizations can interpret their challenges and opportunities. It emphasizes strategic foresight, insight, and the behavior of entities within organizations. Furthermore, it highlights both systemic and behavioral failures often associated with organizational missteps. V = Volatility: Characterizes the rapid and unpredictable nature of change. U = Uncertainty: Denotes the unpredictability of events and issues. C = Complexity: Describes the intertwined forces and issues, making cause-and-effect relationships unclear. A = Ambiguity: Points to the unclear realities and potential misunderstandings stemming from mixed messages. These elements articulate how organizations perceive their current and potential challenges. They establish the parameters for planning and policy-making. Interacting in various ways, they can either complicate decision-making or enhance the ability to strategize, plan, and progress. Essentially, VUCA lays the groundwork for effective management and leadership. The VUCA framework is a conceptual tool that underscores the conditions and challenges organizations face when making decisions, planning, managing risks, driving change, and solving problems. It primarily shapes an organization's ability to: Anticipate the key issues that emerge. Understand the repercussions of particular issues and actions. Appreciate how variables interrelate. Prepare for diverse scenarios and challenges. Interpret and tackle pertinent opportunities. VUCA serves as a guideline for fostering awareness and preparedness in various sectors, including business, the military, education, and government. It provides a roadmap for organizations to develop strategies for readiness, foresight, adaptation, and proactive intervention. Themes VUCA, as a system of thought, revolves around an idea expressed by Andrew Porteous: "Failure in itself may not be a catastrophe. Still, failure to learn from failure is." This perspective underlines the significance of resilience and adaptability in leadership. It suggests that beyond mere competencies, it is behavioural nuances, like the ability to learn from failures and adapt, that distinguish exceptional leaders from average ones. Leaders using VUCA as a guide often see change not just as inevitable but as something to anticipate. Within VUCA, several thematic areas of consideration emerge, providing a framework for introspection and evaluation: Knowledge management and sense-making: An exploration into how we organize and interpret information. Planning and readiness considerations: A reflection on our preparedness for unforeseen challenges. Process management and resource systems: A contemplation on our efficiency in resource utilization and system deployment. Functional responsiveness and impact models: Understanding our capacity to adapt to changes. Recovery systems and forward practices: An inquiry into our resilience and future-oriented strategies. 
Systemic failures: A philosophical dive into organizational vulnerabilities. Behavioural failures: Exploring the human tendencies that lead to mistakes. Within the VUCA system of thought, an organization's ability to navigate these challenges is closely tied to its foundational beliefs, values, and aspirations. Those enterprises that consider themselves prepared and resolved align their strategic approach with VUCA's principles, signaling a holistic awareness. The essence of VUCA philosophy also emphasizes the need for a deep-rooted understanding of one's environment, spanning technical, social, political, market, and economic realms. Psychometrics which measure fluid intelligence by tracking information processing when faced with unfamiliar, dynamic, and vague data can predict cognitive performance in VUCA environments. Social categorization Volatility Volatility is the V component of VUCA, which refers to the different situational social-categorizations of people due to specific traits or reactions that stand out in particular situations. When people act based on a specific situation, there is a possibility that the public categorizes them into a different group than they were in a previous situation. These people might respond differently to individual situations due to social or environmental cues. The idea that situational occurrences cause certain social categorization is known as volatility and is one of the main aspects of self-categorization theory. Sociologists use volatility to better understand the impacts of stereotypes and social categorization on the situation at hand and any external forces that may cause people to perceive others differently. Volatility is the changing dynamic of social categorization in environmental situations. The dynamic can change due to any shift in a situation, whether social, technical, biological, or anything else. Studies have been conducted, but finding the specific component that causes the change in situational social categorization has proven challenging. Two distinct components link individuals to their social identities. The first component is normative fit, which pertains to how a person aligns with the stereotypes and norms associated with their particular identity. For instance, when a Hispanic woman is cleaning the house, people often associate gender stereotypes with the situation, while her ethnicity is not a central concern. However, when this same woman eats an enchilada, ethnicity stereotypes come to the forefront, while her gender is not the focal point. The second social cue is comparative fit. This is when a specific characteristic or trait of a person is prominent in certain situations compared to others. For example, as mentioned by Bodenhausen and Peery, when there is one woman in a room full of men. She stands out, because she is the only one of her gender. However, all of the men are clumped together because they do not have any specific traits that stand out. Comparative fit shows that people categorize others based on the relative social context. In a particular situation, particular characteristics are made obvious because others around that individual do not possess that characteristic. However, in other cases, this characteristic may be the norm and would not be a key characteristic in the categorization process. People can be less critical of the same person in different scenarios. 
For example, when looking at an African American man on the street in a low-income neighborhood and the same man inside a school in a high-income neighborhood, people will be less judgmental when seeing him in school. Nothing else has changed about this man, other than his location. When individuals are spotted in certain social contexts, the basic-level categories are forgotten, and the more partial categories are brought to light. This helps to describe the problems of situational social-categorization. This also illustrates how stereotypes can shift the perspectives of those around an individual. Uncertainty Uncertainty in the VUCA framework occurs when the availability or predictability of information in events is unknown. Uncertainty often occurs in volatile environments consisting of complex unanticipated interactions. Uncertainty may occur with the intention to imply causation or correlation between the events of a social perceiver and a target. Situations where there is either a lack of information to prove why perception is in occurrence or informational availability but lack of causation, are where uncertainty is salient. The uncertainty component of the framework serves as a grey area and is compensated by the use of social categorization and/or stereotypes. Social categorization can be described as a collection of people that have no interaction but tend to share similar characteristics. People tend to engage in social categorization, especially when there is a lack of information surrounding the event. Literature suggests that default categories tend to be assumed in the absence of any clear data when referring to someone's gender or race in the essence of a discussion. Individuals often associate general references (e.g. people, they, them, a group) with the male gender, meaning people = male. This usually occurs when there is insufficient information to distinguish someone's gender clearly. For example, when discussing a written piece of information, most assume the author is male. If an author's name is unavailable (due to lack of information), it is difficult to determine the gender of the author through the context of whatever was written. People automatically label the author as male without having any prior basis of gender, thus placing the author in a social category. This social categorization happens in this example, but people will also assume someone is male if the gender is not known in many other situations as well. Social categorization occurs in the realm of not only gender, but also race. Default assumptions may be made, like in gender, to the race of an individual or a group based on prior known stereotypes. For example, race-occupation combinations such as basketball or golf players usually receive race assumptions. Without any information on the individual's race, people usually assume a basketball player is black, and a golf player is white. This is based upon stereotypes because each sport tends to be dominated by a single race. In reality, there are other races within each sport. Complexity Complexity is the C component of VUCA, which refers to the interconnectivity and interdependence of multiple parts in a system. When conducting research, complexity is a component that scholars have to keep in mind. The results of a deliberately controlled environment are unexpected because of the non-linear interaction and interdependencies within different groups and categories. 
In a sociological aspect, the VUCA framework is utilized in research to understand social perception in the real world and how that plays into social categorization and stereotypes. Galen V. Bodenhausen and Destiny Peery's article, Social Categorization and Stereotyping In vivo: The VUCA Challenge, focused on researching how social categories impacted the process of social cognition and perception. The strategy used to conduct the research is to manipulate or isolate a single identity of a target while keeping all other identities constant. This method clearly shows how a specific identity in a social category can change one's perception of other identities, thus creating stereotypes. There are problems with categorizing an individual's social identity due to the complexity of an individual's background. This research fails to address the complexity of the real world and the results from this highlighted an even greater picture of social categorization and stereotyping. Complexity adds many layers of different components to an individual's identity and creates challenges for sociologists trying to examine social categories. In the real world, people are far more complex than a modified social environment. Individuals identify with more than one social category, which opens the door to a more profound discovery about stereotyping. Results from research conducted by Bodenhausen reveal that specific identities are more dominant than others. Perceivers who recognize these distinct identities latch on to them and associate their preconceived notion of such identity and make initial assumptions about the individuals and hence stereotypes are created. Conversely, perceivers who share some identities with the target tend to be more open-minded. They consider multiple social identities simultaneously, a phenomenon known as cross-categorization effects. Some social categories are nested within larger categorical structures, making subcategories more salient to perceivers. Cross-categorization can trigger both positive and negative effects. On the positive side, perceivers become more open-minded and motivated to delve deeper into their understanding of the target, moving beyond dominant social categories. However, cross-categorization can also result in social invisibility, where some cross-over identities diminish the visibility of others, leading to "intersectional invisibility" where neither social identity stands out distinctly and is overlooked. Ambiguity Ambiguity is the A component of VUCA. This refers to when the general meaning of something is unclear even when an appropriate amount of information is provided. Many get confused about the meaning of ambiguity. It is similar to the idea of uncertainty, but they have different factors. Uncertainty is when relevant information is unavailable and unknown, and ambiguity where relevant information is available but the overall meaning is still unknown. Both uncertainty and ambiguity exist in our culture today. Sociologists use ambiguity to determine how and why an answer has been developed. Sociologists focus on details such as if there was enough information present and if the subject had the full knowledge necessary to make a decision. and why did he/she come to their specific answer. Ambiguity is considered one of the leading causes of conflict within organizations. Ambiguity often prompts individuals to make assumptions, including those related to race, gender, sexual orientation, and even class stereotypes. 
When people possess some information but lack a complete answer, they tend to generate their own conclusions based on the available relevant information. For instance, as Bodenhausen notes, we may occasionally encounter individuals who possess a degree of androgyny, making it challenging to determine their gender. In such cases, brief exposure might lead to misclassifications based on gender-atypical features, such as very long hair on a man or very short hair on a woman. Ambiguity can result in premature categorizations, potentially leading to inaccurate conclusions due to the absence of crucial details. Sociologists suggest that ambiguity can fuel racial stereotypes and discrimination. In a South African study, white participants were shown images of racially mixed faces and asked to categorize them as European or African. Since all the participants were white, they struggled to classify these mixed-race faces as European and instead labeled them as African. This difficulty arose due to the ambiguity present in the images. The only information available to the participants was the subjects' skin tone and facial features. Despite having this information, the participants still couldn't confidently determine the ethnicity because the individuals didn't precisely resemble their own racial group. Responses and revisions Levent Işıklıgöz has suggested that the C of VUCA be changed from complexity to chaos, arguing that it is more suitable according to our era. Bill George, a professor of management practice at Harvard Business School, argues that VUCA calls for a leadership response which he calls VUCA 2.0: Vision, understanding, courage and adaptability. George's response seems a minor adaptation of Bob Johansen's VUCA prime: Vision, understanding, clarity and agility German academic Ali Aslan Gümüsay adds "paradox" to the acronym, calling it VUCA + paradox or VUCAP. See also Antifragile (disambiguation) Cynefin framework Fear, uncertainty, and doubt (FUD) Global Simplicity Index Goldilocks process Innovation butterfly Software bug References Business models
Microcosm–macrocosm analogy
The microcosm–macrocosm analogy (or, equivalently, macrocosm–microcosm analogy) refers to a historical view which posited a structural similarity between the human being (the microcosm, i.e., the small order or the small universe) and the cosmos as a whole (the macrocosm, i.e., the great order or the great universe). Given this fundamental analogy, truths about the nature of the cosmos as a whole may be inferred from truths about human nature, and vice versa. One important corollary of this view is that the cosmos as a whole may be considered to be alive, and thus to have a mind or soul (the world soul), a position advanced by Plato in his Timaeus. Moreover, this cosmic mind or soul was often thought to be divine, most notably by the Stoics and those who were influenced by them, such as the authors of the Hermetica. Hence, it was sometimes inferred that the human mind or soul was divine in nature as well. Apart from this important psychological and noetic (i.e., related to the mind) application, the analogy was also applied to human physiology. For example, the cosmological functions of the seven classical planets were sometimes taken to be analogous to the physiological functions of human organs, such as the heart, the spleen, the liver, the stomach, etc. The view itself is ancient, and may be found in many philosophical systems world-wide, such as for example in ancient Mesopotamia, in ancient Iran, or in ancient Chinese philosophy. However, the terms microcosm and macrocosm refer more specifically to the analogy as it was developed in ancient Greek philosophy and its medieval and early modern descendants. In contemporary usage, the terms microcosm and macrocosm are also employed to refer to any smaller system that is representative of a larger one, and vice versa. History Antiquity Among ancient Greek and Hellenistic philosophers, notable proponents of the microcosm–macrocosm analogy included Anaximander, Plato, the Hippocratic authors (late 5th or early 4th century BCE and onwards), and the Stoics (3rd century BCE and onwards). In later periods, the analogy was especially prominent in the works of those philosophers who were heavily influenced by Platonic and Stoic thought, such as Philo of Alexandria, the authors of the early Greek Hermetica, and the Neoplatonists (3rd century CE and onwards). The analogy was also employed in late antique and early medieval religious literature, such as in the Bundahishn, a Zoroastrian encyclopedic work, and the Avot de-Rabbi Nathan, a Jewish Rabbinical text. Middle Ages Medieval philosophy was generally dominated by Aristotle, who – despite having been the first to coin the term "microcosm" – had posited a fundamental and insurmountable difference between the region below the Moon (the sublunary world, consisting of the four elements) and the region above the Moon (the superlunary world, consisting of a fifth element). Nevertheless, the microcosm–macrocosm analogy was adopted by a wide variety of medieval thinkers working in different linguistic traditions: the concept of microcosm was known in Arabic as , in Hebrew as , and in Latin as or . 
The analogy was elaborated by alchemists such as those writing under the name of Jabir ibn Hayyan, by the anonymous Shi'ite philosophers known as the Ikhwān al-Ṣafāʾ ("The Brethren of Purity", ), by Jewish theologians and philosophers such as Isaac Israeli, Saadia Gaon (882/892–942), Ibn Gabirol (11th century), and Judah Halevi, by Victorine monks such as Godfrey of Saint Victor (born 1125, author of a treatise called Microcosmus), by the Andalusian mystic Ibn Arabi (1165–1240), by the German cardinal Nicholas of Cusa (1401–1464), and by numerous others. Renaissance The revival of Hermeticism and Neoplatonism in the Renaissance, both of which had reserved a prominent place for the microcosm–macrocosm analogy, also led to a marked rise in popularity of the latter. Some of the most notable proponents of the concept in this period include Marsilio Ficino (1433 – 1499), Heinrich Cornelius Agrippa (1486–1535), Francesco Patrizi (1529–1597), Giordano Bruno (1548–1600), and Tommaso Campanella (1568–1639). It was also central to the new medical theories propounded by the Swiss physician Paracelsus (1494–1541) and his many followers, most notably Robert Fludd (1574–1637). Andreas Vesalius (1514–1564) in his anatomy text De fabrica wrote that the human body "in many respects corresponds admirably to the universe and for that reason was called the little universe by the ancients." In Judaism Analogies between microcosm and macrocosm are found throughout the history of Jewish philosophy. According to this analogy, there is a structural similarity between the human being (the microcosm, from , ) and the cosmos as a whole (the macrocosm, from ). The view was elaborated by the Jewish philosopher Philo (c. 20 BCE–50 CE), who adopted it from Hellenistic philosophy. Similar ideas can also be found in early rabbinical literature. In the Middle Ages, the analogy became a prominent theme in the works of most Jewish philosophers. Rabbinical literature In the Avot de-Rabbi Natan (compiled c. 700–900), human parts are compared with parts belonging to the larger world: the hair is like a forest, the lungs like the wind, the loins like counsellors, the stomach like a mill, etc. Middle Ages The microcosm–macrocosm analogy was a common theme among medieval Jewish philosophers, just as it was among the Arabic philosophers who were their peers. Especially influential concerning the microcosm–macrocosm analogy were the Epistles of the Brethren of Purity, an encyclopedic work written in the 10th century by an anonymous group of Shi'i Muslim philosophers. Having been brought to al-Andalus at an early date by the hadith scholar and alchemist Maslama al-Majriti of the Umayyad state of Córdoba (died 964), the Epistles were of central importance to Sephardic philosophers such as Bahya ibn Paquda (c. 1050–1120), Judah Halevi (c. 1075–1141), Joseph ibn Tzaddik (died 1149), and Abraham ibn Ezra (c. 1090–1165). Nevertheless, the analogy was already in use by earlier Jewish philosophers. In his commentary on the Sefer Yetzirah ("Book of Creation"), Saadia Gaon (882/892–942) put forward a set of analogies between the cosmos, the Tabernacle, and the human being. Saadia was followed in this by a number of later authors, such as Bahya ibn Paquda, Judah Halevi, and Abraham ibn Ezra. 
Whereas the physiological application of the analogy in the rabbinical work Avot de-Rabbi Natan had still been relatively simple and crude, much more elaborate versions of this application were given by Bahya ibn Paquda and Joseph ibn Tzaddik (in his Sefer ha-Olam ha-Katan, "Book of the Microcosm"), both of whom compared human parts with the heavenly bodies and other parts of the cosmos at large. The analogy was linked to the ancient theme of "know thyself" (Greek: γνῶθι σεαυτόν, gnōthi seauton) by the physician and philosopher Isaac Israeli (c. 832–932), who suggested that by knowing oneself, a human being may gain knowledge of all things. This theme of self-knowledge returned in the works of Joseph ibn Tzaddik, who added that in this way humans may come to know God himself. The macrocosm was also associated with the divine by Judah Halevi, who saw God as the spirit, soul, mind, and life that animates the universe, while according to Maimonides (1138–1204), the relationship between God and the universe is analogous to the relationship between the intellect and the human being. See also Notes References Bibliography General overviews The following works contain general overviews of the microcosm–macrocosm analogy: Other sources cited Ancient Greek physics Metaphysics of religion Esoteric cosmology Hermeticism Stoicism Paracelsus Philosophical analogies Jewish philosophy
Risk assessment
Risk assessment determines possible mishaps, their likelihood and consequences, and the tolerances for such events. The results of this process may be expressed in a quantitative or qualitative fashion. Risk assessment is an inherent part of a broader risk management strategy to help reduce any potential risk-related consequences. More precisely, risk assessment identifies and analyses potential (future) events that may negatively impact individuals, assets, and/or the environment (i.e. hazard analysis). It also makes judgments "on the tolerability of the risk on the basis of a risk analysis" while considering influencing factors (i.e. risk evaluation). Categories Individual risk assessment Risk assessments can be done in individual cases, including in patient and physician interactions. In the narrow sense chemical risk assessment is the assessment of a health risk in response to environmental exposures. The ways statistics are expressed and communicated to an individual, both through words and numbers impact his or her interpretation of benefit and harm. For example, a fatality rate may be interpreted as less benign than the corresponding survival rate. A systematic review of patients and doctors from 2017 found that overstatement of benefits and understatement of risks occurred more often than the alternative. A systematic review from the Cochrane collaboration suggested "well-documented decision aids" are helpful in reducing effects of such tendencies or biases. Aids may help people come to a decision about their care based on evidence informed information that align with their values. Decision aids may also help people understand the risks more clearly, and they empower people to take an active role when making medical decisions. The systematic review did not find a difference in people who regretted their decisions between those who used decision aids and those who had the usual standard treatment. An individual´s own risk perception may be affected by psychological, ideological, religious or otherwise subjective factors, which impact rationality of the process. Individuals tend to be less rational when risks and exposures concern themselves as opposed to others. There is also a tendency to underestimate risks that are voluntary or where the individual sees themselves as being in control, such as smoking. Systems risk assessment Risk assessment can also be made on a much larger systems theory scale, for example assessing the risks of an ecosystem or an interactively complex mechanical, electronic, nuclear, and biological system or a hurricane (a complex meteorological and geographical system). Systems may be defined as linear and nonlinear (or complex), where linear systems are predictable and relatively easy to understand given a change in input, and non-linear systems unpredictable when inputs are changed. As such, risk assessments of non-linear/complex systems tend to be more challenging. In the engineering of complex systems, sophisticated risk assessments are often made within safety engineering and reliability engineering when it concerns threats to life, natural environment, or machine functioning. The agriculture, nuclear, aerospace, oil, chemical, railroad, and military industries have a long history of dealing with risk assessment. Also, medical, hospital, social service, and food industries control risks and perform risk assessments on a continual basis. 
Methods for assessment of risk may differ between industries and whether it pertains to general financial decisions or environmental, ecological, or public health risk assessment. Concept Rapid technological change, increasing scale of industrial complexes, increased system integration, market competition, and other factors have been shown to increase societal risk in the past few decades. As such, risk assessments become increasingly critical in mitigating accidents, improving safety, and improving outcomes. Risk assessment consists of an objective evaluation of risk in which assumptions and uncertainties are clearly considered and presented. This involves identification of risk (what can happen and why), the potential consequences, the probability of occurrence, the tolerability or acceptability of the risk, and ways to mitigate or reduce the probability of the risk. Optimally, it also involves documentation of the risk assessment and its findings, implementation of mitigation methods, and review of the assessment (or risk management plan), coupled with updates when necessary. Sometimes risks can be deemed acceptable, meaning the risk "is understood and tolerated ... usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss." Mild versus wild risk Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and risk management must be fundamentally different for the two types of risk. Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and management are to be valid and reliable, according to Mandelbrot. Mathematical conceptualization To see the risk management process expressed mathematically, one can define expected risk as the sum over individual risks, , which can be computed as the product of potential losses, , and their probabilities, : Even though for some risks , we might have , if the probability is small compared to , its estimation might be based only on a smaller number of prior events, and hence, more uncertain. On the other hand, since , must be larger than , so decisions based on this uncertainty would be more consequential, and hence, warrant a different approach. This becomes important when we consider the variance of risk as a large changes the value. Financial decisions, such as insurance, express loss in terms of dollar amounts. When risk assessment is used for public health or environmental decisions, the loss can be quantified in a common metric such as a country's currency or some numerical measure of a location's quality of life. For public health and environmental decisions, the loss is simply a verbal description of the outcome, such as increased cancer incidence or incidence of birth defects. 
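The expected-risk sum described above, a total over individual risks computed as probability times potential loss, can be sketched numerically. The risk register below uses made-up probabilities and dollar losses purely for illustration:

```python
# Hypothetical risk register: (name, annual probability, loss in dollars).
# All values are illustrative assumptions, not data from the text.
risks = [
    ("minor equipment failure", 0.30, 20_000),
    ("major spill",             0.02, 1_500_000),
    ("site flood",              0.001, 40_000_000),
]

expected_risk = sum(p * loss for _, p, loss in risks)
print(f"expected annual loss: ${expected_risk:,.0f}")

# Rare, high-consequence items can dominate the total despite small
# probabilities, and their probability estimates rest on few prior events,
# so they carry more uncertainty.
for name, p, loss in risks:
    print(f"{name}: contribution ${p * loss:,.0f}")
```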
In that case, the "risk" is expressed as If the risk estimate takes into account information on the number of individuals exposed, it is termed a "population risk" and is in units of expected increased cases per time period. If the risk estimate does not take into account the number of individuals exposed, it is termed an "individual risk" and is in units of incidence rate per time period. Population risks are of more use for cost/benefit analysis; individual risks are of more use for evaluating whether risks to individuals are "acceptable". Quantitative risk assessment In quantitative risk assessment, an annualized loss expectancy (ALE) may be used to justify the cost of implementing countermeasures to protect an asset. This may be calculated by multiplying the single loss expectancy (SLE), which is the loss of value based on a single security incident, with the annualized rate of occurrence (ARO), which is an estimate of how often a threat would be successful in exploiting a vulnerability. The usefulness of quantitative risk assessment has been questioned, however. Barry Commoner, Brian Wynne and other critics have expressed concerns that risk assessment tends to be overly quantitative and reductive. For example, they argue that risk assessments ignore qualitative differences among risks. Some charge that assessments may drop out important non-quantifiable or inaccessible information, such as variations among the classes of people exposed to hazards, or social amplification. Furthermore, Commoner and O'Brien claim that quantitative approaches divert attention from precautionary or preventative measures. Others, like Nassim Nicholas Taleb consider risk managers little more than "blind users" of statistical tools and methods. Process Older textbooks distinguish between the term risk analysis and risk evaluation; a risk analysis includes the following 4 steps: establish the context, which restricts the range of hazards to be considered. It is also necessary to identify the potential parties or assets which may be affected by the threat, and the potential consequences to them if the hazard is activated. Hazard identification, an identification of visible and implied hazards and determining the qualitative nature of the potential adverse consequences of each hazard. Without a potential adverse consequence, there is no hazard. frequency analysis If a consequence is dependent on dose, i.e. the amount of exposure, the relationship between dose and severity of consequence must be established, and the risk depends on the probable dose, which may depend on concentration or amplitude and duration or frequency of exposure. This is the general case for many health hazards where the mechanism of injury is toxicity or repetitive injury, particularly where the effect is cumulative. consequence analysis. For other hazards, the consequences may either occur or not, and the severity may be extremely variable even when the triggering conditions are the same. This is typical of many biological hazards as well as a large range of safety hazards. Exposure to a pathogen may or may not result in actual infection, and the consequences of infection may also be variable. Similarly, a fall from the same place may result in minor injury or death, depending on unpredictable details. In these cases, estimates must be made of reasonably likely consequences and associated probability of occurrence. A risk evaluation means that judgements are made on the tolerability of the identified risks, leading to risk acceptance. 
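The annualized loss expectancy calculation from the quantitative risk assessment discussion above can be sketched as follows. The asset value, exposure factor, and occurrence rate are illustrative assumptions:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE * ARO, where the single loss expectancy (SLE) is taken here
    as the asset value multiplied by the fraction of value lost per incident."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a $200,000 asset losing 25% of its value per incident, with an
# incident expected once every four years (ARO = 0.25).
ale = annualized_loss_expectancy(200_000, 0.25, 0.25)
print(f"ALE = ${ale:,.0f} per year")
# Countermeasures costing less than this per year may be easier to justify.
```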
When risk analysis and risk evaluation are made at the same time, it is called risk assessment. As of 2023, chemical risk assessment follows these four steps: hazard characterization, exposure assessment, dose-response modeling, and risk characterization. There is tremendous variability in the dose-response relationship between a chemical and human health outcome in particularly susceptible subgroups, such as pregnant women, developing fetuses, children up to adolescence, people with low socioeconomic status, those with preexisting diseases, disabilities, genetic susceptibility, and those with other environmental exposures. The process of risk assessment may be somewhat informal at the individual social level, assessing economic and household risks, or a sophisticated process at the strategic corporate level. However, in both cases, the ability to anticipate future events and create effective strategies for mitigating them when deemed unacceptable is vital. At the individual level, identifying objectives and risks, weighing their importance, and creating plans, may be all that is necessary. At the strategic organisational level, more elaborate policies are necessary, specifying acceptable levels of risk, procedures to be followed within the organisation, priorities, and allocation of resources. At the strategic corporate level, management involved with the project produce project level risk assessments with the assistance of the available expertise as part of the planning process and set up systems to ensure that required actions to manage the assessed risk are in place. At the dynamic level, the personnel directly involved may be required to deal with unforeseen problems in real time. The tactical decisions made at this level should be reviewed after the operation to provide feedback on the effectiveness of both the planned procedures and decisions made in response to the contingency. Dose dependent risk Dose-response analysis determines the relationship between dose and the type of adverse response and/or the probability or incidence of the effect (dose-response assessment). The complexity of this step in many contexts derives mainly from the need to extrapolate results from experimental animals (e.g. mouse, rat) to humans, and/or from high to lower doses, including from high acute occupational levels to low chronic environmental levels. In addition, the differences between individuals due to genetics or other factors mean that the hazard may be higher for particular groups, called susceptible populations. An alternative to dose-response estimation is to determine a concentration unlikely to yield observable effects, that is, a no effect concentration. In developing such a dose, to account for the largely unknown effects of animal to human extrapolations, increased variability in humans, or missing data, a prudent approach is often adopted by including safety or uncertainty factors in the estimate of the "safe" dose, typically a factor of 10 for each unknown step. Exposure quantification aims to determine the amount of a contaminant (dose) that individuals and populations will receive, either as a contact level (e.g., concentration in ambient air) or as intake (e.g., daily dose ingested from drinking water). This is done by examining the results of the discipline of exposure assessment. As location, lifestyle, and other factors likely influence the amount of contaminant that is received, a range or distribution of possible values is generated in this step.
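The "factor of 10 for each unknown step" approach to a safe dose described above can be sketched as follows; the NOAEL value and the particular uncertainty factors chosen are invented for illustration and do not come from the source.

```python
# Sketch of deriving a reference dose from a no-observed-adverse-effect level
# (NOAEL) with stacked uncertainty factors; all numbers are hypothetical.
noael_mg_per_kg_day = 50.0           # NOAEL from an animal study (invented)
uncertainty_factors = {
    "animal_to_human": 10,           # interspecies extrapolation
    "human_variability": 10,         # susceptible subgroups
    "subchronic_to_chronic": 10,     # missing long-term data
}
total_uf = 1
for factor in uncertainty_factors.values():
    total_uf *= factor

reference_dose = noael_mg_per_kg_day / total_uf
print(f"Reference dose = {reference_dose:.3f} mg/kg/day (NOAEL / {total_uf})")
```

Each additional unknown multiplies the denominator, which is why "safe" doses derived this way can sit orders of magnitude below the experimentally observed no-effect level.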
Particular care is taken to determine the exposure of the susceptible population(s). The results of these steps are combined to produce an estimate of risk. Because of the different susceptibilities and exposures, this risk will vary within a population. An uncertainty analysis is usually included in a health risk assessment. Dynamic risk assessment During an emergency response, the situation and hazards are often inherently less predictable than for planned activities (non-linear). In general, if the situation and hazards are predictable (linear), standard operating procedures should deal with them adequately. In some emergencies, this may also hold true, with the preparation and trained responses being adequate to manage the situation. In these situations, the operator can manage risk without outside assistance, or with the assistance of a backup team who are prepared and available to step in at short notice. Other emergencies occur where there is no previously planned protocol, or when an outsider group is brought in to handle the situation, and they are not specifically prepared for the scenario that exists but must deal with it without undue delay. Examples include police, fire department, disaster response, and other public service rescue teams. In these cases, ongoing risk assessment by the involved personnel can advise appropriate action to reduce risk. HM Fire Services Inspectorate has defined dynamic risk assessment (DRA) as: Dynamic risk assessment is the final stage of an integrated safety management system that can provide an appropriate response during changing circumstances. It relies on experience, training and continuing education, including effective debriefing to analyse not only what went wrong, but also what went right, and why, and to share this with other members of the team and the personnel responsible for the planning level risk assessment. Fields of application The application of risk assessment procedures is common in a wide range of fields, and these may have specific legal obligations, codes of practice, and standardised procedures. Some of these are listed here. General human health There are many resources that provide human health risk information: The National Library of Medicine provides risk assessment and regulation information tools for a varied audience. These include: TOXNET (databases on hazardous chemicals, environmental health, and toxic releases), the Household Products Database (potential health effects of chemicals in over 10,000 common household products), TOXMAP (maps of the U.S. Environmental Protection Agency Superfund and Toxics Release Inventory data). The United States Environmental Protection Agency provides basic information about environmental health risk assessments for the public for a wide variety of possible environmental exposures. The Environmental Protection Agency began actively using risk assessment methods to protect drinking water in the United States after the passage of the Safe Drinking Water Act of 1974. The law required the National Academy of Sciences to conduct a study on drinking water issues, and in its report, the NAS described some methodologies for doing risk assessments for chemicals that were suspected carcinogens, recommendations that top EPA officials have described as perhaps the study's most important part. 
The FDA required in 1973 that cancer-causing compounds must not be present in meat at concentrations that would cause a cancer risk greater than 1 in a million over a lifetime. The US Environmental Protection Agency provides extensive information about ecological and environmental risk assessments for the public via its risk assessment portal. The Stockholm Convention on persistent organic pollutants (POPs) supports a qualitative risk framework for public health protection from chemicals that display environmental and biological persistence, bioaccumulation, toxicity (PBT) and long range transport; most global chemicals that meet this criterion have been previously assessed quantitatively by national and international health agencies. For non-cancer health effects, the terms reference dose (RfD) or reference concentration (RfC) are used to describe the safe level of exposure in a dichotomous fashion. A newer way of communicating risk is probabilistic risk assessment. Small sub-populations When risks apply mainly to small sub-populations, it can be difficult to determine when intervention is necessary. For example, there may be a risk that is very low for everyone, other than 0.1% of the population. It is necessary to determine whether this 0.1% is represented by, for example, all infants younger than X days or recreational users of a particular product. If the risk is higher for a particular sub-population because of abnormal exposure rather than susceptibility, strategies to further reduce the exposure of that subgroup are considered. If an identifiable sub-population is more susceptible due to inherent genetic or other factors, public policy choices must be made. The choices are either to set policies for protecting the general population that are also protective of such groups (e.g. for children when data exist, or under the Clean Air Act for populations such as asthmatics), or not to set policies, because the group is too small or the costs too high. Acceptable risk criteria Acceptable risk is a risk that is understood and tolerated usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss. The idea of not increasing lifetime risk by more than one in a million has become commonplace in public health discourse and policy. It is a heuristic measure. It provides a numerical basis for establishing a negligible increase in risk. Environmental decision making allows some discretion for deeming individual risks potentially "acceptable" if the increased lifetime risk is less than one in ten thousand. Low risk criteria such as these provide some protection for a case where individuals may be exposed to multiple chemicals, e.g. pollutants, food additives, or other chemicals. In practice, a true zero-risk is possible only with the suppression of the risk-causing activity. Stringent requirements of 1 in a million may not be technologically feasible or may be so prohibitively expensive as to render the risk-causing activity unsustainable, resulting in the optimal degree of intervention being a balance between risks and benefits. For example, emissions from hospital incinerators result in a certain number of deaths per year. However, this risk must be balanced against the alternatives. There are public health risks, as well as economic costs, associated with all options. The risk associated with no incineration is the potential spread of infectious diseases or even no hospitals.
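The one-in-a-million and one-in-ten-thousand criteria discussed above can be applied in a small worked sketch; the intake and slope-factor values below are invented and are not taken from the source or from any agency table.

```python
# Sketch comparing an estimated excess lifetime cancer risk to common
# acceptability thresholds; intake and slope factor values are hypothetical.
chronic_daily_intake = 2e-5       # mg per kg body weight per day, lifetime average
cancer_slope_factor = 0.05        # risk per (mg/kg/day), invented for illustration

excess_lifetime_risk = chronic_daily_intake * cancer_slope_factor
print(f"Excess lifetime risk = {excess_lifetime_risk:.1e}")

NEGLIGIBLE = 1e-6                 # the "one in a million" heuristic
UPPER_BOUND = 1e-4                # the "one in ten thousand" discretionary limit
if excess_lifetime_risk <= NEGLIGIBLE:
    print("Below the negligible-risk heuristic")
elif excess_lifetime_risk <= UPPER_BOUND:
    print("Within the discretionary range; risk management judgment needed")
else:
    print("Above the discretionary range; intervention expected")
```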
Further investigation of the incinerator example identifies options such as separating noninfectious from infectious wastes, or air pollution controls on a medical incinerator. Intelligent thought about a reasonably full set of options is essential. Thus, it is not unusual for there to be an iterative process between analysis, consideration of options, and follow up analysis. Public health In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations. In most countries, the use of specific chemicals or the operations of specific facilities (e.g. power plants, manufacturing plants) is not allowed unless it can be shown that they do not increase the risk of death or illness above a specific threshold. For example, the American Food and Drug Administration (FDA) regulates food safety through risk assessment, while the EFSA does the same in the EU. An occupational risk assessment is an evaluation of how much potential danger a hazard can have to a person in a workplace environment. The assessment takes into account possible scenarios in addition to the probability of their occurrence and the results. The five types of hazards to be aware of are safety hazards (those that can cause injury), chemical, biological, physical, and ergonomic hazards (those that can cause musculoskeletal disorders). To appropriately assess hazards, two parts must occur. Firstly, there must be an "exposure assessment" which measures the likelihood of worker contact and the level of contact. Secondly, a "risk characterization" must be made which measures the probability and severity of the possible health risks. Human settlements The importance of risk assessments to manage the consequences of climate change and variability is recalled in the global frameworks for disaster risk reduction, adopted by the member countries of the United Nations at the end of the World Conferences held in Kobe (2005) and Sendai (2015). The Sendai Framework for Disaster Risk Reduction brings attention to the local scale and encourages a holistic risk approach, which should consider all the hazards to which a community is exposed, the integration of technical-scientific knowledge with local knowledge, and the inclusion of the concept of risk in local plans to achieve a significant disaster reduction by 2030. Taking these principles into daily practice poses a challenge for many countries. The Sendai framework monitoring system highlights how little is known about the progress made from 2015 to 2019 in local disaster risk reduction. Sub-Saharan Africa As of 2019, in the South of the Sahara, risk assessment is not yet an institutionalized practice. The exposure of human settlements to multiple hazards (hydrological and agricultural drought, pluvial, fluvial and coastal floods) is frequent and requires risk assessments on a regional, municipal, and sometimes individual human settlement scale. The multidisciplinary approach and the integration of local and technical-scientific knowledge are necessary from the first steps of the assessment. Local knowledge remains indispensable for understanding the hazards that threaten individual communities and the critical thresholds at which they turn into disasters, for the validation of hydraulic models, and in the decision-making process on risk reduction.
On the other hand, local knowledge alone is not enough to understand the impacts of future changes and climatic variability and to know the areas exposed to infrequent hazards. The availability of new technologies and open access information (high resolution satellite images, daily rainfall data) allows assessments today with an accuracy that was unimaginable only 10 years ago. Images taken by unmanned vehicles make it possible to produce very high resolution digital elevation models and to accurately identify receptors. Based on this information, the hydraulic models allow the identification of flood areas with precision even at the scale of small settlements. Information on losses and damages and on cereal crops at the individual settlement scale makes it possible to determine the level of multi-hazard risk on a regional scale. Multi-temporal high-resolution satellite images make it possible to assess hydrological drought and the dynamics of human settlements in the flood zone. Risk assessment is more than an aid to informed decision making about risk reduction or acceptance. It integrates early warning systems by highlighting the hot spots where disaster prevention and preparedness are most urgent. When risk assessment considers the dynamics of exposure over time, it helps to identify risk reduction policies that are more appropriate to the local context. Despite this potential, risk assessment is not yet integrated into local planning in the South of the Sahara, which, in the best of cases, uses only analyses of vulnerability to climate change and variability. Auditing For audits performed by an outside audit firm, risk assessment is a crucial stage before accepting an audit engagement. According to ISA315 Understanding the Entity and its Environment and Assessing the Risks of Material Misstatement, "the auditor should perform risk assessment procedures to obtain an understanding of the entity and its environment, including its internal control". The auditor gathers evidence relating to the risk assessment of material misstatement in the client's financial statements, and then obtains initial evidence regarding the classes of transactions at the client and the operating effectiveness of the client's internal controls. Audit risk is defined as the risk that the auditor will issue a clean unmodified opinion regarding the financial statements, when in fact the financial statements are materially misstated, and therefore do not qualify for a clean unmodified opinion. As a formula, audit risk is the product of two other risks: the risk of material misstatement and detection risk. The risk of material misstatement can itself be broken down, giving audit risk = inherent risk × control risk × detection risk. Project management In project management, risk assessment is an integral part of the risk management plan, studying the probability, the impact, and the effect of every known risk on the project, as well as the corrective action to take should an incident implied by a risk occur. Of special consideration in this area are the relevant codes of practice that are enforced in the specific jurisdiction. Understanding the regime of regulations that risk management must abide by is integral to formulating safe and compliant risk assessment practices.
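The audit risk model described under Auditing above can be illustrated with a small sketch; the target audit risk and the component risk levels are invented, and solving for detection risk is shown only as one common way the model is used.

```python
# Sketch of the audit risk model AR = IR x CR x DR; all percentages are invented.
def audit_risk(inherent_risk, control_risk, detection_risk):
    risk_of_material_misstatement = inherent_risk * control_risk  # RMM = IR x CR
    return risk_of_material_misstatement * detection_risk

# An auditor typically fixes a target audit risk and solves for the detection
# risk that can be tolerated, which drives the extent of substantive testing.
target_audit_risk = 0.05
inherent_risk, control_risk = 0.8, 0.5
allowable_detection_risk = target_audit_risk / (inherent_risk * control_risk)

print(f"Allowable detection risk = {allowable_detection_risk:.3f}")
print(f"Resulting audit risk = "
      f"{audit_risk(inherent_risk, control_risk, allowable_detection_risk):.3f}")
```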
Information security Information technology risk assessment can be performed by a qualitative or quantitative approach, following different methodologies. One important difference in risk assessments in information security is modifying the threat model to account for the fact that any adversarial system connected to the Internet has access to threaten any other connected system. Risk assessments may therefore need to be modified to account for the threats from all adversaries, instead of just those with reasonable access as is done in other fields. NIST Definition: The process of identifying risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of an information system. Part of risk management incorporates threat and vulnerability analyses and considers mitigations provided by security controls planned or in place. There are various risk assessment methodologies and frameworks available which include NIST Risk Management Framework (RMF), Control Objectives for Information and Related Technologies (COBIT), Factor Analysis of Information Risk (FAIR), Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), The Center for Internet Security Risk Assessment Method (CIS RAM), and The Duty of Care Risk Analysis (DoCRA) Standard, which helps define 'reasonable' security. Cybersecurity The Threat and Risk Assessment (TRA) process is part of risk management referring to risks related to cyber threats. The TRA process will identify cyber risks, assess their severities, and may recommend activities to reduce risks to an acceptable level. There are different methodologies for performing TRA (e.g., Harmonized TRA Methodology), all of which utilize the following elements: identifying assets (what should be protected), identifying and assessing the threats and vulnerabilities for the identified assets, determining the exploitability of the vulnerabilities, determining the levels of risk associated with the vulnerabilities (what are the implications if the assets were damaged or lost), and recommending a risk mitigation program. Megainvestment projects Megaprojects (sometimes also called "major programs") are extremely large-scale investment projects, typically costing more than US$1 billion per project. They include bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defence systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Software evolution Studies have shown that early parts of the system development cycle such as requirements and design specifications are especially prone to error. This effect is particularly notorious in projects involving multiple stakeholders with different points of view. Evolutionary software processes offer an iterative approach to requirements engineering to alleviate the problems of uncertainty, ambiguity, and inconsistency inherent in software development. Shipping industry In July 2010, shipping companies agreed to use standardized procedures in order to assess risk in key shipboard operations. These procedures were implemented as part of the amended ISM Code. Underwater diving Formal risk assessment is a required component of most professional dive planning, but the format and methodology may vary.
Consequences of an incident due to an identified hazard are generally chosen from a small number of standardised categories, and probability is estimated based on statistical data on the rare occasions when it is available, and on a best guess estimate based on personal experience and company policy in most cases. A simple risk matrix is often used to transform these inputs into a level of risk, generally expressed as unacceptable, marginal or acceptable. If unacceptable, measures must be taken to reduce the risk to an acceptable level, and the outcome of the risk assessment must be accepted by the affected parties before a dive commences. Higher levels of risk may be acceptable in special circumstances, such as military or search and rescue operations when there is a chance of recovering a survivor. Diving supervisors are trained in the procedures of hazard identification and risk assessment, and it is part of their planning and operational responsibility. Both health and safety hazards must be considered. Several stages may be identified. There is risk assessment done as part of the diving project planning, on site risk assessment which takes into account the specific conditions of the day, and dynamic risk assessment which is ongoing during the operation by the members of the dive team, particularly the supervisor and the working diver. In recreational scuba diving, the extent of risk assessment expected of the diver is relatively basic and is included in the pre-dive checks. Several mnemonics have been developed by diver certification agencies to remind the diver to pay some attention to risk, but the training is rudimentary. Diving service providers are expected to provide a higher level of care for their customers, and diving instructors and divemasters are expected to assess risk on behalf of their customers and warn them of site-specific hazards and the competence considered appropriate for the planned dive. Technical divers are expected to make a more thorough assessment of risk, but as they will be making an informed choice for a recreational activity, the level of acceptable risk may be considerably higher than that permitted for occupational divers under the direction of an employer. Outdoor and wilderness adventure In outdoor activities including commercial outdoor education, wilderness expeditions, and outdoor recreation, risk assessment refers to the analysis of the probability and magnitude of unfavorable outcomes such as injury, illness, or property damage due to environmental and related causes, compared to the human development or other benefits of outdoor activity. This is of particular importance as school programs and others weigh the benefits of youth and adult participation in various outdoor learning activities against the inherent and other hazards present in those activities. Schools, corporate entities seeking team-building experiences, parents/guardians, and others considering outdoor experiences expect or require organizations to assess the hazards and risks of different outdoor activities—such as sailing, target shooting, hunting, mountaineering, or camping—and select activities with acceptable risk profiles. Outdoor education, wilderness adventure, and other outdoor-related organizations should, and are in some jurisdictions required, to conduct risk assessments prior to offering programs for commercial purposes. Such organizations are given guidance on how to provide their risk assessments. 
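A simple consequence-times-probability risk matrix of the kind described for diving above can be sketched as follows; the category labels, scores, and cut-offs are invented for illustration and are not taken from any particular agency or standard.

```python
# Sketch of a qualitative risk matrix mapping consequence and probability
# categories to an outcome of unacceptable, marginal, or acceptable.
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "fatal": 4}
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4}

def assess(consequence, probability):
    score = CONSEQUENCE[consequence] * PROBABILITY[probability]
    if score >= 9:
        return "unacceptable"   # mitigate before the activity proceeds
    if score >= 4:
        return "marginal"       # requires sign-off or additional controls
    return "acceptable"

print(assess("major", "possible"))     # unacceptable
print(assess("moderate", "unlikely"))  # marginal
print(assess("minor", "rare"))         # acceptable
```

The value of such a matrix is less the arithmetic than the shared vocabulary: it forces the team to state a consequence category and a probability category explicitly before accepting the residual risk.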
Risk assessments for led outdoor activities form only one component of a comprehensive risk management plan, as many risk assessments use basic linear-style thinking that does not draw on more modern risk management practice based on complex socio-technical systems theory. Environment Environmental Risk Assessment (ERA) aims to assess the effects of stressors, usually chemicals, on the local environment. A risk is an integrated assessment of the likelihood and severity of an undesired event. In ERA, the undesired event often depends on the chemical of interest and on the risk assessment scenario. This undesired event is usually a detrimental effect on organisms, populations or ecosystems. Current ERAs usually compare an exposure to a no-effect level, such as the Predicted Environmental Concentration/Predicted No-Effect Concentration (PEC/PNEC) ratio in Europe. Although this type of ratio is useful and often used for regulatory purposes, it is only an indication of an exceeded apparent threshold. New approaches are starting to be developed in ERA in order to quantify this risk and to communicate it effectively to both managers and the general public. Ecological risk assessment is complicated by the fact that there are many nonchemical stressors that substantially influence ecosystems, communities, and individual plants and animals, as well as landscapes and regions. Defining the undesired (adverse) event is a political or policy judgment, further complicating the application of traditional risk analysis tools to ecological systems. Much of the policy debate surrounding ecological risk assessment is over defining precisely what is an adverse event. Biodiversity Biodiversity risk assessments evaluate risks to biological diversity, especially the risk of species extinction or the risk of ecosystem collapse. The units of assessment are biological entities (species, subspecies or populations) or ecological entities (habitats, ecosystems, etc.), and the risks are often related to human actions and interventions (threats and pressures). Regional and national protocols have been proposed by multiple academic or governmental institutions and working groups, but global standards such as the Red List of Threatened Species and the IUCN Red List of Ecosystems have been widely adopted, and are recognized or proposed as official indicators of progress toward international policy targets and goals, such as the Aichi targets and the Sustainable Development Goals. Law Risk assessments are used in numerous stages during the legal process and are developed to measure a wide variety of items, such as recidivism rates, potential pretrial issues, and probation/parole, and to identify potential interventions for defendants. Clinical psychologists, forensic psychologists, and other practitioners are responsible for conducting risk assessments. Depending on the risk assessment tool, practitioners are required to gather a variety of background information on the defendant or individual being assessed. This information includes their previous criminal history (if applicable) and other records (i.e. demographics, education, job status, medical history), which can be accessed through direct interview with the defendant or on-file records. In the pre-trial stage, a widely used risk assessment tool is the Public Safety Assessment, which predicts failure to appear in court, likelihood of a new criminal arrest while on pretrial release, and likelihood of a new violent criminal arrest while on pretrial release.
Multiple items are observed and taken into account based on which aspect of the PSA is the focus, and, like all other actuarial risk assessments, each item is assigned a weighted amount to produce a final score. Detailed information, such as the items the PSA factors in and how its scores are distributed, is accessible online. For defendants who have been incarcerated, risk assessments are used to determine their likelihood of recidivism and inform sentence length decisions. Risk assessments also aid parole/probation officers in determining the level of supervision a probationer should be subjected to and what interventions could be implemented to improve offender risk status. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a risk assessment tool designed to measure pretrial release risk, general recidivism risk, and violent recidivism risk. Detailed information on scoring and algorithms for COMPAS is not accessible to the general public.
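The weighted, actuarial scoring described above can be sketched in a few lines; the item names, weights, and score bands are invented and do not reproduce the actual PSA or COMPAS instruments.

```python
# Sketch of an actuarial, weighted-item risk score; all items and weights
# are hypothetical and purely illustrative.
def weighted_risk_score(answers, weights):
    return sum(weights[item] * value for item, value in answers.items())

weights = {
    "prior_failure_to_appear": 2,
    "pending_charge": 1,
    "prior_violent_conviction": 3,
}
answers = {
    "prior_failure_to_appear": 1,   # 1 = yes, 0 = no
    "pending_charge": 1,
    "prior_violent_conviction": 0,
}

score = weighted_risk_score(answers, weights)
risk_band = "high" if score >= 5 else "moderate" if score >= 3 else "low"
print(score, risk_band)   # 3 moderate
```

The transparency issue raised in the text is visible even in this toy version: whether the weights and cut-offs are published determines whether an assessed individual can see how the final band was produced.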
0.765107
0.994603
0.760978
Plant hormone
Plant hormones (or phytohormones) are signal molecules, produced within plants, that occur in extremely low concentrations. Plant hormones control all aspects of plant growth and development, including embryogenesis, the regulation of organ size, pathogen defense, stress tolerance and reproductive development. Unlike in animals (in which hormone production is restricted to specialized glands) each plant cell is capable of producing hormones. Went and Thimann coined the term "phytohormone" and used it in the title of their 1937 book. Phytohormones occur across the plant kingdom, and even in algae, where they have similar functions to those seen in vascular plants ("higher plants"). Some phytohormones also occur in microorganisms, such as unicellular fungi and bacteria, however in these cases they do not play a hormonal role and can better be regarded as secondary metabolites. Characteristics The word hormone is derived from Greek, meaning set in motion. Plant hormones affect gene expression and transcription levels, cellular division, and growth. They are naturally produced within plants, though very similar chemicals are produced by fungi and bacteria that can also affect plant growth. A large number of related chemical compounds are synthesized by humans. They are used to regulate the growth of cultivated plants, weeds, and in vitro-grown plants and plant cells; these manmade compounds are called plant growth regulators (PGRs). Early in the study of plant hormones, "phytohormone" was the commonly used term, but its use is less widely applied now. Plant hormones are not nutrients, but chemicals that in small amounts promote and influence the growth, development, and differentiation of cells and tissues. The biosynthesis of plant hormones within plant tissues is often diffuse and not always localized. Plants lack glands to produce and store hormones, because, unlike animals—which have two circulatory systems (lymphatic and cardiovascular) powered by a heart that moves fluids around the body—plants use more passive means to move chemicals around their bodies. Plants utilize simple chemicals as hormones, which move more easily through their tissues. They are often produced and used on a local basis within the plant body. Plant cells produce hormones that affect even different regions of the cell producing the hormone. Hormones are transported within the plant by utilizing four types of movements. For localized movement, cytoplasmic streaming within cells and slow diffusion of ions and molecules between cells are utilized. Vascular tissues are used to move hormones from one part of the plant to another; these include sieve tubes or phloem that move sugars from the leaves to the roots and flowers, and xylem that moves water and mineral solutes from the roots to the foliage. Not all plant cells respond to hormones, but those cells that do are programmed to respond at specific points in their growth cycle. The greatest effects occur at specific stages during the cell's life, with diminished effects occurring before or after this period. Plants need hormones at very specific times during plant growth and at specific locations. They also need to disengage the effects that hormones have when they are no longer needed. The production of hormones occurs very often at sites of active growth within the meristems, before cells have fully differentiated. 
After production, they are sometimes moved to other parts of the plant, where they cause an immediate effect; or they can be stored in cells to be released later. Plants use different pathways to regulate internal hormone quantities and moderate their effects; they can regulate the amount of chemicals used to biosynthesize hormones. They can store them in cells, inactivate them, or cannibalise already-formed hormones by conjugating them with carbohydrates, amino acids, or peptides. Plants can also break down hormones chemically, effectively destroying them. Plant hormones frequently regulate the concentrations of other plant hormones. Plants also move hormones around the plant diluting their concentrations. The concentrations of hormones required for plant responses are very low (10−6 to 10−5 mol/L). Because of these low concentrations, it has been very difficult to study plant hormones, and only since the late 1970s have scientists been able to start piecing together their effects and relationships to plant physiology. Much of the early work on plant hormones involved studying plants that were genetically deficient in one hormone, or involved the use of tissue-cultured plants grown in vitro that were subjected to differing ratios of hormones, and the resultant growth compared. The earliest scientific observation and study dates to the 1880s; the determination and observation of plant hormones and their identification was spread out over the next 70 years. Synergism in plant hormones refers to how two or more hormones acting together produce an effect greater than the sum of their individual effects. For example, auxins and cytokinins often act in cooperation during cellular division and differentiation. Both hormones are key to cell cycle regulation, but when they come together, their synergistic interactions can enhance cell proliferation and organogenesis more effectively than either could in isolation. Classes Different hormones can be sorted into different classes, depending on their chemical structures. Within each class of hormone, chemical structures can vary, but all members of the same class have similar physiological effects. Initial research into plant hormones identified five major classes: abscisic acid, auxins, cytokinins, ethylene and gibberellins. This list was later expanded, and brassinosteroids, jasmonates, salicylic acid, and strigolactones are now also considered major plant hormones. Additionally, there are several other compounds that serve functions similar to the major hormones, but their status as bona fide hormones is still debated. Abscisic acid Abscisic acid (also called ABA) is one of the most important plant growth inhibitors. It was discovered and researched under two different names, dormin and abscisin II, before its chemical properties were fully known. Once it was determined that the two compounds are the same, it was named abscisic acid. The name refers to the fact that it is found in high concentrations in newly abscissed or freshly fallen leaves. This class of PGR is composed of one chemical compound normally produced in the leaves of plants, originating from chloroplasts, especially when plants are under stress. In general, it acts as an inhibitory chemical compound that affects bud growth, and seed and bud dormancy. It mediates changes within the apical meristem, causing bud dormancy and the alteration of the last set of leaves into protective bud covers.
Since it was found in freshly abscissed leaves, it was initially thought to play a role in the processes of natural leaf drop, but further research has disproven this. In plant species from temperate parts of the world, abscisic acid plays a role in leaf and seed dormancy by inhibiting growth, but, as it is dissipated from seeds or buds, growth begins. In other plants, as ABA levels decrease, growth then commences as gibberellin levels increase. Without ABA, buds and seeds would start to grow during warm periods in winter and would be killed when it froze again. Since ABA dissipates slowly from the tissues and its effects take time to be offset by other plant hormones, there is a delay in physiological pathways that provides some protection from premature growth. Abscisic acid accumulates within seeds during fruit maturation, preventing seed germination within the fruit or before winter. Abscisic acid's effects are degraded within plant tissues during cold temperatures or by its removal by water washing in and out of the tissues, releasing the seeds and buds from dormancy. ABA exists in all parts of the plant, and its concentration within any tissue seems to mediate its effects and function as a hormone; its degradation, or more properly catabolism, within the plant affects metabolic reactions and cellular growth and production of other hormones. Plants start life as a seed with high ABA levels. Just before the seed germinates, ABA levels decrease; during germination and early growth of the seedling, ABA levels decrease even more. As plants begin to produce shoots with fully functional leaves, ABA levels begin to increase again, slowing down cellular growth in more "mature" areas of the plant. Stress from water or predation affects ABA production and catabolism rates, mediating another cascade of effects that trigger specific responses from targeted cells. Scientists are still piecing together the complex interactions and effects of this and other phytohormones. In plants under water stress, ABA plays a role in closing the stomata. Soon after plants are water-stressed and the roots are deficient in water, a signal moves up to the leaves, causing the formation of ABA precursors there, which then move to the roots. The roots then release ABA, which is translocated to the foliage through the vascular system and modulates potassium and sodium uptake within the guard cells, which then lose turgidity, closing the stomata. Auxins Auxins are compounds that positively influence cell enlargement, bud formation, and root initiation. They also promote the production of other hormones and, in conjunction with cytokinins, control the growth of stems, roots, and fruits, and convert stems into flowers. Auxins were the first class of growth regulators discovered. The Dutch biologist Frits Warmolt Went first described auxins. They affect cell elongation by altering cell wall plasticity. They stimulate cambium, a subtype of meristem cells, to divide, and in stems cause secondary xylem to differentiate. Auxins act to inhibit the growth of buds lower down the stems in a phenomenon known as apical dominance, and also to promote lateral and adventitious root development and growth. Leaf abscission is initiated by the growing point of a plant ceasing to produce auxins. Auxins in seeds regulate specific protein synthesis, as they develop within the flower after pollination, causing the flower to develop a fruit to contain the developing seeds.
In large concentrations, auxins are often toxic to plants; they are most toxic to dicots and less so to monocots. Because of this property, synthetic auxin herbicides including 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) have been developed and used for weed control by defoliation. Auxins, especially 1-naphthaleneacetic acid (NAA) and indole-3-butyric acid (IBA), are also commonly applied to stimulate root growth when taking cuttings of plants. The most common auxin found in plants is indole-3-acetic acid (IAA). Brassinosteroids Brassinosteroids (BRs) are a class of polyhydroxysteroids, the only example of steroid-based hormones in plants. Brassinosteroids control cell elongation and division, gravitropism, resistance to stress, and xylem differentiation. They inhibit root growth and leaf abscission. Brassinolide was the first brassinosteroid to be identified and was isolated from extracts of rapeseed (Brassica napus) pollen in 1979. Brassinosteroids are a class of steroidal phytohormones in plants that regulate numerous physiological processes. This plant hormone was identified by Mitchell et al., who extracted ingredients from Brassica pollen only to find that the extracted ingredients' main active component was brassinolide. This finding marked the discovery of a new class of plant hormones called brassinosteroids. These hormones act very similarly to animal steroidal hormones by promoting growth and development. In plants these steroidal hormones play an important role in cell elongation via BR signaling. The brassinosteroid receptor brassinosteroid insensitive 1 (BRI1) is the main receptor for this signaling pathway. This BRI1 receptor was found by Clouse et al., who made the discovery by inhibiting BR and comparing it to the wildtype in Arabidopsis. The BRI1 mutant displayed several problems associated with growth and development such as dwarfism, reduced cell elongation and other physical alterations. These findings mean that plants properly expressing brassinosteroids grow more than their mutant counterparts. Brassinosteroids bind to BRI1 localized at the plasma membrane, which leads to a signal cascade that further regulates cell elongation. This signal cascade, however, is not entirely understood at this time. What is believed to be happening is that BR binds to the BAK1 complex, which leads to a phosphorylation cascade. This phosphorylation cascade then causes BIN2 to be deactivated, which causes the release of transcription factors. These released transcription factors then bind to DNA, leading to growth and developmental processes and allowing plants to respond to abiotic stressors. Cytokinins Cytokinins (CKs) are a group of chemicals that influence cell division and shoot formation. They also help delay senescence of tissues, are responsible for mediating auxin transport throughout the plant, and affect internodal length and leaf growth. They were called kinins in the past when they were first isolated from yeast cells. Cytokinins and auxins often work together, and the ratios of these two groups of plant hormones affect most major growth periods during a plant's lifetime. Cytokinins counter the apical dominance induced by auxins; in conjunction with ethylene, they promote abscission of leaves, flower parts, and fruits.
Among the plant hormones, the three that are known to help with immunological interactions are ethylene (ET), salicylates (SA), and jasmonates (JA); however, more research has gone into identifying the role that cytokinins play in this. Evidence suggests that cytokinins delay the interactions with pathogens, showing signs that they could induce resistance toward these pathogenic bacteria. Accordingly, there are higher CK levels in plants that have increased resistance to pathogens compared to those which are more susceptible. For example, pathogen resistance involving cytokinins was tested using the Arabidopsis species by treating them with naturally occurring CK (trans-zeatin) to see their response to the bacterium Pseudomonas syringae. Tobacco studies reveal that overexpression of CK-inducing IPT genes yields increased resistance, whereas overexpression of CK oxidase yields increased susceptibility to the pathogen, namely P. syringae. While cytokinins produce few outwardly visible changes in plant behavior, they trigger internal responses: cytokinin defense effects can include effects on the establishment and growth of microbes (delayed leaf senescence), reconfiguration of secondary metabolism, or even the induction of new organs such as galls or nodules. These organs and their corresponding processes are all used to protect the plants against biotic/abiotic factors. Ethylene Unlike the other major plant hormones, ethylene is a gas and a very simple organic compound, consisting of just six atoms. It forms through the breakdown of methionine, an amino acid which is in all cells. Ethylene has very limited solubility in water and therefore does not accumulate within the cell, typically diffusing out of the cell and escaping the plant. Its effectiveness as a plant hormone is dependent on its rate of production versus its rate of escaping into the atmosphere. Ethylene is produced at a faster rate in rapidly growing and dividing cells, especially in darkness. New growth and newly germinated seedlings produce more ethylene than can escape the plant, which leads to elevated amounts of ethylene, inhibiting leaf expansion (see hyponastic response). As the new shoot is exposed to light, reactions mediated by phytochrome in the plant's cells produce a signal for ethylene production to decrease, allowing leaf expansion. Ethylene affects cell growth and cell shape; when a growing shoot or root hits an obstacle while underground, ethylene production greatly increases, preventing cell elongation and causing the stem to swell. The resulting thicker stem is stronger and less likely to buckle under pressure as it presses against the object impeding its path to the surface. If the shoot does not reach the surface and the ethylene stimulus becomes prolonged, it affects the stem's natural geotropic response, which is to grow upright, allowing it to grow around an object. Studies seem to indicate that ethylene affects stem diameter and height: when stems of trees are subjected to wind, causing lateral stress, greater ethylene production occurs, resulting in thicker, sturdier tree trunks and branches. Ethylene also affects fruit ripening. Normally, when the seeds are mature, ethylene production increases and builds up within the fruit, resulting in a climacteric event just before seed dispersal. The nuclear protein Ethylene Insensitive2 (EIN2) is regulated by ethylene production, and, in turn, regulates other hormones including ABA and stress hormones.
Ethylene diffusion out of plants is strongly inhibited underwater. This increases internal concentrations of the gas. In numerous aquatic and semi-aquatic species (e.g. Callitriche platycarpus, rice, and Rumex palustris), the accumulated ethylene strongly stimulates upward elongation. This response is an important mechanism for the adaptive escape from submergence that avoids asphyxiation by returning the shoot and leaves to contact with the air whilst allowing the release of entrapped ethylene. At least one species (Potamogeton pectinatus) has been found to be incapable of making ethylene while retaining a conventional morphology. This suggests ethylene is a true regulator rather than being a requirement for building a plant's basic body plan. Gibberellins Gibberellins (GAs) include a large range of chemicals that are produced naturally within plants and by fungi. They were first discovered when Japanese researchers, including Eiichi Kurosawa, noticed a chemical produced by a fungus called Gibberella fujikuroi that produced abnormal growth in rice plants. It was later discovered that GAs are also produced by the plants themselves and control multiple aspects of development across the life cycle. The synthesis of GA is strongly upregulated in seeds at germination and its presence is required for germination to occur. In seedlings and adults, GAs strongly promote cell elongation. GAs also promote the transition between vegetative and reproductive growth and are also required for pollen function during fertilization. Gibberellins break dormancy in seeds and buds, returning them to an active state, and help increase plant height and stem growth. Jasmonates Jasmonates (JAs) are lipid-based hormones that were originally isolated from jasmine oil. JAs are especially important in the plant response to attack from herbivores and necrotrophic pathogens. The most active JA in plants is jasmonic acid. Jasmonic acid can be further metabolized into methyl jasmonate (MeJA), which is a volatile organic compound. This unusual property means that MeJA can act as an airborne signal to communicate herbivore attack to other distant leaves within one plant and even as a signal to neighboring plants. In addition to their role in defense, JAs are also believed to play roles in seed germination, the storage of protein in seeds, and root growth. JAs have been shown to interact with the signalling pathways of other hormones in a mechanism described as "crosstalk." The hormone classes can have both negative and positive effects on each other's signal processes. Jasmonic acid methyl ester (JAME) has been shown to regulate genetic expression in plants. They act in signalling pathways in response to herbivory, and upregulate expression of defense genes. Jasmonyl-isoleucine (JA-Ile) accumulates in response to herbivory, which causes an upregulation in defense gene expression by freeing up transcription factors. Jasmonate mutants are more readily consumed by herbivores than wild type plants, indicating that JAs play an important role in the execution of plant defense. When herbivores are moved around leaves of wild type plants, they reach similar masses to herbivores that consume only mutant plants, implying the effects of JAs are localized to sites of herbivory. Studies have shown that there is significant crosstalk between defense pathways. Salicylic acid Salicylic acid (SA) is a hormone with a structure related to benzoic acid and phenol.
It was originally isolated from an extract of white willow bark (Salix alba) and is of great interest to human medicine, as it is the precursor of the painkiller aspirin. In plants, SA plays a critical role in the defense against biotrophic pathogens. In a similar manner to JA, SA can also become methylated. Like MeJA, methyl salicylate is volatile and can act as a long-distance signal to neighboring plants to warn of pathogen attack. In addition to its role in defense, SA is also involved in the response of plants to abiotic stress, particularly from drought, extreme temperatures, heavy metals, and osmotic stress. Salicylic acid (SA) serves as a key hormone in plant innate immunity, including resistance in both local and systemic tissue upon biotic attacks, hypersensitive responses, and cell death. Some of the SA influences on plants include seed germination, cell growth, respiration, stomatal closure, senescence-associated gene expression, responses to abiotic and biotic stresses, basal thermotolerance and fruit yield. A possible role of salicylic acid in signaling disease resistance was first demonstrated by injecting leaves of resistant tobacco with SA. The result was that injecting SA stimulated pathogenesis related (PR) protein accumulation and enhanced resistance to tobacco mosaic virus (TMV) infection. Exposure to pathogens causes a cascade of reactions in the plant cells. SA biosynthesis is increased via the isochorismate synthase (ICS) and phenylalanine ammonia-lyase (PAL) pathways in plastids. It was observed that during plant-microbe interactions, as part of the defense mechanisms, SA initially accumulates at the locally infected tissue and then spreads throughout the plant to induce systemic acquired resistance at non-infected distal parts of the plant. Therefore, with increased internal concentrations of SA, plants are able to build resistance barriers against pathogens and other adverse environmental conditions. Strigolactones Strigolactones (SLs) were originally discovered through studies of the germination of the parasitic weed Striga lutea. It was found that the germination of Striga species was stimulated by the presence of a compound exuded by the roots of its host plant. It was later shown that SLs that are exuded into the soil also promote the growth of symbiotic arbuscular mycorrhizal (AM) fungi. More recently, another role of SLs was identified in the inhibition of shoot branching. This discovery of the role of SLs in shoot branching led to a dramatic increase in the interest in these hormones, and it has since been shown that SLs play important roles in leaf senescence, phosphate starvation response, salt tolerance, and light signalling. Other known hormones Other identified plant growth regulators include: Plant peptide hormones – encompass all small secreted peptides that are involved in cell-to-cell signaling. These small peptide hormones play crucial roles in plant growth and development, including defense mechanisms, the control of cell division and expansion, and pollen self-incompatibility. The small peptide CLE25 is known to act as a long-distance signal to communicate water stress sensed in the roots to the stomata in the leaves. Polyamines – are strongly basic molecules with low molecular weight that have been found in all organisms studied thus far. They are essential for plant growth and development and affect the process of mitosis and meiosis. In plants, polyamines have been linked to the control of senescence and programmed cell death.
Nitric oxide (NO) – serves as a signal in hormonal and defense responses (e.g. stomatal closure, root development, germination, nitrogen fixation, cell death, stress response). NO can be produced by a yet undefined NO synthase, a special type of nitrite reductase, nitrate reductase, mitochondrial cytochrome c oxidase or non-enzymatic processes, and regulates plant cell organelle functions (e.g. ATP synthesis in chloroplasts and mitochondria). Karrikins – are not plant hormones as they are not produced by plants themselves but are rather found in the smoke of burning plant material. Karrikins can promote seed germination in many species. The finding that plants which lack the karrikin receptor show several developmental phenotypes (enhanced biomass accumulation and increased sensitivity to drought) has led some to speculate on the existence of an as yet unidentified karrikin-like endogenous hormone in plants. The cellular karrikin signalling pathway shares many components with the strigolactone signalling pathway. Triacontanol – a fatty alcohol that acts as a growth stimulant, especially initiating new basal breaks in the rose family. It is found in alfalfa (lucerne), beeswax, and some waxy leaf cuticles. Use in horticulture Synthetic plant hormones or PGRs are used in a number of different techniques involving plant propagation from cuttings, grafting, micropropagation and tissue culture. Most commonly they are commercially available as "rooting hormone powder". The propagation of plants by cuttings of fully developed leaves, stems, or roots is performed by gardeners utilizing auxin as a rooting compound applied to the cut surface; the auxins are taken into the plant and promote root initiation. In grafting, auxin promotes callus tissue formation, which joins the surfaces of the graft together. In micropropagation, different PGRs are used to promote multiplication and then rooting of new plantlets. In the tissue-culturing of plant cells, PGRs are used to produce callus growth, multiplication, and rooting. When used in field conditions, plant hormones or mixtures that include them can be applied as biostimulants. Seed dormancy Plant hormones affect seed germination and dormancy by acting on different parts of the seed. Embryo dormancy is characterized by a high ABA:GA ratio, in which the seed has high abscisic acid sensitivity and low GA sensitivity. In order to release the seed from this type of dormancy and initiate seed germination, an alteration in hormone biosynthesis and degradation toward a low ABA/GA ratio, along with a decrease in ABA sensitivity and an increase in GA sensitivity, must occur. ABA controls embryo dormancy, and GA embryo germination. Seed coat dormancy involves the mechanical restriction of the seed coat. This, along with a low embryo growth potential, effectively produces seed dormancy. GA releases this dormancy by increasing the embryo growth potential, and/or weakening the seed coat so the radicle of the seedling can break through the seed coat. Different types of seed coats can be made up of living or dead cells, and both types can be influenced by hormones; those composed of living cells are acted upon after seed formation, whereas the seed coats composed of dead cells can be influenced by hormones during the formation of the seed coat. ABA affects testa or seed coat growth characteristics, including thickness, and affects the GA-mediated embryo growth potential.
These conditions and effects occur during the formation of the seed, often in response to environmental conditions. Hormones also mediate endosperm dormancy: Endosperm in most seeds is composed of living tissue that can actively respond to hormones generated by the embryo. The endosperm often acts as a barrier to seed germination, playing a part in seed coat dormancy or in the germination process. Living cells respond to and also affect the ABA:GA ratio, and mediate cellular sensitivity; GA thus increases the embryo growth potential and can promote endosperm weakening. GA also affects both ABA-independent and ABA-inhibiting processes within the endosperm. Human use Salicylic acid Willow bark has been used for centuries as a painkiller. The active ingredient in willow bark that provides these effects is the hormone salicylic acid (SA). In 1899, the pharmaceutical company Bayer began marketing a derivative of SA as the drug aspirin. In addition to its use as a painkiller, SA is also used in topical treatments of several skin conditions, including acne, warts and psoriasis. Another derivative of SA, sodium salicylate, has been found to suppress proliferation of lymphoblastic leukemia, prostate, breast, and melanoma human cancer cells. Jasmonic acid Jasmonic acid (JA) can induce death in lymphoblastic leukemia cells. Methyl jasmonate (a derivative of JA, also found in plants) has been shown to inhibit proliferation in a number of cancer cell lines, although there is still debate over its use as an anti-cancer drug, due to its potential negative effects on healthy cells. See also Forchlorfenuron Phytoestrogen Phytoandrogen Chlormequat References External links Simple plant hormone table with location of synthesis and effects of application — this is the format used in the description templates at the bottom of Wikipedia articles about plant hormones. Hormonal Regulation of Gene Expression and Development — Detailed introduction to plant hormones, including genetic information. Biologically based therapies
Molecular modelling
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach). Molecular mechanics Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics. This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high-level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. 
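As a minimal illustration of these ideas (a hedged sketch, not any particular package's implementation), the following Python snippet treats a single bond as a harmonic spring, computes the force as the negative gradient of that potential, and propagates the motion with a velocity Verlet integrator; all parameter values are invented for the example and are in reduced units.

# Toy molecular dynamics of one harmonic bond (illustrative sketch, reduced units).
k, r0, m, dt = 500.0, 1.0, 1.0, 0.005    # force constant, equilibrium length, mass, time step (assumed values)

def potential(r):
    return 0.5 * k * (r - r0) ** 2        # bond-stretch term of a simple force field

def force(r):
    return -k * (r - r0)                  # force is the negative gradient of the potential

r, v = 1.2, 0.0                           # start from a stretched bond at rest
for _ in range(1000):                     # molecular dynamics: propagate Newton's second law in time
    a = force(r) / m
    r = r + v * dt + 0.5 * a * dt ** 2    # velocity Verlet: position update
    v = v + 0.5 * (a + force(r) / m) * dt # velocity Verlet: velocity update using the new force

print(f"bond length after 1000 steps: {r:.3f} (equilibrium is {r0})")

An energy minimization, by contrast, would simply follow the potential downhill, for example by repeatedly applying r = r + step * force(r) until the force vanishes at a local minimum, rather than integrating the motion in time.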
The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects. Variables Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations. Coordinate representations Most force fields are distance-dependent, making Cartesian coordinates the most convenient expression for them. Yet the comparatively rigid nature of the bonds that occur between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation, and conversely a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the constraints of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion to Cartesian conversion is the Natural Extension Reference Frame (NERF) method. Applications Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of force-field molecular models are readily available in databases today. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes. See also References Further reading Bioinformatics Molecular biology Computational chemistry
Cline (biology)
In biology, a cline is a measurable gradient in a single characteristic (or biological trait) of a species across its geographical range. Clines usually have a genetic (e.g. allele frequency, blood type), or phenotypic (e.g. body size, skin pigmentation) character. They can show either smooth, continuous gradation in a character, or more abrupt changes in the trait from one geographic region to the next. A cline is a spatial gradient in a single specific trait, rather than in a collection of traits; a single population can therefore have as many clines as it has traits, at least in principle. Additionally, as Julian Huxley recognised, these multiple independent clines may not act in concordance with each other. For example, it has been observed that in Australia, birds generally become smaller the further towards the north of the country they are found. In contrast, the intensity of their plumage colouration follows a different geographical trajectory, being most vibrant where humidity is highest and becoming less vibrant further into the arid centre of the country. Because of this, Huxley described the notion of clines as an "auxiliary taxonomic principle,” meaning that clinal variation in a species is not awarded taxonomic recognition in the way subspecies or species are. The term cline was coined by Huxley in 1938 from the Greek κλίνειν klinein, meaning "to lean.” While it and the term ecotype are sometimes used interchangeably, they do in fact differ in that ecotype refers to a population which differs from other populations in a number of characters, rather than the single character that varies amongst populations in a cline. Drivers and the evolution of clines Clines are often cited to be the result of two opposing drivers: selection and gene flow (also known as migration). Selection causes adaptation to the local environment, resulting in different genotypes or phenotypes being favoured in different environments. This diversifying force is countered by gene flow, which has a homogenising effect on populations and prevents speciation through causing genetic admixture and blurring any distinct genetic boundaries. Development of clines Clines are generally thought to arise under one of two conditions: "primary differentiation" (also known as "primary contact" or "primary intergradation"), or "secondary contact" (also known as "secondary introgression", or "secondary intergradation"). Primary differentiation Clines produced through this way are generated by spatial heterogeneity in environmental conditions. The mechanism of selection acting upon organisms is therefore external. Species ranges frequently span environmental gradients (e.g. humidity, rainfall, temperature, or day length) and, according to natural selection, different environments will favour different genotypes or phenotypes. In this way, when previously genetically or phenotypically uniform populations spread into novel environments, they will evolve to be uniquely adapted to the local environment, in the process potentially creating a gradient in a genotypic or phenotypic trait. Such clines in characters can not be maintained through selection alone if much gene flow occurred between populations, as this would tend to swamp out the effects of local adaptation. However, because species usually tend to have a limited dispersal range (e.g. in an isolation by distance model), restricted gene flow can serve as a type of barrier which encourages geographic differentiation. 
However, some degree of migration is often required to maintain a cline; without it, speciation is likely to eventually occur, as local adaptation can cause reproductive isolation between populations. A classic example of the role of environmental gradients in creating clines is that of the peppered moth, Biston betularia, in the UK. During the 19th century, when the industrial sector gained traction, coal emissions blackened vegetation across northwest England and parts of northern Wales. As a result of this, lighter morphs of the moth were more visible to predators against the blackened tree trunks and were therefore more heavily predated relative to the darker morphs. Consequently, the frequency of the more cryptic melanic morph of the peppered moth increased drastically in northern England. This cline in morph colour, from a dominance of lighter morphs in the west of England (which did not suffer as heavily from pollution), to the higher frequency of melanic forms in the north, has slowly been degrading since limitations to sooty emissions were introduced in the 1960s. Secondary contact Clines generated through this mechanism have arisen through the joining of two formerly isolated populations which differentiated in allopatry, creating an intermediate zone. This secondary contact scenario may occur, for example, when climatic conditions change, allowing the ranges of populations to expand and meet. Because over time the effect of gene flow will tend to eventually swamp out any regional differences and cause one large homogeneous population, for a stable cline to be maintained when two populations join there must usually be a selective pressure maintaining a degree of differentiation between the two populations. The mechanism of selection maintaining the clines in this scenario is often intrinsic. This means that the fitness of individuals is independent of the external environment, and selection is instead dependent on the genome of the individual. Intrinsic, or endogenous, selection can give rise to clines in characters through a variety of mechanisms. One way it may act is through heterozygote disadvantage, in which intermediate genotypes have a lower relative fitness than either homozygous genotype. Because of this disadvantage, one allele will tend to become fixed in a given population, such that populations will consist largely of either AA (homozygous dominant) or aa (homozygous recessive) individuals. The cline of heterozygotes that is created when these respective populations come into contact is then shaped by the opposing forces of selection and gene flow; even if selection against heterozygotes is great, if there is some degree of gene flow between the two populations, then a steep cline may be able to be maintained. Because intrinsic selection is independent of the external environment, clines generated by selection against hybrids are not fixed to any given geographical area and can move around the geographic landscape. Such hybrid zones, where hybrids are at a disadvantage relative to their parental lines (but which are nonetheless maintained through selection being counteracted by gene flow), are known as "tension zones". Another way in which selection can generate clines is through frequency-dependent selection. Characters that could be maintained by such frequency-dependent selective pressures include warning signals (aposematism). 
For example, aposematic signals in Heliconius butterflies sometimes display steep clines between populations, which are maintained through positive frequency dependence. This is because heterozygosity, mutations and recombination can all produce patterns that deviate from those well-established signals which mark prey as being unpalatable. These individuals are then predated more heavily relative to their counterparts with "normal" markings (i.e. selected against), creating populations dominated by a particular pattern of warning signal. As with heterozygote disadvantage, when these populations join, a narrow cline of intermediate individuals could be produced, maintained by gene flow counteracting selection. Secondary contact could lead to a cline with a steep gradient if heterozygote disadvantage or frequency-dependent selection exists, as intermediates are heavily selected against. Alternatively, steep clines could exist because the populations have only recently established secondary contact, and the character in the original allopatric populations had a large degree of differentiation. As genetic admixture between the populations increases with time, however, the steepness of the cline is likely to decrease as the difference in character is eroded. However, if the character in the original allopatric populations was not very differentiated to begin with, the cline between the populations need not display a very steep gradient. Because both primary differentiation and secondary contact can therefore give rise to similar or identical clinal patterns (e.g. gently sloping clines), distinguishing which of these two processes is responsible for generating a cline is difficult and often impossible. However, in some circumstances a cline and a geographic variable (such as humidity) may be very tightly linked, with a change in one corresponding closely to a change in the other. In such cases it may be tentatively concluded that the cline is generated by primary differentiation and therefore moulded by environmental selective pressures. No selection (drift/migration balance) While selection can therefore clearly play a key role in creating clines, it is theoretically feasible that they might be generated by genetic drift alone. It is unlikely that large-scale clines in genotype or phenotype frequency will be produced solely by drift. However, across smaller geographical scales and in smaller populations, drift could produce temporary clines. The fact that drift is a weak force upholding the cline, however, means that clines produced this way are often random (i.e. uncorrelated with environmental variables) and subject to breakdown or reversal over time. Such clines are therefore unstable and sometimes called "transient clines". Clinal structure and terminology The steepness, or gradient, of a cline reflects the extent of the differentiation in the character across a geographic range. For example, a steep cline could indicate large variation in the colour of plumage between adjacent bird populations. It has been previously outlined that such steep clines may be the result of two previously allopatric populations with a large degree of difference in the trait having only recently established gene flow, or where there is strong selection against hybrids. However, it may also reflect a sudden environmental change or boundary. 
Examples of rapidly changing environmental boundaries like this include abrupt changes in the heavy metal content of soils, and the consequent narrow clines produced between populations of Agrostis that are either adapted to these soils with high metal content, or adapted to "normal" soil. Conversely, a shallow cline indicates little geographical variation in the character or trait across a given geographical distance. This may have arisen through weak differential environmental selective pressure, or where two populations established secondary contact a long time ago and gene flow has eroded the large character differentiation between the populations. The gradient of a cline is related to another commonly referred-to property, clinal width. A cline with a steep slope is said to have a small, or narrow, width, while shallower clines have larger widths. Types of clines According to Huxley, clines can be classified into two categories: continuous clines and discontinuous stepped clines. These types of clines characterise the way that a genetic or phenotypic trait transforms from one end of the species' geographical range to the other. Continuous clines In continuous clines, all populations of the species are able to interbreed and there is gene flow throughout the entire range of the species. In this way, these clines are both biologically (no clear subgroups) and geographically (contiguous distribution) continuous. Continuous clines can be further sub-divided into smooth and stepped clines. Continuous smooth clines are characterised by the lack of any abrupt changes or delineation in the genetic or phenotypic trait across the cline, instead displaying a smooth gradation throughout. Huxley recognised that this type of cline, with its uniform slope throughout, was unlikely to be common. Continuous stepped clines consist of an overall shallow cline, interspersed by sections of much steeper slope. The shallow slope represents the populations, and the shorter, steeper sections the larger change in character between populations. Stepped clines can be further subdivided into horizontally stepped clines, and obliquely stepped clines. Horizontally stepped clines show no intra-population variation or gradation in the character, therefore displaying a horizontal gradient. These uniform populations are connected by steeper sections of the cline, characterised by larger changes in the form of the character. However, because in continuous clines all populations exchange genetic material, the intergradation zone between the groups can never have a vertical slope. In obliquely stepped clines, conversely, each population also demonstrates a cline in the character, albeit of a shallower slope than the clines connecting the populations together. Huxley compared obliquely stepped clines to a "stepped ramp", rather than taking on the formation of a staircase as in the case of horizontally stepped clines. Discontinuous stepped clines Unlike in continuous clines, in discontinuous clines the populations of the species are allopatric, meaning there is very little or no gene flow amongst populations. The genetic or phenotypic trait in question always shows a steeper gradient between groups than within groups, as in continuous clines. 
Discontinuous clines follow the same principles as continuous clines by displaying either horizontally stepped clines, where intra-group variation is very small or non-existent and the geographic space separating groups shows a sharp change in character, or obliquely stepped clines, where there is some intra-group gradation, but this is less than the gradation in the character between populations. Clines and speciation It was originally assumed that geographic isolation was a necessary precursor to speciation (allopatric speciation). The possibility that clines may be a precursor to speciation was therefore ignored, as they were assumed to be evidence of the fact that in contiguous populations gene flow was too strong a force of homogenisation, and selection too weak a force of differentiation, for speciation to take place. However, the existence of particular types of clines, such as ring species, in which populations did not differentiate in allopatry but the terminal ends of the cline nonetheless do not interbreed, casts into doubt whether complete geographical isolation of populations is an absolute requirement for speciation. Because clines can exist in populations connected by some degree of gene flow, the generation of new species from a previously clinal population is termed parapatric speciation. Both extrinsic and intrinsic selection can serve to generate varying degrees of reproductive isolation and thereby instigate the process of speciation. For example, through environmental selection acting on populations and favouring particular allele frequencies, large genetic differences between populations may accumulate (this would be reflected in clinal structure by the presence of numerous very steep clines). If the local genetic differences are great enough, it may lead to unfavourable combinations of genotypes and therefore to hybrids being at a decreased fitness relative to the parental lines. When this hybrid disadvantage is great enough, natural selection will select for pre-zygotic traits in the homozygous parental lines that reduce the likelihood of disadvantageous hybridisation – in other words, natural selection will favour traits that promote assortative mating in the parental lines. This is known as reinforcement and plays an important role in parapatric and sympatric speciation. Clinal maps Clines can be portrayed graphically on maps using lines that show the transition in character state from one end of the geographic range to the other. Character states can, however, additionally be represented using isophenes, defined by Ernst Mayr as "lines of equal expression of a clinally varying character". In other words, areas on maps that demonstrate the same biological phenomenon or character will be connected by something that resembles a contour line. Therefore, when mapping clines, which follow a character gradation from one extreme to the other, isophenes will transect clinal lines at a right angle. Examples of clines Although the term "cline" was first officially coined by Huxley in 1938, gradients and geographic variations in the character states of species have been observed for centuries. Indeed, some gradations have been considered so ubiquitous that they have been labelled ecological "rules". 
One commonly cited example of a gradient in morphology is Gloger's Rule, named after Constantin Gloger, who observed in 1833 that environmental factors and the pigmentation of avian plumage tend to covary with each other, such that birds found in humid areas near the Equator tend to be much darker than those in more arid areas closer to the Poles. Since then, this rule has been extended to include many other animals, including flies, butterflies, and wolves. Other ecogeographical rules include Bergmann's Rule, coined by Carl Bergmann in 1857, which states that homeotherms closer to the Equator tend to be smaller than their more northerly or southerly conspecifics. One of the proposed reasons for this cline is that larger animals have a relatively smaller surface area to volume ratio and therefore improved heat conservancy – an important advantage in cold climates. The role of the environment in imposing a selective pressure and producing this cline has been heavily implicated due to the fact that Bergmann's Rule has been observed across many independent lineages of species and continents. For example, the house sparrow, which was introduced in the early 1850s to the eastern United States, evolved a north-south gradient in size soon after its introduction. This gradient reflects the gradient that already existed in the house sparrow's native range in Europe. Ring species are a distinct type of cline where the geographical distribution in question is circular in shape, so that the two ends of the cline overlap with one another, giving two adjacent populations that rarely interbreed due to the cumulative effect of the many changes in phenotype along the cline. The populations elsewhere along the cline interbreed with their geographically adjacent populations as in a standard cline. In the case of Larus gulls, the habitats of the end populations even overlap, which introduces questions as to what constitutes a species: nowhere along the cline can a line be drawn between the populations, but they are unable to interbreed. In humans, clines in the frequency of blood types have allowed scientists to infer past population migrations. For example, the Type B blood group reaches its highest frequency in Asia, but becomes less frequent further west. From this, it has been possible to infer that some Asian populations migrated towards Europe around 2,000 years ago, causing genetic admixture in an isolation by distance model. In contrast to this cline, blood Type A shows the reverse pattern, reaching its highest frequency in Europe and declining in frequency towards Asia. References Ecology terminology Evolutionary biology Genetic genealogy Kinship and descent Landscape ecology Modern human genetic history Population genetics Biological classification Species
Crystallization
Crystallization is the process by which solids form, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, cooling rate, and in the case of liquid crystals, time of fluid evaporation. Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc. The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances). Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal. Process The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties. Nucleation is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure – note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure. The crystal growth is the subsequent size increase of the nuclei that succeed in achieving the critical cluster size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization, as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either nucleation or growth may be predominant over the other, dictating crystal size. Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon called polymorphism. 
Certain polymorphs may be metastable, meaning that although they are not in thermodynamic equilibrium, they are kinetically stable and require some input of energy to initiate a transformation to the equilibrium phase. Each polymorph is in fact a different thermodynamic solid state, and crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in the industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by varying factors such as temperature, such as in the transformation of anatase to rutile phases of titanium dioxide. In nature There are many examples of natural processes that involve crystallization. Geological time scale process examples include: Natural (mineral) crystal formation (see also gemstone); Stalactite/stalagmite and ring formation; Human time scale process examples include: Snowflake formation; Honey crystallization (nearly all types of honey crystallize). Methods Crystal formation can be divided into two types, where the first type of crystals is composed of a cation and anion, also known as a salt, such as sodium acetate. The second type of crystals is composed of uncharged species, for example menthol. Crystals can be formed by various methods, such as: cooling, evaporation, addition of a second solvent to reduce the solubility of the solute (a technique known as antisolvent or drown-out), solvent layering, sublimation, changing the cation or anion, as well as other methods. The formation of a supersaturated solution does not guarantee crystal formation, and often a seed crystal or scratching the glass is required to form nucleation sites. A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to slowly cool. Crystals that form are then filtered and washed with a solvent in which they are not soluble, but which is miscible with the mother liquor. The process is then repeated to increase the purity in a technique known as recrystallization. For biological molecules, in which the solvent channels continue to be present to retain the three-dimensional structure intact, microbatch crystallization under oil and vapor diffusion have been the common methods. Typical equipment Equipment for the main industrial processes for crystallization. Tank crystallizers. Tank crystallization is an old method still used in some specialized cases. Saturated solutions, in tank crystallization, are allowed to cool in open tanks. After a period of time the mother liquor is drained and the crystals removed. Nucleation and size of crystals are difficult to control. Typically, labor costs are very high. Mixed-Suspension, Mixed-Product-Removal (MSMPR): MSMPR is used for much larger scale inorganic crystallization. MSMPR can crystallize solutions in a continuous manner. Thermodynamic view The crystallization process appears to violate the second principle of thermodynamics. Whereas most processes that yield more orderly results are achieved by applying heat, crystals usually form at lower temperatures, especially by supercooling. However, the release of the heat of fusion during crystallization causes the entropy of the universe to increase, thus this principle remains unaltered. 
The molecules within a pure, perfect crystal, when heated by an external source, will become liquid. This occurs at a sharply defined temperature (different for each type of crystal). As it liquefies, the complicated architecture of the crystal collapses. Melting occurs because the entropy (S) gain in the system by spatial randomization of the molecules has overcome the enthalpy (H) loss due to breaking the crystal packing forces: T·ΔS > ΔH. Regarding crystals, there are no exceptions to this rule. Similarly, when the molten crystal is cooled, the molecules will return to their crystalline form once the temperature falls below the turning point. This is because the thermal randomization of the surroundings compensates for the loss of entropy that results from the reordering of molecules within the system. Such liquids that crystallize on cooling are the exception rather than the rule. The nature of the crystallization process is governed by both thermodynamic and kinetic factors, which can make it highly variable and difficult to control. Factors such as impurity level, mixing regime, vessel design, and cooling profile can have a major impact on the size, number, and shape of crystals produced. Dynamics As mentioned above, a crystal is formed following a well-defined pattern, or structure, dictated by forces acting at the molecular level. As a consequence, during its formation process the crystal is in an environment where the solute concentration reaches a certain critical value, before changing status. Solid formation, impossible below the solubility threshold at the given temperature and pressure conditions, may then take place at a concentration higher than the theoretical solubility level. The difference between the actual value of the solute concentration at the crystallization limit and the theoretical (static) solubility threshold is called supersaturation and is a fundamental factor in crystallization. Nucleation Nucleation is the initiation of a phase change in a small region, such as the formation of a solid crystal from a liquid solution. It is a consequence of rapid local fluctuations on a molecular scale in a homogeneous phase that is in a state of metastable equilibrium. Total nucleation is the sum effect of two categories of nucleation – primary and secondary. Primary nucleation Primary nucleation is the initial formation of a crystal where there are no other crystals present or where, if there are crystals present in the system, they do not have any influence on the process. This can occur in two conditions. The first is homogeneous nucleation, which is nucleation that is not influenced in any way by solids. These solids include the walls of the crystallizer vessel and particles of any foreign substance. The second category, then, is heterogeneous nucleation. This occurs when solid particles of foreign substances cause an increase in the rate of nucleation that would otherwise not be seen without the existence of these foreign particles. Homogeneous nucleation rarely occurs in practice due to the high energy necessary to begin nucleation without a solid surface to catalyze the nucleation. 
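The energy barrier behind this last point can be made concrete with the classical nucleation theory picture, a standard textbook result that is assumed here rather than taken from the text above: the free energy of forming a spherical cluster of radius r balances a favourable volume term against an unfavourable surface term,

\[
\Delta G(r) = \tfrac{4}{3}\pi r^{3}\,\Delta G_{v} + 4\pi r^{2}\gamma,
\qquad
r^{*} = -\frac{2\gamma}{\Delta G_{v}},
\qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta G_{v}^{2}},
\]

where \(\Delta G_{v} < 0\) is the free-energy change per unit volume on crystallizing and \(\gamma\) is the crystal–solution interfacial energy. Clusters smaller than the critical radius r* tend to redissolve, which is why homogeneous nucleation requires either high supersaturation (a more negative \(\Delta G_{v}\)) or a foreign surface that effectively lowers the barrier, as in the heterogeneous case.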
Primary nucleation (both homogeneous and heterogeneous) has been modeled as follows: B = dN/dt = k_n(c − c*)^n, where B is the number of nuclei formed per unit volume per unit time, N is the number of nuclei per unit volume, k_n is a rate constant, c is the instantaneous solute concentration, c* is the solute concentration at saturation, (c − c*) is also known as supersaturation, and n is an empirical exponent that can be as large as 10, but generally ranges between 3 and 4. Secondary nucleation Secondary nucleation is the formation of nuclei attributable to the influence of the existing microscopic crystals in the magma. More simply put, secondary nucleation is when crystal growth is initiated with contact of other existing crystals or "seeds". The first type of known secondary nucleation is attributable to fluid shear, the other to collisions of already existing crystals with either a solid surface of the crystallizer or with other crystals. Fluid-shear nucleation occurs when liquid travels across a crystal at a high speed, sweeping away nuclei that would otherwise be incorporated into a crystal, causing the swept-away nuclei to become new crystals. Contact nucleation has been found to be the most effective and common method for nucleation. The benefits include the following: Low kinetic order and rate-proportional to supersaturation, allowing easy control without unstable operation. Occurs at low supersaturation, where growth rate is optimal for good quality. Low necessary energy at which crystals strike avoids the breaking of existing crystals into new crystals. The quantitative fundamentals have already been isolated and are being incorporated into practice. The following model, although somewhat simplified, is often used to model secondary nucleation: B = k_1 M_T^j (c − c*)^b, where k_1 is a rate constant, M_T is the suspension density, j is an empirical exponent that can range up to 1.5, but is generally 1, and b is an empirical exponent that can range up to 5, but is generally 2. Growth Once the first small crystal, the nucleus, forms, it acts as a convergence point (if unstable due to supersaturation) for molecules of solute touching – or adjacent to – the crystal so that it increases its own dimension in successive layers. The pattern of growth resembles the rings of an onion, as shown in the picture, where each colour indicates the same mass of solute; this mass creates increasingly thin layers due to the increasing surface area of the growing crystal. The supersaturated solute mass the original nucleus may capture in a time unit is called the growth rate, expressed in kg/(m²·h), and is a constant specific to the process. Growth rate is influenced by several physical factors, such as surface tension of solution, pressure, temperature, relative crystal velocity in the solution, Reynolds number, and so forth. The main values to control are therefore: Supersaturation value, as an index of the quantity of solute available for the growth of the crystal; Total crystal surface in unit fluid mass, as an index of the capability of the solute to fix onto the crystal; Retention time, as an index of the probability of a molecule of solute to come into contact with an existing crystal; Flow pattern, again as an index of the probability of a molecule of solute to come into contact with an existing crystal (higher in laminar flow, lower in turbulent flow, but the reverse applies to the probability of contact). 
The first value is a consequence of the physical characteristics of the solution, while the others define a difference between a well- and poorly designed crystallizer. Size distribution The appearance and size range of a crystalline product is extremely important in crystallization. If further processing of the crystals is desired, large crystals with uniform size are important for washing, filtering, transportation, and storage, because large crystals are easier to filter out of a solution than small crystals. Also, larger crystals have a smaller surface area to volume ratio, leading to a higher purity. This higher purity is due to less retention of mother liquor, which contains impurities, and a smaller loss of yield when the crystals are washed to remove the mother liquor. In special cases, for example during drug manufacturing in the pharmaceutical industry, small crystal sizes are often desired to improve drug dissolution rate and bioavailability. The theoretical crystal size distribution can be estimated as a function of operating conditions with a fairly complicated mathematical process called population balance theory (using population balance equations). Main crystallization processes Some of the important factors influencing solubility are: Concentration Temperature Solvent mixture composition Polarity Ionic strength So one may identify two main families of crystallization processes: Cooling crystallization Evaporative crystallization This division is not really clear-cut, since hybrid systems exist, where cooling is performed through evaporation, thus obtaining at the same time a concentration of the solution. A crystallization process often referred to in chemical engineering is fractional crystallization. This is not a different process, but rather a special application of one (or both) of the above. Cooling crystallization Application Most chemical compounds, dissolved in most solvents, show so-called direct solubility, that is, the solubility threshold increases with temperature. So, whenever the conditions are favorable, crystal formation results from simply cooling the solution. Here cooling is a relative term: austenite crystals in a steel form well above 1000 °C. An example of this crystallization process is the production of Glauber's salt, a crystalline form of sodium sulfate. In the diagram, where equilibrium temperature is on the x-axis and equilibrium concentration (as mass percent of solute in saturated solution) on the y-axis, it is clear that sulfate solubility quickly decreases below 32.5 °C. Assuming a saturated solution at 30 °C, by cooling it to 0 °C (note that this is possible thanks to the freezing-point depression), the precipitation of a mass of sulfate occurs corresponding to the change in solubility from 29% (equilibrium value at 30 °C) to approximately 4.5% (at 0 °C) – actually a larger crystal mass is precipitated, since sulfate entrains hydration water, and this has the side effect of increasing the final concentration. There are limitations in the use of cooling crystallization: Many solutes precipitate in hydrate form at low temperatures: in the previous example this is acceptable, and even useful, but it may be detrimental when, for example, the mass of water of hydration needed to reach a stable hydrate crystallization form is more than the available water: a single block of hydrate solute will be formed – this occurs in the case of calcium chloride; Maximum supersaturation will take place in the coldest points. 
These may be the heat exchanger tubes, which are sensitive to scaling, and heat exchange may be greatly reduced or discontinued; A decrease in temperature usually implies an increase of the viscosity of a solution. Too high a viscosity may give hydraulic problems, and the laminar flow thus created may affect the crystallization dynamics. It is not applicable to compounds having reverse solubility, a term to indicate that solubility increases with temperature decrease (an example occurs with sodium sulfate, where solubility is reversed above 32.5 °C). Cooling crystallizers The simplest cooling crystallizers are tanks provided with a mixer for internal circulation, where temperature decrease is obtained by heat exchange with an intermediate fluid circulating in a jacket. These simple machines are used in batch processes, as in the processing of pharmaceuticals, and are prone to scaling. Batch processes normally provide a relatively variable quality of the product over the course of the batch. The Swenson-Walker crystallizer is a model, specifically conceived by Swenson Co. around 1920, having a semicylindric horizontal hollow trough in which a hollow screw conveyor or some hollow discs, in which a refrigerating fluid is circulated, plunge during rotation on a longitudinal axis. The refrigerating fluid is sometimes also circulated in a jacket around the trough. Crystals precipitate on the cold surfaces of the screw/discs, from which they are removed by scrapers and settle on the bottom of the trough. The screw, if provided, pushes the slurry towards a discharge port. A common practice is to cool the solutions by flash evaporation: when a liquid at a given temperature T0 is transferred into a chamber at a pressure P1 such that the liquid saturation temperature T1 at P1 is lower than T0, the liquid will release heat according to the temperature difference, and a quantity of solvent will evaporate whose total latent heat of vaporization equals the enthalpy difference. In simple words, the liquid is cooled by evaporating a part of it. In the sugar industry, vertical cooling crystallizers are used to exhaust the molasses in the last crystallization stage downstream of vacuum pans, prior to centrifugation. The massecuite enters the crystallizers at the top, and cooling water is pumped through pipes in counterflow. Evaporative crystallization Another option is to obtain, at an approximately constant temperature, the precipitation of the crystals by increasing the solute concentration above the solubility threshold. To obtain this, the solute/solvent mass ratio is increased using the technique of evaporation. This process is insensitive to change in temperature (as long as the hydration state remains unchanged). All considerations on control of crystallization parameters are the same as for the cooling models. Evaporative crystallizers Most industrial crystallizers are of the evaporative type, such as the very large sodium chloride and sucrose units, whose production accounts for more than 50% of the total world production of crystals. The most common type is the forced circulation (FC) model (see evaporator). A pumping device (a pump or an axial flow mixer) keeps the crystal slurry in homogeneous suspension throughout the tank, including the exchange surfaces; by controlling pump flow, control of the contact time of the crystal mass with the supersaturated solution is achieved, together with reasonable velocities at the exchange surfaces. 
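As a brief aside, the Glauber's salt example above (29% solute at 30 °C falling to about 4.5% at 0 °C) can be checked with a simple mass balance. The Python sketch below does this on an anhydrous basis; the 100 kg feed basis is an assumption made for illustration, and, as the text notes, the real crystal mass is larger because the salt precipitates as a hydrate that entrains water.

# Back-of-the-envelope cooling-crystallization mass balance (anhydrous basis, illustrative only).
feed = 100.0                  # kg of saturated solution at 30 °C (assumed basis)
w_hot, w_cold = 0.29, 0.045   # solute mass fractions at 30 °C and 0 °C (values from the text)

solute = feed * w_hot         # kg of sodium sulfate in the feed
water = feed - solute         # kg of water, assumed unchanged (no evaporation, anhydrous crystals)

# At 0 °C the remaining liquor is again saturated: dissolved / (dissolved + water) = w_cold
dissolved = w_cold * water / (1.0 - w_cold)
precipitated = solute - dissolved

print(f"still dissolved at 0 °C: {dissolved:.1f} kg; crystallized: {precipitated:.1f} kg per {feed:.0f} kg of feed")
# roughly 3.3 kg remains in solution and about 25.7 kg crystallizes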
The Oslo crystallizer is a refining of the evaporative forced circulation crystallizer, now equipped with a large crystal settling zone to increase the retention time (usually low in the FC) and to roughly separate heavy slurry zones from clear liquid. Evaporative crystallizers tend to yield larger average crystal sizes and narrow the crystal size distribution curve. DTB crystallizer Whatever the form of the crystallizer, to achieve effective process control it is important to control the retention time and the crystal mass, to obtain the optimum conditions in terms of crystal specific surface and the fastest possible growth. This is achieved by a separation – to put it simply – of the crystals from the liquid mass, in order to manage the two flows in a different way. The practical way is to perform a gravity settling to be able to extract (and possibly recycle separately) the (almost) clear liquid, while managing the mass flow around the crystallizer to obtain a precise slurry density elsewhere. A typical example is the DTB (Draft Tube and Baffle) crystallizer, an idea of Richard Chisum Bennett (a Swenson engineer and later President of Swenson) at the end of the 1950s. The DTB crystallizer has an internal circulator, typically an axial flow mixer, pushing upwards in a draft tube, while outside the crystallizer there is a settling area in an annulus; in it the exhaust solution moves upwards at a very low velocity, so that large crystals settle – and return to the main circulation – while only the fines, below a given grain size, are extracted and eventually destroyed by increasing or decreasing temperature, thus creating additional supersaturation. A quasi-perfect control of all parameters is achieved, as DTB crystallizers offer superior control over crystal size and characteristics. This crystallizer, and the derivative models (Krystal, CSC, etc.), could be the ultimate solution if not for a major limitation in the evaporative capacity, due to the limited diameter of the vapor head and the relatively low external circulation not allowing large amounts of energy to be supplied to the system. See also Abnormal grain growth Chiral resolution by crystallization Crystal habit Crystal structure Crystallite Fractional crystallization (chemistry) Igneous differentiation Laser heated pedestal growth Micro-pulling-down Protein crystallization Pumpable ice technology Quasicrystal Recrystallization (chemistry) Recrystallization (metallurgy) Seed crystal Single crystal Symplectite Vitrification X-ray crystallography References Further reading "Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website Arkenbout-de Vroome, Tine (1995). Melt Crystallization Technology CRC Geankoplis, C.J. (2003) "Transport Processes and Separation Process Principles". 4th Ed. Prentice-Hall Inc. Glynn P.D. and Reardon E.J. (1990) "Solid-solution aqueous-solution equilibria: thermodynamic theory and representation". Amer. J. Sci. 290, 164–201. Jancic, S. J.; Grootscholten, P.A.M.: "Industrial Crystallization", Textbook, Delft University Press and Reidel Publishing Company, Delft, The Netherlands, 1984. Mersmann, A. (2001) Crystallization Technology Handbook CRC; 2nd ed. External links Batch Crystallization Industrial Crystallization Liquid-solid separation Crystallography Laboratory techniques Phase transitions Articles containing video clips
Systems theory in anthropology
Systems theory in anthropology is an interdisciplinary, non-representative, non-referential, and non-Cartesian approach that brings together natural and social sciences to understand society in its complexity. The basic idea of a systems theory in social science is to solve the classical problem of duality: mind-body, subject-object, form-content, signifier-signified, and structure-agency. Systems theory suggests that instead of closing categories into binaries (subject-object), the system should stay open so as to allow a free flow of processes and interactions. In this way the binaries are dissolved. Complex systems in nature involve a dynamic interaction of many variables (e.g. animals, plants, insects and bacteria; predators and prey; climate, the seasons and the weather, etc.). These interactions can adapt to changing conditions but maintain a balance both between the various parts and as a whole; this balance is maintained through homeostasis. Human societies are also complex systems. Work to define complex systems scientifically arose first in math in the late 19th century, and was later applied to biology in the 1920s to explain ecosystems, then later to social sciences. Anthropologist Gregory Bateson is the most influential and earliest propagator of systems theory in social sciences. In the 1940s, as a result of the Macy conferences, he immediately recognized its application to human societies with their many variables and the flexible but sustainable balance that they maintain. Bateson describes a system as "any unit containing feedback structure and therefore competent to process information." Thus an open system allows interaction between concepts and materiality, or subject and the environment, or abstract and real. In natural science, systems theory has been a widely used approach. The Austrian biologist Karl Ludwig von Bertalanffy developed the idea of the general systems theory (GST). The GST is a multidisciplinary approach to systems analysis. Main concepts in systems theory Non-representational and non-referential One of the central elements of systems theory is to move away from the representational system to the non-representation of things. What this means is that instead of imposing mental concepts onto objects, which reduces the complexity of a materiality by limiting its variation or malleability, one should trace the network of things. According to Gregory Bateson, "ethos, eidos, sociology, economics, cultural structure, social structure, and all the rest of these words refer only to scientists' ways of putting the jigsaw puzzle." Tracing, rather than projecting mental images, brings into sight the material reality that has been obscured under universalizing concepts. Non-Cartesian Since the European Enlightenment, Western philosophy has placed the individual, as an indispensable category, at the center of the universe. René Descartes' famous aphorism, 'I think therefore I am', asserts that a person is a rational subject whose feature of thinking brings the human into existence. The Cartesian subject, therefore, is a scientific individual who imposes mental concepts on things in order to control nature, or simply what exists outside his mind. This subject-centered view of the universe has reduced the complex nature of the universe. One of the biggest challenges for systems theory is thus to displace or de-center the Cartesian subject as the center of the universe and as a rational being. 
The idea is to make human beings not a supreme entity but rather to situate them like any other being in the universe. Humans are not thinking Cartesian subjects but beings who dwell alongside nature. This brings the human back to its original place and introduces nature into the equation. Systems theory, therefore, encourages a non-unitary subject in opposition to a Cartesian subject. Complexity Once the Cartesian individual is dissolved, the social sciences will move away from a subject-centered view of the world. The challenge is then how to non-represent empirical reality without reducing the complexity of a system. To put it simply, instead of representing things ourselves, we should let the things speak through us. These questions led materialist philosophers such as Deleuze and Guattari to develop a "science" for understanding reality without imposing our mental projections. The approach they encourage is that, instead of imposing conceptual ideas, we should do tracing. Tracing requires one to connect disparate assemblages or appendages not into a unified center but rather into a rhizome or an open system. Open system and closed system Ludwig Bertalanffy describes two types of systems: open systems and closed systems. Open systems are systems that allow interactions between their internal elements and the environment. An open system is defined as a "system in exchange of matter with its environment, presenting import and export, building-up and breaking-down of its material components." A living organism, for example, is an open system. Closed systems, on the other hand, are considered to be isolated from their environment. Classical thermodynamics, for instance, applies to closed systems. Tracing "systems theory" in anthropology Marx–Weber debates Although the term 'systems theory' is never mentioned in the work of Karl Marx and Max Weber, the fundamental idea behind systems theory does penetrate deeply into their understanding of social reality. One can easily see the challenges that both Marx and Weber faced in their work. Breaking away from Hegelian speculative philosophy, Marx developed a social theory based on historical materialism, arguing that it is not consciousness that determines being, but in fact, it is social being that determines consciousness. More specifically, it is human beings' social activity, labor, that causes, shapes, and informs human thinking. Based on labor, Marx develops his entire social theory, which specifically questions reified, bourgeois capitalism. Labor, class conflict, commodity, value, surplus-value, bourgeoisie, and proletariat are thus central concepts in Marxian social theory. In contrast to the Cartesian "pure and rational subjectivity," Marx introduced social activity as the force that produces rationality. He was interested in finding sophisticated, scientific universal laws of society, though contrary to positivist mechanistic approaches, which take facts as given and then develop causal relationships out of them. Max Weber found Marxist ideas useful but limited in explaining complex societal practices and activities. Drawing on the hermeneutic tradition, Weber introduced multiple rationalities into the modern schema of thinking and used an interpretive approach in understanding the meaning of a phenomenon placed in the webs of significance. Contrary to Marx, who was searching for the universal laws of society, Weber attempts an interpretive understanding of social action in order to arrive at a "causal explanation of its course and effects." 
Here the word course signifies Weber's non-deterministic approach to a phenomenon. The social actions have subjective meanings that should be understood in its given context. Weber's interpretive approach in understanding the meaning of an action in relation to its environment delineated a contextualized social framework for cultural relativism. Since we exist in webs of significance and the objective analysis would detach us from a concrete reality which we are all part of it, Weber suggested ideal-types; an analytical and conceptual construct "formed by the accentuation of one or more points of view and by the synthesis of a great many diffuse, discrete, more or less present, and occasionally absent concrete individual phenomena, which are arranged according to those one-sidedly emphasized viewpoints into a unified analytical construct." Although they are analytical concepts, they serve as reference points in interpreting the meaning of society's heterogeneous and polymorphous activities. In other words, ideal-types are simplified and typified empirical reality, but they are not reality in themselves. Bureaucracy, authority, religion, etc. are all ideal-types, according to Weber, and do not exist in the real world. They assist social scientists in selecting culturally significant elements of a larger whole which can be contrasted with each other to demonstrate their interrelationship, patterns of formation, and similar societal functions. Weber's selected ideal-types – bureaucracy, religion, and capitalism – are culturally significant variables through which he demonstrated show multiple functionalities of social behavior. Similarly, Weber emphasizes that Marxist laws are also ideal-types. The concept of class, economy, capitalism, proletariat and bourgeoisie, revolution and state, along with other Marxian models are heuristic tools for understanding a society in its context. Thus, according to Weber, Marxist ideal-types could prove fruitful only if used to access a given society. However, Weber warns of dangerousness or perniciousness in relation to Marxist ideal-types when seen as empirical reality. The reason is that Marxist practitioners have imposed analytical concepts as ahistorical and universal categories to reduce concrete-process and activities from the polymorphous actions into a simplified phenomenon. This renders social phenomena not only ahistorical but also devoid of spatio-temporal rigour, decontextualized, and categorizes chaos and ruptures under the general label of bourgeoisie exploitation. In fact, history emerged as a metanarrative of a class struggle, moving in a chronological order, and future anticipated as a revolutionary overthrow of state apparatuses by the workers. For instance, the state as an ideal-type imported to the physical world has deceived and diverted political activism away from the real sites of power such as corporations and discourses. Similarly class as an ideal-type, projected to a society, which is an ensemble of population, becomes dangerous because it marginalizes and undermines organic linkages of kinship, language, race, and ethnicity. This is a significant point because society is not composed of two conflicting classes, bourgeoisie and proletariat, and does not just have vicissitudes along economic lines. It does not exist in binaries, as Marxist ideal-types would suppose. 
In fact, it is a reality in which people of various denominations – class backgrounds, religious affiliations, kinship and family ties, gender, and ethnic and linguistic differences – do not only experience conflict, but also practice cooperation in everyday life. Thus when one inserts ideal-types into this concrete dynamic process one does categorical violence to multifariousness of the population and similarly reduces feeling, emotions, non-economic social standing such as honor, and status, as Weber describes, to economism. Moreover, the ideal-types should also be treated relevant to a context that defines and delimits the former's parameters. Weber's intervention came at the right moment when Marxism – particularly vulgar Marxism – reduced "non-economic" practices and beliefs, the superstructure, to a determined base, the mode of production. Similarly, speculative philosophy imposed its own metaphysical categories on diverse concrete realities thus making a particular instance ahistorical. Weber approaches both the methods, materialist and purely idealist, as "equally possible, but each, of it does not serve as the preparation, but as the conclusion of an investigation." To prove this point, Weber demonstrated how ethics and morality played a significant role in the rise of modern capitalism. The Protestant work ethic, for instance, functioned as sophisticated mechanism that encouraged population to "care for the self", which served as an underpinning social activity for bourgeois capitalism. Of course, work ethics was not the only element, utilitarian philosophy equally contributed in forming a bureaucratic work culture whose side-effects are all too well known to the modern world. In response to the reductive approach of economism or vulgar Marxism, as it is also known, Louis Althusser and Raymond Williams introduced new understanding to Marxist thought. Althusser and Williams introduced politics and culture as new entry points alongside the mode of production in Marxist methodology. However, there is a sharp contrast between the scholars' arguments. Taking Williams as our point of discussion, he criticizes the mechanistic approach to Marxism that encourages a close reading of Marxian concepts. Concepts such as being, consciousness, class, capital, labor, labor power, commodity, economy, politics, etc. are not closed categories but rather interactive, engaging, and open practices or praxis. Althusser, on the other hand, proposes ‘overdetermination' as multiple forces rather than isolated single force or modes of production. However, he argues that the economy is "determinant in the last instance." Closed systems In anthropology, the term 'system' is used widely for describing socio-cultural phenomena of a given society in a holistic way. For instance, kinship system, marriage system, cultural system, religious system, totemic system, etc. This systemic approach to a society shows the anxieties of the earliest anthropologists to capture the reality without reducing the complexity of a given community. In their quest of searching the underline pattern of a reality, they "discovered" the kinship system as a fundamental structure of the natives. However, their systems are closed systems because they reduce the complexity and fluidity by imposing anthropological concepts such as genealogy, kinship, heredity, marriage. Cultural relativism Franz Boas was the first anthropologist to problematize the notion of culture. 
Challenging the modern hegemony of culture, Boas introduced the idea of cultural relativism (understanding a culture in its own context). Drawing on his extensive fieldwork in the northwestern United States and British Columbia, Boas discussed culture as separate from the physical environment and from biology and, most importantly, discarded evolutionary models that represent civilization as a progressive entity following a chronological development. Moreover, cultural boundaries, according to Boas, are not barriers to intermixing and should not be seen as an obstacle to multiculturalism. In fact, boundaries must be seen as "porous and permeable," and "pluralized." His critique of the concepts of modern race and culture had political implications for the racial politics of the United States in the 1920s. In his chapter "The Race Problem in Modern Society," one can feel Boas' intellectual effort toward separating the natural from the social sciences and setting up the space for genuine political solutions to race relations.
Structural-functionalism
A. R. Radcliffe-Brown developed a structural-functionalist approach in anthropology. He believed that concrete reality is "not any sort of entity but a process, the process of social life." Radcliffe-Brown emphasized learning the social forms, especially the kinship systems, of primitive societies. The way in which one can study the pattern of life is by conceptually delineating the relations determined by kinship or marriage, "and that we can give a general analytical description of them as constituting a system." These systems consist of structure, which refers to "some sort of ordered arrangement of parts or components." The intervening variable between process and structure is function. The three concepts of process, structure, and function are thus "components of a single theory as a scheme of interpretation of human social systems." Most importantly, function "is the part it plays in, the contribution it makes to, the life of the organism as a whole." Thus the parts of the system function together to maintain harmony, or internal consistency. The British anthropologist E. R. Leach went beyond the instrumentalist argument of Radcliffe-Brown's structural-functionalism, which approached social norms, kinship, and the like in functionalist terms rather than as social fields, or arenas of contestation. According to Leach, "the nicely ordered ranking of lineage seniority conceals a vicious element of competition." In fact, Leach was sensitive to "the essential difference between the ritual description of structural relations and the anthropologist's scientific description." For instance, in his book Leach argues that "the question of whether a particular community is gumlao, or gumsa, or Shan is not necessarily ascertainable in the realm of empirical facts; it is a question, in part at any rate, of the attitudes and ideas of particular individuals at a particular time." Thus, Leach separated conceptual categories from empirical realities.
Structural anthropology
The Swiss linguist Ferdinand de Saussure, in search of universal laws of language, formulated a general science of linguistics by bifurcating language into langue, the abstract system of language, and parole, utterance or speech. Phonemes, the fundamental units of sound, are the basic structures of a language. The linguistic community gives a social dimension to a language. Moreover, linguistic signs are arbitrary, and change comes only with time, not by individual will. 
Drawing on structural linguistics, Claude Lévi-Strauss transforms the world into a text and thus subjected social phenomena to linguistic laws as formulated by Saussure. For instance, the "primitive systems" such as kinship, magic, mythologies, and rituals are scrutinized under the similar linguistic dichotomies of abstract normative system (objective) and utterance (subjective). The division did not only split social actions, but it also conditioned them to the categories of abstract systems that are made up of deep structures. For example, Lévi-Strauss suggests, "Kinship phenomena are of the same type as linguistic phenomena." As Saussure discovered phonemes as the basic structures of language, Lévi-Strauss identified (1) consanguinity, (2) affinity, and (3) descent as the deep structures of kinship. These "microsociological" levels serve "to discover the most general structural laws." The deep structures acquire meanings only with respect to the system they constitute. "Like phonemes, kinship terms are elements of meaning; like phonemes, they acquire meaning only if they are integrated into systems." Like the langue and parole distinctions of language, kinship system consists of (1) system of terminology (vocabulary), through which relationships are expressed and (2) system of attitudes (psychological or social) functions for social cohesion. To elaborate the dynamic interdependence between systems of terminology and attitudes, Lévi-Strauss rejected Radcliffe-Brown's idea that a system of attitudes is merely the manifestation of a system of terminology on the affective level. He turned to the concept of the avunculate as a part of a whole, which consists of three types of relationship consanguinity, affinity, and descent. Thus, Lévi-Strauss identified complex avuncular relationships, contrary to atomism and simplified labels of avunculate associated with matrilineal descent. Furthermore, he suggested that kinship systems "exist only in human consciousness; it is an arbitrary system of representations, not the spontaneous development of a real situation." The meaning of an element (avunculate) exists only in relation to a kinship structure. Lévi-Strauss elaborates the meaning and structure point further in his essay titled "The Sorcerer and His Magic." The sorcerer, patient, and group, according to Lévi-Strauss, comprise a shaman complex, which makes social consensus an underlying pattern for understanding. The work of a sorcerer is to reintegrate divergent expressions or feelings of patients into "patterns present in the group's culture. The assimilation of such patterns is the only means of objectivizing subjective states, of formulating inexpressible feelings, and of integrating inarticulated experiences into a system." The three examples that Lévi-Strauss mentions relate to magic, a practice reached as a social consensus, by a group of people including sorcerer and patient. It seems that people make sense of certain activities through beliefs, created by social consensus, and not by the effectiveness of magical practices. The community's belief in social consensus thus determines social roles and sets rules and categories for attitudes. Perhaps, in this essay, magic is system of terminology, a langue, whereas, individual behavior is a system of attitude, parole. Attitudes make sense or acquire meaning through magic. Here, magic is a language. 
Interpretive anthropology
Influenced by the hermeneutic tradition, Clifford Geertz developed an interpretive anthropology aimed at understanding the meanings of a society. The hermeneutic approach allows Geertz to close the distance between an ethnographer and a given culture, much as a reader relates to a text. The reader reads a text and generates his or her own meaning. Instead of imposing concepts to represent reality, ethnographers should read the culture and interpret the multiplicities of meaning expressed or hidden in the society. In his influential essay "Thick Description: Toward an Interpretive Theory of Culture," Geertz argues that "man is an animal suspended in webs of significance he himself has spun."
Practice theory
The French sociologist Pierre Bourdieu challenges the same duality of phenomenology (subjective) and structuralism (objective) through his practice theory. This idea directly challenges the reductive approach of economism, which places symbolic interests in opposition to economic interests. Similarly, it also rejects a subject-centered view of the world. Bourdieu attempts to close this gap by developing the concept of symbolic capital (prestige, for instance), which is readily convertible back into economic capital and hence is 'the most valuable form of accumulation.' Therefore, the economic and the symbolic work together and should be studied as a general science of the economy of practices.
System theory: Gregory Bateson
The British anthropologist Gregory Bateson is the most influential and one of the earliest founders of systems theory in anthropology. He developed an interdisciplinary approach that included communication theory, cybernetics, and mathematical logic. In his collection of essays A Sacred Unity, Bateson argues that there are "ecological systems, social systems, and the individual organism plus the environment with which it interacts is itself a system in this technical sense." By including the environment with systems, Bateson closes the gap between dualities such as subject and object. "Playing upon the differences between formalization and process, or crystallization and randomness, Bateson sought to transcend other dualisms–mind versus nature, organism versus environment, concept versus context, and subject versus object." Bateson set out the general rule of systems theory. He says: The basic rule of systems theory is that, if you want to understand some phenomenon or appearance, you must consider that phenomenon within the context of all completed circuits which are relevant to it. The emphasis is on the concept of the completed communicational circuit and implicit in the theory is the expectation that all units containing completed circuits will show mental characteristics. The mind, in other words, is immanent in the circuitry. We are accustomed to thinking of the mind as somehow contained within the skin of an organism, but the circuitry is not contained within the skin.
Poststructuralist influence
Bateson's work influenced major poststructuralist scholars, especially Gilles Deleuze and Félix Guattari. In fact, the very word 'plateau' in Deleuze and Guattari's magnum opus, A Thousand Plateaus, came from Bateson's work on Balinese culture. They wrote: "Gregory Bateson uses the word plateau to designate something very special: a continuous, self-vibrating region of intensities whose development avoids any orientation toward a culmination point or external end." Bateson pioneered an interdisciplinary approach in anthropology. 
He coined the term "ecology of mind" to demonstrate that what "goes on in one's head and in one's behavior" is interlocked and constitutes a network. Guattari wrote: Gregory Bateson has clearly shown that what he calls the "ecology of ideas" cannot be contained within the domain of the psychology of the individual, but organizes itself into systems or "minds", the boundaries of which no longer coincide with the participant individuals. Posthumanist turn and ethnographic writing In anthropology, the task of representing a native point of view has been a challenging one. The idea behind the ethnographic writing is to understand a complexity of an everyday life of the people without undermining or reducing the native account. Historically, as mentioned above, ethnographers insert raw data, collected in the fieldwork, into the writing "machine". The output is usually the neat categories of ethnicity, identity, classes, kinship, genealogy, religion, culture, violence, and numerous other. With the posthumanist turn, however, the art of ethnographic writing has suffered serious challenges. Anthropologists are now thinking of experimenting with new style of writing. For instance, writing with natives or multiple authorship. See also Complex systems Social systems Systems science Systems theory References Further reading Gregory Bateson, A Sacred Unity: Further Steps to an Ecology of Mind Ludwig von Bertalanffy. General System Theory: Foundations, Development, Applications. Revised edition. New York: George Braziller. Rosi Braidotti. Nomadic Subjects: Embodiment and Sexual Difference in Contemporary Feminist Theory. New York: Columbia UP 1994. ---. Transpositions: On Nomadic Ethics. Cambridge, UK; Malden, MA: Polity, 2006. Georges Canguilhem. The Normal and the Pathological. Trans. Carolyn R. Fawcett. New York: Zone, 1991. Lilie Chouliaraki and Norman Fairclough. Discourse in Late Modernity: Rethinking Critical Discourse Analysis. Edinburgh: Edinburgh UP, 2000. Manuel De Landa, A Thousand Years of Nonlinear History. New York: Zone Books. 1997. ---. A New Philosophy of Society: Assemblage Theory and Social Complexity. New York: Continuum, 2006. Gilles Deleuze and Félix Guattari. Anti-Œdipus: Capitalism and Schizophrenia. Minneapolis: U of Minnesota P, 1987. ---. Thousand Plateaus. Minneapolis: U of Minnesota P, 1987. Jürgen Habermas. Theory of Communicative Action, Vol. 1. Trans. Thomas McCarthy. Boston: Beacon, 1985. ---. Theory of Communicative Action, Vol. 2. Trans. Thomas McCarthy. Boston: Beacon, 1985. Stuart Hall, ed. Representation: Cultural Representations and Signifying Practices. Thousand Oaks, CA: Sage, 1997. Donna Haraway. "A Cyborg Manifesto." Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledte, 1991. 149-181 Julia Kristeva. "The System and the Speaking Subject." The Kristeva Reader. Ed. Toril Moi. Oxford: Basil Blackwell, 1986. 24–33. (see also <http://www.kristeva.fr/> & <http://www.phillwebb.net/History/TwentiethCentury/continental/(post)structuralisms/StructuralistPsychoanalysis/Kristeva/Kristeva.htm>) Ervin Laszlo. The Systems View of the World: A Holistic Vision for Our Time. New York: Hampton Press, 1996. Bruno Latour. Reassembling the Social: An Introduction to Actor-Network Theory. New York: Oxford UP, 2007 Niklas Luhmann. Art as a Social System. Trans. Eva M. Knodt. Stanford, CA: Stanford UP, 2000. ---. Social Systems. Stanford, CA: Trans. John Bednarz, Jr., with Dirk Baecker. Stanford UP, 1996. Nina Lykke and Rosi Braidotti, eds. 
Monsters, Goddesses and Cyborgs: Feminist Confrontations with Science, Medicine and Cyberspace. London: Zed Books, 1996. Humberto Maturana and Bernhard Pörksen. From Being to Doing: The Origins of the Biology of Cognition. Trans. Wolfram Karl Koeck and Alison Rosemary Koeck. Heidelberg: Carl-Auer Verlag, 2004. Humberto Maturana and F. J. Varela. Autopoiesis and Cognition: The Realization of the Living. New York: Springer, 1991. Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London, New York: Verso, 2005. Paul R. Samson and David Pitt, eds. The Biosphere and Noosphere Reader: Global Environment, Society and Change. London, New York: Routledge, 2002 [1999]. 0-415-16645-4 EBOOK online from UT library John Tresch (1998). "Heredity is an Open System: Gregory Bateson as Descendant and Ancestor". In: Anthropology Today, Vol. 14, No. 6 (Dec., 1998), pp. 3–6. Vladimir I. Vernadsky. The Biosphere. Trans. David B. Langmuir. New York: Copernicus/Springer Verlag, 1997. External links New England Complex System Institute Commonwealth Scientific and Industrial Research Organisation (CSIRO) Anthropology Cybernetics
Transdisciplinarity
Transdisciplinarity connotes a research strategy that crosses disciplinary boundaries to create a holistic approach. It applies to research efforts focused on problems that cross the boundaries of two or more disciplines, such as research on effective information systems for biomedical research (see bioinformatics), and can refer to concepts or methods that were originally developed by one discipline, but are now used by several others, such as ethnography, a field research method originally developed in anthropology but now widely used by other disciplines. The Belmont Forum elaborated that a transdisciplinary approach is enabling inputs and scoping across scientific and non-scientific stakeholder communities and facilitating a systemic way of addressing a challenge. This includes initiatives that support the capacity building required for the successful transdisciplinary formulation and implementation of research actions. Usage Transdisciplinarity has two common meanings: German usage In German-speaking countries, Transdisziplinarität refers to the integration of diverse forms of research, and includes specific methods for relating knowledge in problem-solving. A 2003 conference held at the University of Göttingen showcased the diverse meanings of multi-, inter- and transdisciplinarity and made suggestions for converging them without eliminating present usages. When the very nature of a problem is under dispute, transdisciplinarity can help determine the most relevant problems and research questions involved. A first type of question concerns the cause of the present problems and their future development (system knowledge). Another concerns which values and norms can be used to form goals of the problem-solving process (target knowledge). A third relates to how a problematic situation can be transformed and improved (transformation knowledge). Transdisciplinarity requires adequate addressing of the complexity of problems and the diversity of perceptions of them, that abstract and case-specific knowledge are linked, and that practices promote the common good. Transdisciplinarity arises when participating experts interact in an open discussion and dialogue, giving equal weight to each perspective and relating them to each other. This is difficult because of the overwhelming amount of information involved, and because of incommensurability of specialized languages in each field of expertise. To excel under these conditions, researchers need not only in-depth knowledge and know-how of the disciplines involved, but skills in moderation, mediation, association and transfer. Wider usage Transdisciplinarity is also used to signify a unity of knowledge beyond disciplines. Jean Piaget introduced this usage of the term in 1970, and in 1987, the International Center for Transdisciplinary Research (CIRET) adopted the Charter of Transdisciplinarity at the 1st World Congress of Transdisciplinarity, Convento da Arrabida, Portugal, November 1994. In the CIRET approach, transdisciplinarity is radically distinct from interdisciplinarity. Interdisciplinarity, like pluridisciplinarity, concerns the transfer of methods from one discipline to another, allowing research to spill over disciplinary boundaries, but staying within the framework of disciplinary research. As the prefix "trans" indicates, transdisciplinarity concerns that which is at once between the disciplines, across the different disciplines, and beyond each individual discipline. 
Its goal is the understanding of the present world, of which one of the imperatives is the overarching unity of knowledge. Another critical defining characteristic of transdisciplinary research is the inclusion of stakeholders in defining research objectives and strategies in order to better incorporate the diffusion of learning produced by the research. Collaboration between stakeholders is deemed essential – not merely at an academic or disciplinary collaboration level, but through active collaboration with people affected by the research and community-based stakeholders. In such a way, transdisciplinary collaboration becomes uniquely capable of engaging with different ways of knowing the world, generating new knowledge, and helping stakeholders understand and incorporate the results or lessons learned by the research. Transdisciplinarity is defined by Basarab Nicolescu through three methodological postulates: the existence of levels of Reality, the logic of the included middle, and complexity. In the presence of several levels of Reality the space between disciplines and beyond disciplines is full of information. Disciplinary research concerns, at most, one and the same level of Reality; moreover, in most cases, it only concerns fragments of one level of Reality. On the contrary, transdisciplinarity concerns the dynamics engendered by the action of several levels of Reality at once. The discovery of these dynamics necessarily passes through disciplinary knowledge. While not a new discipline or a new superdiscipline, transdisciplinarity is nourished by disciplinary research; in turn, disciplinary research is clarified by transdisciplinary knowledge in a new, fertile way. In this sense, disciplinary and transdisciplinary research are not antagonistic but complementary. As in the case of disciplinarity, transdisciplinary research is not antagonistic but complementary to multidisciplinarity and interdisciplinarity research. According to Nicolescu, transdisciplinarity is nevertheless radically distinct from multidisciplinarity and interdisciplinarity because of its goal, the understanding of the present world, which cannot be accomplished in the framework of disciplinary research. The goal of multidisciplinarity and interdisciplinarity always remains within the framework of disciplinary research. If transdisciplinarity is often confused with interdisciplinarity or multidisciplinarity (and by the same token, we note that interdisciplinarity is often confused with multidisciplinarity) this is explained in large part by the fact that all three overflow disciplinary boundaries. Advocates maintain this confusion hides the huge potential of transdisciplinarity. One of the best known professionals of transdisciplinarity in Argentina is Pablo Tigani, and his concept about transdisciplinarity is: Currently, transdisciplinarity is a consolidated academic field that is giving rise to new applied researches, especially in Latin America and the Caribbean. In this sense, the transdisciplinary and biomimetics research of Javier Collado on Big History represents an ecology of knowledge between scientific knowledge and the ancestral wisdom of native peoples, such as Indigenous peoples in Ecuador. 
According to Collado, the transdisciplinary methodology applied in the field of Big History seeks to understand the interconnections of the human race with the different levels of reality that co-exist in nature and in the cosmos, and this includes mystical and spiritual experiences, very present in the rituals of shamanism with ayahuasca and other sacred plants. In abstract, the teaching of Big History in universities of Brazil, Ecuador, Colombia, and Argentina implies a transdisciplinary vision that integrates and unifies diverse epistemes that are in, between, and beyond the scientific disciplines, that is, including ancestral wisdom, spirituality, art, emotions, mystical experiences and other dimensions forgotten in the history of science, specially by the positivist approach. Transdisciplinary education Transdisciplinary education is education that brings integration of different disciplines in a harmonious manner to construct new knowledge and uplift the learner to higher domains of cognitive abilities and sustained knowledge and skills. It involves better neural networking for lifelong learning. Transdisciplinarity has been flagged internationally as an important aim of education. For example, Global Education Magazine, an international journal supported by UNESCO and UNHCR: "transdisciplinarity represents the capable germ to promote an endogenous development of the evolutionary spirit of internal critical consciousness, where religion and science are complementary. Respect, solidarity and cooperation should be global standards for the entire human development with no boundaries. This requires a radical change in the ontological models of sustainable development, global education and world-society. We must rely on the recognition of a plurality of models, cultures and socio-economical diversification. As well as biodiversity is the way for the emergence of new species, cultural diversity represents the creative potential of world-society." Influence in disciplines and fields Arts and humanities Transdisciplinarity can be found in the arts and humanities. For example, the Planetary Collegium seeks "the development of transdisciplinary discourse in the convergence of art, science, technology and consciousness research." The Plasticities Sciences Arts (PSA) research group also develops transdisciplinary approaches regarding humanities and fundamental sciences relationships as well as the Art & Science field. An example of transdisciplinary research in the arts and humanities can be seen in Lucy Jeffery's study on the work of Samuel Beckett, entitled Transdisciplinary Beckett: Visual Arts, Music, and the Creative Process. Human sciences The range of transdisciplinarity becomes clear when the four central questions of biological research ((1) causation, (2) ontogeny, (3) adaptation, (4) phylogeny [after Niko Tinbergen 1963, see also Tinbergen's four questions, cf. Aristotle: Causality / Four Major Causes]) are graphed against distinct levels of analysis (e.g. cell, organ, individual, group; [cf. "Laws about the Levels of Complexity" of Nicolai Hartmann 1940/1964, see also Rupert Riedl 1984]): In this "scheme of transdisciplinarity", all anthropological disciplines (paragraph C in the table of the pdf-file below), their questions (paragraph A: see pdf-file) and results (paragraph B: see pdf-file) can be intertwined and allocated with each other for examples how these aspects go into those little boxes in the matrix. 
This chart includes all realms of anthropological research (no one is excluded). It is the starting point for a systematical order for all human sciences, and also a source for a consistent networking and structuring of their results. This "bio-psycho-social" orientation framework is the basis for the development of the "Fundamental Theory of Human Sciences" and for a transdisciplinary consensus. (In this tabulated orientation matrix the questions and reference levels in italics are also the subject of the humanities.). Niko Tinbergen was familiar with both conceptual categories (i.e. the four central questions of biological research and the levels of analysis), the tabulation was made by Gerhard Medicus. Certainly, a humanist perspective always involves a transdisciplinary focus. A good and classic example of mixing very different sciences was the work developed by Leibniz in seventeenth-eighteenth centuries in order to create a universal system of justice. Health science The term transdisciplinarity is increasingly prevalent in health care research and has been identified as important to improving the effectiveness and efficiency in health care. Transdisciplinary within public health emphasizes integrating diverse individuals, skills, perspectives, and expertise across disciplines to dissolve traditional boundaries and develop holistic approaches linking ecosystem and human health boundaries. See also International Association of Transdisciplinary Psychology Science of team science References Citations Works cited Further reading External links Ulli Vilsmaier: What Is Transdisciplinarity? Explainer Video, TU Berlin, 2024 transdisciplinary-net, Swiss Academies of Arts and Sciences Transdisciplinary Case Studies at ETH Zurich International Center for Transdisciplinary Research The site of the International Center for Transdisciplinary Research (CIRET). E-zine "Transdisciplinary Encounters". Transdisciplinary Studies The book series dedicated to transdisciplinary research. World Knowledge Dialogue Foundation Transdisciplinary Studies at Claremont Graduate University PLASTIR : The Transdisciplinary Review of human plasticity Journal of the International Association of Transdisciplinary Psychology Academic discipline interactions Holism
Herpetology
Herpetology (from Greek ἑρπετόν herpetón, meaning "reptile" or "creeping animal") is a branch of zoology concerned with the study of amphibians (including frogs, toads, salamanders, newts, and caecilians (gymnophiona)) and reptiles (including snakes, lizards, amphisbaenids, turtles, terrapins, tortoises, crocodilians, and tuataras). Birds, which are cladistically included within Reptilia, are traditionally excluded here; the separate scientific study of birds is the subject of ornithology. The precise definition of herpetology is the study of ectothermic (cold-blooded) tetrapods. This definition of "herps" (otherwise called "herptiles" or "herpetofauna") excludes fish; however, it is not uncommon for herpetological and ichthyological scientific societies to collaborate. For instance, groups such as the American Society of Ichthyologists and Herpetologists have co-published journals and hosted conferences to foster the exchange of ideas between the fields. Herpetological societies are formed to promote interest in reptiles and amphibians, both captive and wild. Herpetological studies can offer benefits relevant to other fields by providing research on the role of amphibians and reptiles in global ecology. For example, by monitoring amphibians that are very sensitive to environmental changes, herpetologists record visible warnings that significant climate changes are taking place. Although they can be deadly, some toxins and venoms produced by reptiles and amphibians are useful in human medicine. Currently, some snake venom has been used to create anti-coagulants that work to treat strokes and heart attacks. Naming and etymology The word herpetology is from Greek: ἑρπετόν, herpetón, "creeping animal" and , -logia, "knowledge". "Herp" is a vernacular term for non-avian reptiles and amphibians. It is derived from the archaic term "herpetile", with roots back to Linnaeus's classification of animals, in which he grouped reptiles and amphibians in the same class. There are over 6700 species of amphibians and over 9000 species of reptiles. Despite its modern taxonomic irrelevance, the term has persisted, particularly in the names of herpetology, the scientific study of non-avian reptiles and amphibians, and herpetoculture, the captive care and breeding of reptiles and amphibians. Subfields The field of herpetology can be divided into areas dealing with particular taxonomic groups such as frogs and other amphibians (batrachology), snakes (ophiology or ophidiology), lizards (saurology) and turtles (cheloniology, chelonology, or testudinology). More generally, herpetologists work on functional problems in the ecology, evolution, physiology, behavior, taxonomy, or molecular biology of amphibians and reptiles. Amphibians or reptiles can be used as model organisms for specific questions in these fields, such as the role of frogs in the ecology of a wetland. All of these areas are related through their evolutionary history, an example being the evolution of viviparity (including behavior and reproduction). Careers Career options in the field of herpetology include lab research, field studies and surveys, assistance in veterinary and medical procedures, zoological staff, museum staff, and college teaching. In modern academic science, it is rare for an individual to solely consider themselves to be a herpetologist. 
Most individuals focus on a particular field such as ecology, evolution, taxonomy, physiology, or molecular biology, and within that field ask questions pertaining to or best answered by examining reptiles and amphibians. For example, an evolutionary biologist who is also a herpetologist may choose to work on an issue such as the evolution of warning coloration in coral snakes. Modern herpetological writers include Mark O'Shea and Philip Purser. Modern herpetological showmen include Jeff Corwin, Steve Irwin (popularly known as the "Crocodile Hunter"), and Austin Stevens, popularly known as "Austin Snakeman" in the TV series Austin Stevens: Snakemaster. Herpetology is an established hobby around the world due to the varied biodiversity in many environments. Many amateur herpetologists coin themselves as "herpers". Study Most colleges or universities do not offer a major in herpetology at the undergraduate or the graduate level. Instead, persons interested in herpetology select a major in the biological sciences. The knowledge learned about all aspects of the biology of animals is then applied to an individual study of herpetology. Journals Herpetology research is published in academic journals including Ichthyology & Herpetology, founded in 1913 (under the name Copeia in honour of Edward Drinker Cope); Herpetologica, founded in 1936; Reptiles and amphibians, founded in 1990; and Contemporary Herpetology, founded in 1997 and stopped publishing in 2009. See also Herping List of herpetologists List of herpetology academic journals Reptile Database AmphibiaWeb References Further reading Adler, Kraig (1989). Contributions to the History of Herpetology. Society for the Study of Amphibians and Reptiles (SSAR). Eatherley, Dan (2015). Bushmaster: Raymond Ditmars and the Hunt for the World's Largest Viper. New York: Arcade. 320 pp. . Goin, Coleman J.; Goin, Olive B.; Zug, George R. (1978). Introduction to Herpetology, Third Edition. San Francisco: W. H. Freeman and Company. xi + 378 pp. . External links Iranian Herpetological Studies Institute (IHSI) Field Herpetology Guide American Society of Ichthyologists and Herpetologists Herpetological Conservation and Biology Societas Europaea Herpetologica Distribution Maps for European Reptiles and Amphibians Center for North American Herpetology over 500 species of reptiles and amphibians European Field Herping Community New Zealand Herpetology Chicago Herpetological Society Biology of the Reptilia is an online copy of the full text of a 22-volume 13,000-page summary of the state of research of reptiles. HerpMapper is a database of reptile and amphibian sightings Amphibian and Reptile Atlas of Peninsular California, San Diego Natural History Museum A Primer on Reptiles and Amphibians Field Herp Forum Subfields of zoology Scoutcraft
Bioacoustics
Bioacoustics is a cross-disciplinary science that combines biology and acoustics. Usually it refers to the investigation of sound production, dispersion and reception in animals (including humans). This involves neurophysiological and anatomical basis of sound production and detection, and relation of acoustic signals to the medium they disperse through. The findings provide clues about the evolution of acoustic mechanisms, and from that, the evolution of animals that employ them. In underwater acoustics and fisheries acoustics the term is also used to mean the effect of plants and animals on sound propagated underwater, usually in reference to the use of sonar technology for biomass estimation. The study of substrate-borne vibrations used by animals is considered by some a distinct field called biotremology. History For a long time humans have employed animal sounds to recognise and find them. Bioacoustics as a scientific discipline was established by the Slovene biologist Ivan Regen who began systematically to study insect sounds. In 1925 he used a special stridulatory device to play in a duet with an insect. Later, he put a male cricket behind a microphone and female crickets in front of a loudspeaker. The females were not moving towards the male but towards the loudspeaker. Regen's most important contribution to the field apart from realization that insects also detect airborne sounds was the discovery of tympanal organ's function. Relatively crude electro-mechanical devices available at the time (such as phonographs) allowed only for crude appraisal of signal properties. More accurate measurements were made possible in the second half of the 20th century by advances in electronics and utilization of devices such as oscilloscopes and digital recorders. The most recent advances in bioacoustics concern the relationships among the animals and their acoustic environment and the impact of anthropogenic noise. Bioacoustic techniques have recently been proposed as a non-destructive method for estimating biodiversity of an area. Importance In the terrestrial environment, animals often use light for sensing distance, since light propagates well through air. Underwater sunlight only reaches to tens of meters depth. However, sound propagates readily through water and across considerable distances. Many marine animals can see well, but using hearing for communication, and sensing distance and location. Gauging the relative importance of audition versus vision in animals can be performed by comparing the number of auditory and optic nerves. Since the 1950s to 1960s, studies on dolphin echolocation behavior using high frequency click sounds revealed that many different marine mammal species make sounds, which can be used to detect and identify species under water. Much research in bioacoustics has been funded by naval research organizations, as biological sound sources can interfere with military uses underwater. Methods Listening is still one of the main methods used in bioacoustical research. Little is known about neurophysiological processes that play a role in production, detection and interpretation of sounds in animals, so animal behaviour and the signals themselves are used for gaining insight into these processes. Acoustic signals An experienced observer can use animal sounds to recognize a "singing" animal species, its location and condition in nature. Investigation of animal sounds also includes signal recording with electronic recording equipment. 
Due to the wide range of signal properties and media they propagate through, specialized equipment may be required instead of the usual microphone, such as a hydrophone (for underwater sounds), detectors of ultrasound (very high-frequency sounds) or infrasound (very low-frequency sounds), or a laser vibrometer (substrate-borne vibrational signals). Computers are used for storing and analysis of recorded sounds. Specialized sound-editing software is used for describing and sorting signals according to their intensity, frequency, duration and other parameters. Animal sound collections, managed by museums of natural history and other institutions, are an important tool for systematic investigation of signals. Many effective automated methods involving signal processing, data mining, machine learning and artificial intelligence techniques have been developed to detect and classify the bioacoustic signals. Sound production, detection, and use in animals Scientists in the field of bioacoustics are interested in anatomy and neurophysiology of organs involved in sound production and detection, including their shape, muscle action, and activity of neuronal networks involved. Of special interest is coding of signals with action potentials in the latter. But since the methods used for neurophysiological research are still fairly complex and understanding of relevant processes is incomplete, more trivial methods are also used. Especially useful is observation of behavioural responses to acoustic signals. One such response is phonotaxis – directional movement towards the signal source. By observing response to well defined signals in a controlled environment, we can gain insight into signal function, sensitivity of the hearing apparatus, noise filtering capability, etc. Biomass estimation Biomass estimation is a method of detecting and quantifying fish and other marine organisms using sonar technology. As the sound pulse travels through water it encounters objects that are of different density than the surrounding medium, such as fish, that reflect sound back toward the sound source. These echoes provide information on fish size, location, and abundance. The basic components of the scientific echo sounder hardware function is to transmit the sound, receive, filter and amplify, record, and analyze the echoes. While there are many manufacturers of commercially available "fish-finders," quantitative analysis requires that measurements be made with calibrated echo sounder equipment, having high signal-to-noise ratios. Animal sounds Sounds used by animals that fall within the scope of bioacoustics include a wide range of frequencies and media, and are often not "sound" in the narrow sense of the word (i.e. compression waves that propagate through air and are detectable by the human ear). Katydid crickets, for example, communicate by sounds with frequencies higher than 100 kHz, far into the ultrasound range. Lower, but still in ultrasound, are sounds used by bats for echolocation. A segmented marine worm Leocratides kimuraorum produces one of the loudest popping sounds in the ocean at 157 dB, frequencies 1–100 kHz, similar to the snapping shrimps. On the other side of the frequency spectrum are low frequency-vibrations, often not detected by hearing organs, but with other, less specialized sense organs. The examples include ground vibrations produced by elephants whose principal frequency component is around 15 Hz, and low- to medium-frequency substrate-borne vibrations used by most insect orders. 
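As a concrete illustration of the signal-description step mentioned above (sorting recordings by intensity, frequency, and duration), the following minimal Python sketch computes a spectrogram and two crude descriptors of a call. The file name 'call.wav' is a hypothetical placeholder, the recording is assumed to be mono, and a real bioacoustic pipeline would add calibration, band-pass filtering, and noise handling; only standard NumPy/SciPy routines are used.

```python
# Minimal sketch (not a production pipeline): describe a recorded call by
# duration and peak frequency from its spectrogram. 'call.wav' is a
# hypothetical placeholder for a mono recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("call.wav")            # sample rate (Hz) and raw samples
x = x.astype(np.float64)
x /= np.max(np.abs(x)) + 1e-12              # normalise amplitude

f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)

frame_energy = Sxx.sum(axis=0)                       # intensity per time frame
active = frame_energy > 0.05 * frame_energy.max()    # crude activity detector
duration_s = active.sum() * (t[1] - t[0])            # rough call duration (s)

peak_freq_hz = f[np.argmax(Sxx[:, active], axis=0)]  # dominant frequency per frame
print(f"duration ~ {duration_s:.3f} s, "
      f"median peak frequency ~ {np.median(peak_freq_hz):.0f} Hz")
```

Collections of such simple descriptors, extracted over many recordings, are the kind of input on which the automated detection and classification methods mentioned above typically operate.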
Many animal sounds, however, do fall within the frequency range detectable by a human ear, between 20 and 20,000 Hz. Mechanisms for sound production and detection are just as diverse as the signals themselves. Plant sounds In a series of scientific journal articles published between 2013 and 2016, Monica Gagliano of the University of Western Australia extended the science to include plant bioacoustics. See also Acoustic ecology Acoustical oceanography Animal communication Animal language Anthropophony Biomusic Biophony Diffusion (acoustics) Field recording Frog hearing and communication List of animal sounds List of Bioacoustics Software Music therapy Natural sounds Soundscape ecology Underwater acoustics Vocal learning Whale sound Zoomusicology Phonology References Further reading Ewing A.W. (1989): Arthropod bioacoustics: Neurobiology and behaviour. Edinburgh: Edinburgh University Press. Fletcher N. (2007): Animal Bioacoustics. IN: Rossing T.D. (ed.): Springer Handbook of Acoustics, Springer. External links ASA Animal Bioacoustics Technical Committee BioAcoustica: Wildlife Sounds Database The British Library Sound Archive has 150,000 recordings of over 10,000 species. International Bioacoustics Council links to many bioacoustics resources. Borror Laboratory of Bioacoustics at The Ohio State University has a large archive of animal sound recordings. Listen to Nature 400 examples of animal songs and calls Wildlife Sound Recording Society Bioacoustic Research Program at the Cornell Lab of Ornithology distributes a number of different free bioacoustics synthesis & analysis programs. Macaulay Library at the Cornell Lab of Ornithology is the world's largest collection of animal sounds and associated video. Xeno-canto A collection of bird vocalizations from around the world. Acoustics Zoosemiotics Soundscape ecology Sound Noise Hearing
Leaching (chemistry)
Leaching is the process of a solute becoming detached or extracted from its carrier substance by way of a solvent. Leaching is a naturally occurring process which scientists have adapted for a variety of applications with a variety of methods. Specific extraction methods depend on the soluble characteristics relative to the sorbent material such as concentration, distribution, nature, and size. Leaching can occur naturally seen from plant substances (inorganic and organic), solute leaching in soil, and in the decomposition of organic materials. Leaching can also be applied affectedly to enhance water quality and contaminant removal, as well as for disposal of hazardous waste products such as fly ash, or rare earth elements (REEs). Understanding leaching characteristics is important in preventing or encouraging the leaching process and preparing for it in the case where it is inevitable. In an ideal leaching equilibrium stage, all the solute is dissolved by the solvent, leaving the carrier of the solute unchanged. The process of leaching however is not always ideal, and can be quite complex to understand and replicate, and often different methodologies will produce different results. Leaching processes There are many types of leaching scenarios; therefore, the extent of this topic is vast. In general, however, the three substances can be described as: a carrier, substance A; a solute, substance B; and a solvent, substance C. Substance A and B are somewhat homogenous in a system prior to the introduction of substance C. At the beginning of the leaching process, substance C will work at dissolving the surficial substance B at a fairly high rate. The rate of dissolution will decrease substantially once it needs to penetrate through the pores of substance A in order to continue targeting substance B. This penetration can often lead to dissolution of substance A, or the product of more than one solute, both unsatisfactory if specific leaching is desired. The physiochemical and biological properties of the carrier and solute should be considered when observing the leaching process, and certain properties may be more important depending on the material, the solvent, and their availability. These specific properties can include, but are not limited to: Particle size Solvent Temperature Agitation Surface area Homogeneity of the carrier and solute Microorganism activity Mineralogy Intermediate products Crystal structure The general process is typically broken up and summarized into three parts: Dissolution of surficial solute by solvent Diffusion of inner-solute through the pores of the carrier to reach the solvent Transfer of dissolved solute out of the system Leaching processes for biological substances Biological substances can experience leaching themselves, as well as be used for leaching as part of the solvent substance to recover heavy metals. Many plants experience leaching of phenolics, carbohydrates, and amino acids, and can experience as much as 30% mass loss from leaching, just from sources of water such as rain, dew, mist, and fog. These sources of water would be considered the solvent in the leaching process and can also lead to the leaching of organic nutrients from plants such as free sugars, pectic substances, and sugar alcohols. This can in turn lead to more diversity in plant species that may experience a more direct access to water. This type of leaching can often lead to the removal of an undesirable component from the solid by water, this process is called washing. 
A major concern with leaching from plants is pesticides being leached and carried away in stormwater runoff; controlling this matters not only for plant health but also because pesticides can be toxic to human and animal health. Bioleaching is a term that describes the removal of metal cations from insoluble ores by biological oxidation and complexation processes. This process is used for the most part to extract copper, cobalt, nickel, zinc, and uranium from insoluble sulfides or oxides. Bioleaching processes can also be used in the re-use of fly ash by recovering aluminum using sulfuric acid.
Leaching processes for fly ash
Coal fly ash is a product that experiences heavy amounts of leaching during disposal. Though the re-use of fly ash in other materials such as concrete and bricks is encouraged, much of it in the United States is still disposed of in holding ponds, lagoons, landfills, and slag heaps. These disposal sites all contain water, where washing effects can cause leaching of many different major elements, depending on the type of fly ash and the location where it originated. The leaching of fly ash is only concerning if the fly ash has not been disposed of properly, such as in the case of the Kingston Fossil Plant in Roane County, Tennessee. The structural failure at the Tennessee Valley Authority Kingston Fossil Plant led to massive destruction throughout the area and serious levels of contamination downstream in both the Emory River and the Clinch River.
Leaching processes in soil
Leaching in soil is highly dependent on the characteristics of the soil, which makes modeling efforts difficult. Most leaching comes from infiltration of water, a washing effect much like that described for the leaching of biological substances. Leaching is typically described by solute transport models such as Darcy's law, mass-flow expressions, and diffusion-dispersion relationships (a minimal worked example based on Darcy's law is sketched below). Leaching is controlled largely by the hydraulic conductivity of the soil, which depends on particle size and on the relative density to which the soil has been consolidated under stress. Diffusion is controlled by other factors such as pore size and the soil skeleton, the tortuosity of the flow path, and the distribution of the solvent (water) and solutes.
Leaching for mineral extraction
Leaching can sometimes be used to extract valuable materials from wastewater products or raw materials. In the field of mineralogy, acid leaching is commonly used to extract metals such as vanadium, cobalt, nickel, manganese, and iron from raw or reused materials. In recent years, more attention has been given to metal leaching as a way to recover precious metals from waste materials, for example the extraction of valuable metals from wastewater.
Leaching mechanisms
Because of the assortment of leaching processes, there are many variations in the data collected through laboratory methods and modeling, making the data hard to interpret. Not only is the specific leaching process important, but also the focus of the experimentation itself. For instance, the focus could be directed toward the mechanisms causing leaching, the mineralogy as a group or individually, or the solvent that causes leaching. Most tests are done by evaluating mass loss due to a reagent, heat, or simply washing with water. 
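As a rough illustration of the advective part of the solute-transport picture described above, the sketch below applies Darcy's law to estimate how quickly infiltrating water can carry a solute to a given depth. All numerical values are illustrative assumptions rather than measurements, and dispersion, sorption, and degradation, which matter in real soils, are ignored.

```python
# Minimal sketch of advective solute leaching in soil using Darcy's law.
# All numbers below are illustrative assumptions, not measurements.

K = 1.0e-5       # saturated hydraulic conductivity (m/s), assumed sandy soil
i = 0.5          # hydraulic gradient dh/dz (dimensionless), assumed
theta = 0.35     # volumetric water content (~porosity for saturated flow), assumed
depth = 1.0      # depth of interest, e.g. to groundwater (m), assumed
c = 20.0         # solute concentration in infiltrating water (g/m^3), assumed

q = K * i                      # Darcy flux (m/s): water volume per unit area per time
v = q / theta                  # average pore-water (seepage) velocity (m/s)
t_breakthrough = depth / v     # time for the solute front to reach `depth` (s)
mass_flux = q * c              # advective solute flux (g per m^2 per s)

print(f"Darcy flux        q = {q:.2e} m/s")
print(f"Pore velocity     v = {v:.2e} m/s")
print(f"Breakthrough time   = {t_breakthrough / 86400:.1f} days")
print(f"Solute mass flux    = {mass_flux * 86400:.2f} g/m^2/day")
```

In practice the hydraulic conductivity and water content would come from site-specific soil characterization, which is exactly the dependence on soil properties that, as noted above, makes such modeling difficult.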
A summary of various leaching processes and their respective laboratory tests can be viewed in the following table:
Environmentally friendly leaching
Some recent work has been done to see whether organic acids can be used to leach lithium and cobalt from spent batteries, with some success. Experiments performed with varying temperatures and concentrations of malic acid show that the optimal conditions are 2.0 mol/L of organic acid at a temperature of 90 °C. The reaction had an overall efficiency exceeding 90% with no harmful byproducts.
4 LiCoO2(solid) + 12 C4H6O5(liquid) → 4 LiC4H5O5(liquid) + 4 Co(C4H5O5)2(liquid) + 6 H2O(liquid) + O2(gas)
The same analysis with citric acid showed similar results, with an optimal temperature of 90 °C and a 1.5 molar solution of citric acid.
See also
Extraction
Leachate
Parboiling
Surfactant leaching
Sorption
Weathering
References
Protein dynamics
In molecular biology, proteins are generally thought to adopt unique structures determined by their amino acid sequences. However, proteins are not strictly static objects, but rather populate ensembles of (sometimes similar) conformations. Transitions between these states occur on a variety of length scales (tenths of angstroms to nm) and time scales (ns to s), and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. The study of protein dynamics is most directly concerned with the transitions between these states, but can also involve the nature and equilibrium populations of the states themselves. These two perspectives (kinetics and thermodynamics, respectively) can be conceptually synthesized in an "energy landscape" paradigm: highly populated states and the kinetics of transitions between them can be described by the depths of energy wells and the heights of energy barriers, respectively. Local flexibility: atoms and residues Portions of protein structures often deviate from the equilibrium state. Some such excursions are harmonic, such as stochastic fluctuations of chemical bonds and bond angles. Others are anharmonic, such as sidechains that jump between separate discrete energy minima, or rotamers. Evidence for local flexibility is often obtained from NMR spectroscopy. Flexible and potentially disordered regions of a protein can be detected using the random coil index. Flexibility in folded proteins can be identified by analyzing the spin relaxation of individual atoms in the protein. Flexibility can also be observed in very high-resolution electron density maps produced by X-ray crystallography, particularly when diffraction data is collected at room temperature instead of the traditional cryogenic temperature (typically near 100 K). Information on the frequency distribution and dynamics of local protein flexibility can be obtained using Raman and optical Kerr-effect spectroscopy, as well as anisotropic microspectroscopy in the terahertz frequency domain. Regional flexibility: intra-domain multi-residue coupling Many residues are in close spatial proximity in protein structures. This is true for most residues that are contiguous in the primary sequence, but also for many that are distal in sequence yet are brought into contact in the final folded structure. Because of this proximity, these residues' energy landscapes become coupled through various biophysical phenomena such as hydrogen bonds, ionic bonds, and van der Waals interactions. Transitions between states for such sets of residues therefore become correlated. This is perhaps most obvious for surface-exposed loops, which often shift collectively to adopt different conformations in different crystal structures. However, coupled conformational heterogeneity is also sometimes evident in secondary structure. For example, consecutive residues and residues offset by 4 in the primary sequence often interact in α helices. Also, residues offset by 2 in the primary sequence point their sidechains toward the same face of β sheets and are close enough to interact sterically, as are residues on adjacent strands of the same β sheet. Some of these conformational changes are induced by post-translational modifications in protein structure, such as phosphorylation and methylation. When these coupled residues form pathways linking functionally important parts of a protein, they may participate in allosteric signaling.
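The energy landscape description can be made quantitative with two standard relations; the following is a brief sketch of textbook statistical mechanics rather than a result stated in this article. If two conformational states A and B differ in free energy by ΔG_AB = G_B − G_A and are separated by a barrier of height ΔG‡, their equilibrium populations and interconversion rate are approximately:

```latex
% Relative equilibrium populations of conformations A and B (Boltzmann statistics)
\frac{p_B}{p_A} = \exp\!\left(-\frac{\Delta G_{AB}}{k_B T}\right)

% Rate of crossing a barrier of height \Delta G^{\ddagger} (Eyring-type expression)
k_{A \to B} \approx \frac{k_B T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_B T}\right)
```

Here k_B is the Boltzmann constant, T the temperature, and h the Planck constant. Deep wells therefore correspond to highly populated conformations (the thermodynamic view), while low barriers correspond to fast interconversion (the kinetic view), which is the synthesis described above.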
A classic example of such allosteric coupling is hemoglobin: when a molecule of oxygen binds to one subunit of the hemoglobin tetramer, that information is allosterically propagated to the other three subunits, thereby enhancing their affinity for oxygen. In this case, the coupled flexibility in hemoglobin allows for cooperative oxygen binding, which is physiologically useful because it allows rapid oxygen loading in lung tissue and rapid oxygen unloading in oxygen-deprived tissues (e.g. muscle). Global flexibility: multiple domains The presence of multiple domains in proteins gives rise to a great deal of flexibility and mobility, leading to protein domain dynamics. Domain motions can be inferred by comparing different structures of a protein (as in the Database of Molecular Motions), or they can be directly observed using spectra measured by neutron spin echo spectroscopy. They can also be suggested by sampling in extensive molecular dynamics trajectories and principal component analysis. Domain motions are important for: ABC transporters catalysis cellular locomotion and motor proteins formation of protein complexes ion channels mechanoreceptors and mechanotransduction regulatory activity transport of metabolites across cell membranes One of the largest observed domain motions is the 'swivelling' mechanism in pyruvate phosphate dikinase. The phosphohistidine domain swivels between two states in order to bring a phosphate group from the active site of the nucleotide binding domain to that of the phosphoenolpyruvate/pyruvate domain. The phosphate group is moved over a distance of 45 Å, involving a domain motion of about 100 degrees around a single residue. In enzymes, the closure of one domain onto another captures a substrate by an induced fit, allowing the reaction to take place in a controlled way. A detailed analysis by Gerstein led to the classification of two basic types of domain motion: hinge and shear. Only a relatively small portion of the chain, namely the inter-domain linker and side chains, undergoes significant conformational changes upon domain rearrangement. Hinge motions A study by Hayward found that the termini of α-helices and β-sheets form hinges in a large number of cases. Many hinges were found to involve two secondary structure elements acting like hinges of a door, allowing an opening and closing motion to occur. This can arise when two neighbouring strands within a β-sheet situated in one domain diverge as they join the other domain. The two resulting termini then form the bending regions between the two domains. α-helices that preserve their hydrogen bonding network when bent are found to behave as mechanical hinges, storing "elastic energy" that drives the closure of domains for rapid capture of a substrate. Khade et al. worked on prediction of hinges in any conformation and further built an elastic network model, called hdANM, that can model those motions. Helical to extended conformation The interconversion of helical and extended conformations at the site of a domain boundary is not uncommon. In calmodulin, torsion angles change for five residues in the middle of a domain-linking α-helix. The helix is split into two, almost perpendicular, smaller helices separated by four residues of an extended strand. Shear motions Shear motions involve a small sliding movement of domain interfaces, controlled by the amino acid side chains within the interface. Proteins displaying shear motions often have a layered architecture: stacking of secondary structures.
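Coarse-grained elastic network models of the kind mentioned above (hdANM is one published example) can be sketched in a few lines. The snippet below builds a generic anisotropic network model from pseudo-atom coordinates: residues within a cutoff are connected by identical springs, the Hessian of the resulting harmonic potential is diagonalized, and the lowest non-trivial eigenvectors approximate collective hinge-like motions. The cutoff, spring constant, and toy coordinates are illustrative assumptions; this is not the hdANM method itself.

```python
# Minimal anisotropic network model (ANM) sketch for collective domain motions.
# Toy coordinates, cutoff and spring constant are hypothetical illustration values.
import numpy as np

def anm_hessian(coords: np.ndarray, cutoff: float = 12.0, gamma: float = 1.0) -> np.ndarray:
    """Return the 3N x 3N ANM Hessian for N pseudo-atoms (e.g. C-alpha positions)."""
    n = len(coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            dist2 = float(d @ d)
            if dist2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / dist2   # 3x3 off-diagonal super-element
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block    # keep row sums of blocks zero
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    return hessian

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two compact "domains" of 20 pseudo-atoms joined by a short linker (toy geometry).
    domain_a = rng.normal(loc=0.0, scale=4.0, size=(20, 3))
    domain_b = rng.normal(loc=0.0, scale=4.0, size=(20, 3)) + np.array([25.0, 0.0, 0.0])
    linker = np.linspace(domain_a.mean(axis=0), domain_b.mean(axis=0), 5)
    coords = np.vstack([domain_a, linker, domain_b])

    h = anm_hessian(coords)
    eigvals, eigvecs = np.linalg.eigh(h)
    # The first six ~zero eigenvalues are rigid-body translations and rotations;
    # the next few modes describe the softest collective (hinge-like) motions.
    print("lowest ten eigenvalues:", np.round(eigvals[:10], 4))
```

The softest non-trivial modes of such a model often resemble the hinge-bending motions described above; whether they also capture the subtler sliding of shear motions depends on the system.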
The interdomain linker has merely the role of keeping the domains in close proximity. Domain motion and functional dynamics in enzymes The analysis of the internal dynamics of structurally different, but functionally similar enzymes has highlighted a common relationship between the positioning of the active site and the two principal protein sub-domains. In fact, for several members of the hydrolase superfamily, the catalytic site is located close to the interface separating the two principal quasi-rigid domains. Such positioning appears instrumental for maintaining the precise geometry of the active site, while allowing for an appreciable functionally oriented modulation of the flanking regions resulting from the relative motion of the two sub-domains. Implications for macromolecular evolution Evidence suggests that protein dynamics are important for function, e.g. enzyme catalysis in dihydrofolate reductase (DHFR), yet they are also posited to facilitate the acquisition of new functions by molecular evolution. This argument suggests that proteins have evolved to have stable, mostly unique folded structures, but the unavoidable residual flexibility leads to some degree of functional promiscuity, which can be amplified/harnessed/diverted by subsequent mutations. Research on promiscuous proteins within the BCL-2 family revealed that nanosecond-scale protein dynamics can play a crucial role in protein binding behaviour and thus promiscuity. However, there is growing awareness that intrinsically unstructured proteins are quite prevalent in eukaryotic genomes, casting further doubt on the simplest interpretation of Anfinsen's dogma: "sequence determines structure (singular)". In effect, the new paradigm is characterized by the addition of two caveats: "sequence and cellular environment determine structural ensemble". References Protein folding Protein biosynthesis
Organic farming
Organic farming, also known as organic agriculture, ecological farming, or biological farming, is an agricultural system that uses fertilizers of organic origin such as compost manure, green manure, and bone meal and places emphasis on techniques such as crop rotation and companion planting. It originated early in the 20th century in reaction to rapidly changing farming practices. Indeed, so-called "organic pioneers" wanted to keep farming with nature, without being dependent on external inputs. Certified organic agriculture accounts for globally, with over half of that total in Australia. Biological pest control, mixed cropping, and the fostering of insect predators are encouraged. Organic standards are designed to allow the use of naturally-occurring substances while prohibiting or severely limiting synthetic substances. For instance, naturally-occurring pesticides such as garlic extract, bicarbonate of soda, or pyrethrin, which is found naturally in the Chrysanthemum flower, are permitted, while synthetic fertilizers and pesticides such as glyphosate are prohibited. Synthetic substances that are allowed only in exceptional circumstances include, for example, copper sulfate, elemental sulfur, and veterinary drugs. Genetically modified organisms, nanomaterials, human sewage sludge, plant growth regulators, hormones, and antibiotic use in livestock husbandry are prohibited. Organic farming positively impacts sustainability, self-sufficiency, autonomy and independence, health, animal welfare, food security, and food safety. Organic farming can therefore be seen as part of the solution to the impacts of climate change, as also recognized by the Food and Agriculture Organization (FAO). Organic agricultural methods are internationally regulated and legally enforced by transnational organizations (such as the European Union) and many nations, based in large part on the standards set by the International Federation of Organic Agriculture Movements (IFOAM), an international umbrella organization for organic farming organizations established in 1972, with regional branches such as IFOAM Organics Europe and IFOAM Asia. Organic agriculture can be defined as "an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity while, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones" (H. Martin, Ontario Ministry of Agriculture, Food and Rural Affairs, Introduction to Organic Farming). Organic agriculture is based on the principles of health, care for all living beings and the environment, ecology, and fairness. Since 1990, the market for organic food and other products has grown rapidly, reaching $150 billion worldwide in 2022, of which more than $64 billion was in North America and EUR 53 billion in Europe. This demand has driven a similar increase in organically managed farmland, which grew by 26.6% from 2021 to 2022. As of 2022, organic farming is practiced in 188 countries and approximately worldwide were farmed organically by 4.5 million farmers, representing approximately 2% of total world farmland. History Agriculture was practiced for thousands of years without the use of artificial chemicals. Artificial fertilizers were first developed during the mid-19th century. These early fertilizers were cheap, powerful, and easy to transport in bulk.
Similar advances occurred in chemical pesticides in the 1940s, leading to the decade being referred to as the "pesticide era". These new agricultural techniques, while beneficial in the short-term, had serious longer-term side-effects such as soil compaction, erosion, and declines in overall soil fertility, along with health concerns about toxic chemicals entering the food supply. In the late 1800s and early 1900s, soil biology scientists began to seek ways to remedy these side effects while still maintaining higher production. In 1921 the founder and pioneer of the organic movement Albert Howard and his wife Gabrielle Howard, accomplished botanists, founded an Institute of Plant Industry to improve traditional farming methods in India. Among other things, they brought improved implements and improved animal husbandry methods from their scientific training; then by incorporating aspects of Indian traditional methods, developed protocols for the rotation of crops, erosion prevention techniques, and the systematic use of composts and manures. Stimulated by these experiences of traditional farming, when Albert Howard returned to Britain in the early 1930s he began to promulgate a system of organic agriculture. In 1924 Rudolf Steiner gave a series of eight lectures on agriculture with a focus on influences of the moon, planets, non-physical beings and elemental forces.Paull, John (2013) "Breslau (Wrocław): In the footsteps of Rudolf Steiner", Journal of Bio- Dynamics Tasmania, 110:10-15. They were held in response to a request by adherent farmers who noticed degraded soil conditions and a deterioration in the health and quality of crops and livestock resulting from the use of chemical fertilizers. The lectures were published in November 1924; the first English translation appeared in 1928 as The Agriculture Course. In July 1939, Ehrenfried Pfeiffer, the author of the standard work on biodynamic agriculture (Bio-Dynamic Farming and Gardening), came to the UK at the invitation of Walter James, 4th Baron Northbourne as a presenter at the Betteshanger Summer School and Conference on Biodynamic Farming at Northbourne's farm in Kent. One of the chief purposes of the conference was to bring together the proponents of various approaches to organic agriculture in order that they might cooperate within a larger movement. Howard attended the conference, where he met Pfeiffer. In the following year, Northbourne published his manifesto of organic farming, Look to the Land, in which he coined the term "organic farming". The Betteshanger conference has been described as the 'missing link' between biodynamic agriculture and other forms of organic farming. In 1940 Howard published his An Agricultural Testament. In this book he adopted Northbourne's terminology of "organic farming". Howard's work spread widely, and he became known as the "father of organic farming" for his work in applying scientific knowledge and principles to various traditional and natural methods. In the United States J. I. Rodale, who was keenly interested both in Howard's ideas and in biodynamics, founded in the 1940s both a working organic farm for trials and experimentation, The Rodale Institute, and Rodale, Inc. in Emmaus, Pennsylvania to teach and advocate organic methods to the wider public. These became important influences on the spread of organic agriculture. Further work was done by Lady Eve Balfour (the Haughley Experiment) in the United Kingdom, and many others across the world. 
The term "eco-agriculture" was coined in 1970 by Charles Walters, founder of Acres Magazine, to describe agriculture which does not use "man-made molecules of toxic rescue chemistry", effectively another name for organic agriculture. Increasing environmental awareness in the general population in modern times has transformed the originally supply-driven organic movement to a demand-driven one. Premium prices and some government subsidies attracted farmers. In the developing world, many producers farm according to traditional methods that are comparable to organic farming, but not certified, and that may not include the latest scientific advancements in organic agriculture. In other cases, farmers in the developing world have converted to modern organic methods for economic reasons. Terminology The use of "organic" popularized by Howard and Rodale refers more narrowly to the use of organic matter derived from plant compost and animal manures to improve the humus content of soils, grounded in the work of early soil scientists who developed what was then called "humus farming". Since the early 1940s the two camps have tended to merge. Biodynamic agriculturists, on the other hand, used the term "organic" to indicate that a farm should be viewed as a living organism, in the sense of the following quotation: They based their work on Steiner's spiritually-oriented alternative agriculture which includes various esoteric concepts. Methods Organic farming methods combine scientific knowledge of ecology and some modern technology with traditional farming practices based on naturally occurring biological processes. Organic farming methods are studied in the field of agroecology. While conventional agriculture uses synthetic pesticides and water-soluble synthetically purified fertilizers, organic farmers are restricted by regulations to using natural pesticides and fertilizers. An example of a natural pesticide is pyrethrin, which is found naturally in the Chrysanthemum flower. The principal methods of organic farming include crop rotation, green manures and compost, biological pest control, and mechanical cultivation. These measures use the natural environment to enhance agricultural productivity: legumes are planted to fix nitrogen into the soil, natural insect predators are encouraged, crops are rotated to confuse pests and renew soil, and natural materials such as potassium bicarbonate and mulches are used to control disease and weeds. Genetically modified seeds and animals are excluded. While organic is fundamentally different from conventional because of the use of carbon-based fertilizers compared with highly soluble synthetic based fertilizers and biological pest control instead of synthetic pesticides, organic farming and large-scale conventional farming are not entirely mutually exclusive. Many of the methods developed for organic agriculture have been borrowed by more conventional agriculture. For example, Integrated Pest Management is a multifaceted strategy that uses various organic methods of pest control whenever possible, but in conventional farming could include synthetic pesticides only as a last resort. Examples of beneficial insects that are used in organic farming include ladybugs and lacewings, both of which feed on aphids. The use of IPM lowers the possibility of pest developing resistance to pesticides that are applied to crops. Crop diversity Organic farming encourages crop diversity by promoting polyculture (multiple crops in the same space). 
Planting a variety of vegetable crops supports a wider range of beneficial insects, soil microorganisms, and other factors that add up to overall farm health. Crop diversity helps the environment to thrive and protects species from going extinct.Crop diversity: A Distinctive Characteristic of an Organic Farming Method - Organic Farming; 15 April 2013 The science of Agroecology has revealed the benefits of polyculture, which is often employed in organic farming. Agroecology is a scientific discipline that uses ecological theory to study, design, manage, and evaluate agricultural systems that are productive and resource-conserving, and that are also culturally sensitive, socially just, and economically viable. Incorporating crop diversity into organic farming practices can have several benefits. For instance, it can help to increase soil fertility by promoting the growth of beneficial soil microorganisms. It can also help to reduce pest and disease pressure by creating a more diverse and resilient agroecosystem. Furthermore, crop diversity can help to improve the nutritional quality of food by providing a wider range of essential nutrients. Soil management Organic farming relies more heavily on the natural breakdown of organic matter than the average conventional farm, using techniques like green manure and composting, to replace nutrients taken from the soil by previous crops. This biological process, driven by microorganisms such as mycorrhiza and earthworms, releases nutrients available to plants throughout the growing season. Farmers use a variety of methods to improve soil fertility, including crop rotation, cover cropping, reduced tillage, and application of compost. By reducing fuel-intensive tillage, less soil organic matter is lost to the atmosphere. This has an added benefit of carbon sequestration, which reduces greenhouse gases and helps reverse climate change. Reducing tillage may also improve soil structure and reduce the potential for soil erosion. Plants need a large number of nutrients in various quantities to flourish. Supplying enough nitrogen and particularly synchronization, so that plants get enough nitrogen at the time when they need it most, is a challenge for organic farmers. Crop rotation and green manure ("cover crops") help to provide nitrogen through legumes (more precisely, the family Fabaceae), which fix nitrogen from the atmosphere through symbiosis with rhizobial bacteria. Intercropping, which is sometimes used for insect and disease control, can also increase soil nutrients, but the competition between the legume and the crop can be problematic and wider spacing between crop rows is required. Crop residues can be ploughed back into the soil, and different plants leave different amounts of nitrogen, potentially aiding synchronization. Organic farmers also use animal manure, certain processed fertilizers such as seed meal and various mineral powders such as rock phosphate and green sand, a naturally occurring form of potash that provides potassium. In some cases pH may need to be amended. Natural pH amendments include lime and sulfur, but in the U.S. some compounds such as iron sulfate, aluminum sulfate, magnesium sulfate, and soluble boron products are allowed in organic farming. Mixed farms with both livestock and crops can operate as ley farms, whereby the land gathers fertility through growing nitrogen-fixing forage grasses such as white clover or alfalfa and grows cash crops or cereals when fertility is established. 
Farms without livestock ("stockless") may find it more difficult to maintain soil fertility, and may rely more on external inputs such as imported manure as well as grain legumes and green manures, although grain legumes may fix limited nitrogen because they are harvested. Horticultural farms that grow fruits and vegetables in protected conditions often rely even more on external inputs. Manure is very bulky and is often not cost-effective to transport more than a short distance from the source. Manure for organic farms' may become scarce if a sizable number of farms become organically managed. Weed management Organic weed management promotes weed suppression, rather than weed elimination, by enhancing crop competition and phytotoxic effects on weeds. Organic farmers integrate cultural, biological, mechanical, physical and chemical tactics to manage weeds without synthetic herbicides. Organic standards require rotation of annual crops, meaning that a single crop cannot be grown in the same location without a different, intervening crop. Organic crop rotations frequently include weed-suppressive cover crops and crops with dissimilar life cycles to discourage weeds associated with a particular crop. Research is ongoing to develop organic methods to promote the growth of natural microorganisms that suppress the growth or germination of common weeds. Other cultural practices used to enhance crop competitiveness and reduce weed pressure include selection of competitive crop varieties, high-density planting, tight row spacing, and late planting into warm soil to encourage rapid crop germination. Mechanical and physical weed control practices used on organic farms can be broadly grouped as: Tillage - Turning the soil between crops to incorporate crop residues and soil amendments; remove existing weed growth and prepare a seedbed for planting; turning soil after seeding to kill weeds, including cultivation of row crops. Mowing and cutting - Removing top growth of weeds. Flame weeding and thermal weeding - Using heat to kill weeds. Mulching - Blocking weed emergence with organic materials, plastic films, or landscape fabric. Some naturally sourced chemicals are allowed for herbicidal use. These include certain formulations of acetic acid (concentrated vinegar), corn gluten meal, and essential oils. A few selective bioherbicides based on fungal pathogens have also been developed. At this time, however, organic herbicides and bioherbicides play a minor role in the organic weed control toolbox. Weeds can be controlled by grazing. For example, geese have been used successfully to weed a range of organic crops including cotton, strawberries, tobacco, and corn, reviving the practice of keeping cotton patch geese, common in the southern U.S. before the 1950s. Similarly, some rice farmers introduce ducks and fish to wet paddy fields to eat both weeds and insects. Controlling other organisms Organisms aside from weeds that cause problems on farms include arthropods (e.g., insects, mites), nematodes, fungi and bacteria. Practices include, but are not limited to: Examples of predatory beneficial insects include minute pirate bugs, big-eyed bugs, and to a lesser extent ladybugs (which tend to fly away), all of which eat a wide range of pests. Lacewings are also effective, but tend to fly away. Praying mantis tend to move more slowly and eat less heavily. 
Parasitoid wasps tend to be effective for their selected prey, but like all small insects can be less effective outdoors because the wind controls their movement. Predatory mites are effective for controlling other mites. Naturally derived insecticides allowed for use on organic farms include Bacillus thuringiensis (a bacterial toxin), pyrethrum (a chrysanthemum extract), spinosad (a bacterial metabolite), neem (a tree extract) and rotenone (a legume root extract). Fewer than 10% of organic farmers use these pesticides regularly; a 2003 survey found that only 5.3% of vegetable growers in California use rotenone while 1.7% use pyrethrum. These pesticides are not always safer or more environmentally friendly than synthetic pesticides and can cause harm. The main criterion for organic pesticides is that they are naturally derived, and some naturally derived substances have been controversial. Controversial natural pesticides include rotenone, copper, nicotine sulfate, and pyrethrums (Pottorff LP, Some Pesticides Permitted in Organic Gardening, Colorado State University Cooperative Extension). Rotenone and pyrethrum are particularly controversial because they work by attacking the nervous system, like most conventional insecticides. Rotenone is extremely toxic to fish and can induce symptoms resembling Parkinson's disease in mammals. Although pyrethrum (natural pyrethrins) is more effective against insects when used with piperonyl butoxide (which retards degradation of the pyrethrins), organic standards generally do not permit use of the latter substance (OGA, 2004, OGA standard, Organic Growers of Australia Inc., 32 pp.). Naturally derived fungicides allowed for use on organic farms include the bacteria Bacillus subtilis and Bacillus pumilus, and the fungus Trichoderma harzianum. These are mainly effective for diseases affecting roots. Compost tea contains a mix of beneficial microbes, which may attack or out-compete certain plant pathogens, but variability among formulations and preparation methods may contribute to inconsistent results or even dangerous growth of toxic microbes in compost teas. Some naturally derived pesticides are not allowed for use on organic farms. These include nicotine sulfate, arsenic, and strychnine. Synthetic pesticides allowed for use on organic farms include insecticidal soaps and horticultural oils for insect management, and Bordeaux mixture, copper hydroxide and sodium bicarbonate for managing fungi. Copper sulfate and Bordeaux mixture (copper sulfate plus lime), approved for organic use in various jurisdictions, can be more environmentally problematic than some synthetic fungicides disallowed in organic farming (Leake, A. R., 1999, House of Lords Select Committee on the European Communities, Session 1998-99, 16th Report, Organic Farming and the European Union, p. 81). Similar concerns apply to copper hydroxide. Repeated application of copper sulfate or copper hydroxide as a fungicide may eventually result in copper accumulation to toxic levels in soil, and admonitions to avoid excessive accumulations of copper in soil appear in various organic standards and elsewhere. Environmental concerns for several kinds of biota arise at average rates of use of such substances for some crops. In the European Union, where replacement of copper-based fungicides in organic agriculture is a policy priority, research is seeking alternatives for organic production.
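The copper accumulation concern can be illustrated with a simple mass balance. The sketch below estimates how many years of repeated copper fungicide application it would take to raise the copper concentration of a ploughed topsoil layer to a given limit; the application rate, soil depth, bulk density, background concentration, and limit are all hypothetical illustration figures, not regulatory values from the text.

```python
# Back-of-the-envelope copper accumulation in topsoil from repeated fungicide use.
# All numbers are hypothetical illustration values, not regulatory limits.

def years_to_reach_limit(annual_cu_kg_per_ha: float,
                         soil_depth_m: float,
                         bulk_density_kg_per_m3: float,
                         background_mg_per_kg: float,
                         limit_mg_per_kg: float) -> float:
    """Years of application until the topsoil copper concentration reaches the limit."""
    soil_mass_kg_per_ha = 10_000 * soil_depth_m * bulk_density_kg_per_m3  # 1 ha = 10,000 m^2
    annual_increase_mg_per_kg = annual_cu_kg_per_ha * 1e6 / soil_mass_kg_per_ha
    return (limit_mg_per_kg - background_mg_per_kg) / annual_increase_mg_per_kg

if __name__ == "__main__":
    years = years_to_reach_limit(
        annual_cu_kg_per_ha=4.0,        # copper applied per year (hypothetical)
        soil_depth_m=0.2,               # ploughed layer depth (assumed)
        bulk_density_kg_per_m3=1300.0,  # assumed topsoil bulk density
        background_mg_per_kg=20.0,      # assumed starting copper concentration
        limit_mg_per_kg=100.0,          # assumed soil quality limit
    )
    print(f"Roughly {years:.0f} years of application to reach the assumed limit")
```

The calculation ignores any removal by crop offtake or leaching, so it is only an upper bound on the accumulation rate; the point is simply that small annual additions to a finite soil mass add up over decades.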
Livestock Raising livestock and poultry, for meat, dairy and eggs, is another traditional farming activity that complements growing. Organic farms attempt to provide animals with natural living conditions and feed. Organic certification verifies that livestock are raised according to the USDA organic regulations throughout their lives. These regulations include the requirement that all animal feed must be certified organic. Organic livestock may be, and must be, treated with medicine when they are sick, but drugs cannot be used to promote growth, their feed must be organic, and they must be pastured. Also, horses and cattle were once a basic farm feature that provided labour, for hauling and plowing, fertility, through recycling of manure, and fuel, in the form of food for farmers and other animals. While today, small growing operations often do not include livestock, domesticated animals are a desirable part of the organic farming equation, especially for true sustainability, the ability of a farm to function as a self-renewing unit. Genetic modification A key characteristic of organic farming is the exclusion of genetically engineered plants and animals. On 19 October 1998, participants at IFOAM's 12th Scientific Conference issued the Mar del Plata Declaration, where more than 600 delegates from over 60 countries voted unanimously to exclude the use of genetically modified organisms in organic food production and agriculture. Although opposition to the use of any transgenic technologies in organic farming is strong, agricultural researchers Luis Herrera-Estrella and Ariel Alvarez-Morales continue to advocate integration of transgenic technologies into organic farming as the optimal means to sustainable agriculture, particularly in the developing world. Organic farmer Raoul Adamchak and geneticist Pamela Ronald write that many agricultural applications of biotechnology are consistent with organic principles and have significantly advanced sustainable agriculture. Although GMOs are excluded from organic farming, there is concern that the pollen from genetically modified crops is increasingly penetrating organic and heirloom seed stocks, making it difficult, if not impossible, to keep these genomes from entering the organic food supply. Differing regulations among countries limits the availability of GMOs to certain countries, as described in the article on regulation of the release of genetic modified organisms. Tools Organic farmers use a number of traditional farm tools to do farming, and may make use of agricultural machinery in similar ways to conventional farming. In the developing world, on small organic farms, tools are normally constrained to hand tools and diesel powered water pumps. Standards Standards regulate production methods and in some cases final output for organic agriculture. Standards may be voluntary or legislated. As early as the 1970s private associations certified organic producers. In the 1980s, governments began to produce organic production guidelines. In the 1990s, a trend toward legislated standards began, most notably with the 1991 EU-Eco-regulation developed for European Union, which set standards for 12 countries, and a 1993 UK program. The EU's program was followed by a Japanese program in 2001, and in 2002 the U.S. created the National Organic Program (NOP). As of 2007 over 60 countries regulate organic farming (IFOAM 2007:11). In 2005 IFOAM created the Principles of Organic Agriculture, an international guideline for certification criteria. 
Typically the agencies accredit certification groups rather than individual farms. Production materials used for the creation of USDA Organic certified foods require the approval of a NOP accredited certifier. EU-organic production-regulation on "organic" food labels define "organic" primarily in terms of whether "natural" or "artificial" substances were allowed as inputs in the food production process. Composting Using manure as a fertilizer risks contaminating food with animal gut bacteria, including pathogenic strains of E. coli that have caused fatal poisoning from eating organic food. To combat this risk, USDA organic standards require that manure must be sterilized through high temperature thermophilic composting. If raw animal manure is used, 120 days must pass before the crop is harvested if the final product comes into direct contact with the soil. For products that do not directly contact soil, 90 days must pass prior to harvest. In the US, the Organic Food Production Act of 1990 (OFPA) as amended, specifies that a farm can not be certified as organic if the compost being used contains any synthetic ingredients. The OFPA singles out commercially blended fertilizers [composts] disallowing the use of any fertilizer [compost] that contains prohibited materials. Economics The economics of organic farming, a subfield of agricultural economics, encompasses the entire process and effects of organic farming in terms of human society, including social costs, opportunity costs, unintended consequences, information asymmetries, and economies of scale. Labour input, carbon and methane emissions, energy use, eutrophication, acidification, soil quality, effect on biodiversity, and overall land use vary considerably between individual farms and between crops, making general comparisons between the economics of organic and conventional agriculture difficult.Clark, M., & Tilman, D. (2017). Comparative analysis of environmental impacts of agricultural production systems, agricultural input efficiency, and food choice. Environmental Research Letters, 12(6). In the European Union "organic farmers receive more subsidies under agri-environment and animal welfare subsidies than conventional growers". Geographic producer distribution The markets for organic products are strongest in North America and Europe, which as of 2001 are estimated to have $6 and $8 billion respectively of the $20 billion global market. As of 2007 Australasia has 39% of the total organic farmland, including Australia's but 97% of this land is sprawling rangeland (2007:35). US sales are 20x as much. Europe farms 23% of global organic farmland, followed by Latin America and the Caribbean with 20%. Asia has 9.5% while North America has 7.2%. Africa has 3%. Besides Australia, the countries with the most organic farmland are Argentina, China, and the United States. Much of Argentina's organic farmland is pasture, like that of Australia (2007:42). Spain, Germany, Brazil (the world's largest agricultural exporter), Uruguay, and England follow the United States in the amount of organic land (2007:26). In the European Union (EU25) 3.9% of the total utilized agricultural area was used for organic production in 2005. The countries with the highest proportion of organic land were Austria (11%) and Italy (8.4%), followed by the Czech Republic and Greece (both 7.2%). The lowest figures were shown for Malta (0.2%), Poland (0.6%) and Ireland (0.8%). In 2009, the proportion of organic land in the EU grew to 4.7%. 
The countries with the highest share of agricultural land were Liechtenstein (26.9%), Austria (18.5%) and Sweden (12.6%). 16% of all farmers in Austria produced organically in 2010. By the same year the proportion of organic land increased to 20%. In 2005, of land in Poland was under organic management. In 2012, were under organic production, and there were about 15,500 organic farmers; retail sales of organic products were EUR 80 million in 2011. As of 2012 organic exports were part of the government's economic development strategy. After the collapse of the Soviet Union in 1991, agricultural inputs that had previously been purchased from Eastern bloc countries were no longer available in Cuba, and many Cuban farms converted to organic methods out of necessity. Consequently, organic agriculture is a mainstream practice in Cuba, while it remains an alternative practice in most other countries.Andrea Swenson for Modern Farmer. 17 November 2014 Photo Essay: Cuban Farmers Return to the Old Ways Cuba's organic strategy includes development of genetically modified crops; specifically corn that is resistant to the palomilla moth. Growth In 2001, the global market value of certified organic products was estimated at US$20 billion. By 2002, this was US$23 billion and by 2015 more than US$43 billion. By 2014, retail sales of organic products reached US$80 billion worldwide. North America and Europe accounted for more than 90% of all organic product sales. In 2018 Australia accounted for 54% of the world's certified organic land with the country recording more than . Organic agricultural land increased almost fourfold in 15 years, from in 1999 to in 2014. Between 2013 and 2014, organic agricultural land grew by worldwide, increasing in every region except Latin America. During this time period, Europe's organic farmland increased to (+2.3%), Asia's increased to (+4.7%), Africa's increased to total (+4.5%), and North America's increased to total (+1.1%). As of 2014, the country with the most organic land was Australia, followed by Argentina, and the United States. Australia's organic land area has increased at a rate of 16.5% per annum for the past eighteen years. In 2013, the number of organic producers grew by almost 270,000, or more than 13%. By 2014, there were a reported 2.3 million organic producers in the world. Most of the total global increase took place in the Philippines, Peru, China, and Thailand. Overall, the majority of all organic producers are in India (650,000 in 2013), Uganda (190,552 in 2014), Mexico (169,703 in 2013) and the Philippines (165,974 in 2014). In 2016, organic farming produced over of bananas, over of soybean, and just under of coffee. Productivity Studies comparing yields have had mixed results. These differences among findings can often be attributed to variations between study designs including differences in the crops studied and the methodology by which results were gathered. A 2012 meta-analysis found that productivity is typically lower for organic farming than conventional farming, but that the size of the difference depends on context and in some cases may be very small. While organic yields can be lower than conventional yields, another meta-analysis published in Sustainable Agriculture Research in 2015, concluded that certain organic on-farm practices could help narrow this gap. 
Timely weed management and the application of manure in conjunction with legume forages/cover crops were shown to have positive results in increasing organic corn and soybean productivity. Another meta-analysis, published in the journal Agricultural Systems in 2011, analyzed 362 datasets and found that organic yields were on average 80% of conventional yields. The authors found that there are relative differences in this yield gap based on crop type, with crops like soybeans and rice scoring higher than the 80% average and crops like wheat and potato scoring lower. Across global regions, Asia and Central Europe were found to have relatively higher yields and Northern Europe relatively lower than the average. Long term studies A study published in 2005 compared conventional cropping, organic animal-based cropping, and organic legume-based cropping on a test farm at the Rodale Institute over 22 years. The study found that "the crop yields for corn and soybeans were similar in the organic animal, organic legume, and conventional farming systems". It also found that "significantly less fossil energy was expended to produce corn in the Rodale Institute's organic animal and organic legume systems than in the conventional production system. There was little difference in energy input between the different treatments for producing soybeans. In the organic systems, synthetic fertilizers and pesticides were generally not used". As of 2013 the Rodale study was ongoing and a thirty-year anniversary report was published by Rodale in 2012. A long-term field study comparing organic and conventional agriculture carried out over 21 years in Switzerland concluded that "Crop yields of the organic systems averaged over 21 experimental years at 80% of the conventional ones. The fertilizer input, however, was 34 – 51% lower, indicating an efficient production. The organic farming systems used 20 – 56% less energy to produce a crop unit and per land area this difference was 36 – 53%. In spite of the considerably lower pesticide input the quality of organic products was hardly discernible from conventional analytically and even came off better in food preference trials and picture creating methods." Profitability In the United States, organic farming has been shown to be 2.7 to 3.8 times more profitable for the farmer than conventional farming when prevailing price premiums are taken into account. Globally, organic farming is 22–35% more profitable for farmers than conventional methods, according to a 2015 meta-analysis of studies conducted across five continents. The profitability of organic agriculture can be attributed to a number of factors. First, organic farmers do not rely on synthetic fertilizer and pesticide inputs, which can be costly. In addition, organic foods currently enjoy a price premium over conventionally produced foods, meaning that organic farmers can often get more for their yield. The price premium for organic food is an important factor in the economic viability of organic farming. In 2013 there was a 100% price premium on organic vegetables and a 57% price premium for organic fruits. These percentages are based on wholesale fruit and vegetable prices, available through the United States Department of Agriculture's Economic Research Service. Price premiums exist not only for organic versus nonorganic crops, but may also vary depending on the venue where the product is sold: farmers' markets, grocery stores, or wholesale to restaurants.
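How a price premium offsets a yield gap can be made concrete with a small sketch. The calculation below compares per-hectare profit for a conventional and an organic crop given a relative yield, a price premium, and production costs; the specific numbers are hypothetical illustration values rather than figures reported in the studies cited here.

```python
# Sketch: break-even price premium for an organic crop with a yield gap.
# All input numbers are hypothetical illustration values.

def profit_per_ha(yield_t: float, price_per_t: float, cost_per_ha: float) -> float:
    """Revenue minus production cost for one hectare."""
    return yield_t * price_per_t - cost_per_ha

def breakeven_premium(rel_yield: float, conv_cost: float, org_cost: float,
                      conv_yield_t: float, conv_price: float) -> float:
    """Premium (fraction of conventional price) at which organic profit equals conventional."""
    conv_profit = profit_per_ha(conv_yield_t, conv_price, conv_cost)
    org_yield_t = rel_yield * conv_yield_t
    # Solve org_yield * conv_price * (1 + p) - org_cost = conv_profit for p.
    return (conv_profit + org_cost) / (org_yield_t * conv_price) - 1.0

if __name__ == "__main__":
    premium = breakeven_premium(
        rel_yield=0.8,        # organic yield at 80% of conventional (cf. yield studies above)
        conv_cost=900.0,      # conventional production cost per hectare (hypothetical)
        org_cost=800.0,       # organic production cost per hectare (hypothetical, lower inputs)
        conv_yield_t=8.0,     # conventional yield, tonnes per hectare (hypothetical)
        conv_price=200.0,     # conventional price per tonne (hypothetical)
    )
    print(f"Break-even organic price premium: {premium:.1%}")
```

With lower organic input costs, the break-even premium falls well below the roughly 25% that a 20% yield gap alone would imply, which is consistent with the low parity premiums reported in the studies discussed below.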
Sales venue also matters: for many producers, direct sales at farmers' markets are most profitable because the farmer receives the entire markup; however, this is also the most time- and labour-intensive approach. There have been signs of organic price premiums narrowing in recent years, which lowers the economic incentive for farmers to convert to or maintain organic production methods. Data from 22 years of experiments at the Rodale Institute found that, based on the current yields and production costs associated with organic farming in the United States, a price premium of only 10% is required to achieve parity with conventional farming. A separate study found that on a global scale, price premiums of only 5-7% were needed to break even with conventional methods. Without the price premium, profitability for farmers is mixed. For markets and supermarkets organic food is profitable as well, and is generally sold at significantly higher prices than non-organic food. Energy efficiency Compared to conventional agriculture, the energy efficiency of organic farming depends upon crop type and farm size. Two studies, both comparing organically versus conventionally farmed apples, report contradictory results: one found organic farming more energy efficient, the other found conventional farming more efficient. It has generally been found that the labor input per unit of yield was higher for organic systems compared with conventional production. Sales and marketing Most sales are concentrated in developed nations. In 2008, 69% of Americans claimed to occasionally buy organic products, down from 73% in 2005. One theory for this change was that consumers were substituting "local" produce for "organic" produce (The Hartman Group, Organic Marketplace Reports). Distributors The USDA requires that distributors, manufacturers, and processors of organic products be certified by an accredited state or private agency. In 2007, there were 3,225 certified organic handlers, up from 2,790 in 2004. Organic handlers are often small firms; 48% reported sales below $1 million annually, and 22% between $1 and $5 million per year. Smaller handlers are more likely to sell to independent natural grocery stores and natural product chains, whereas large distributors more often market to natural product chains and conventional supermarkets, with a small group marketing to independent natural product stores. Some handlers work with conventional farmers to convert their land to organic with the knowledge that the farmer will have a secure sales outlet. This lowers the risk for the handler as well as the farmer. In 2004, 31% of handlers provided technical support on organic standards or production to their suppliers and 34% encouraged their suppliers to transition to organic. Smaller farms often join in cooperatives to market their goods more effectively. 93% of organic sales are through conventional and natural food supermarkets and chains, while the remaining 7% of U.S. organic food sales occur through farmers' markets, foodservices, and other marketing channels. Direct-to-consumer sales In the 2012 Census, direct-to-consumer sales equalled $1.3 billion, up from $812 million in 2002, an increase of 60 percent. The number of farms that utilize direct-to-consumer sales was 144,530 in 2012 in comparison to 116,733 in 2002. Direct-to-consumer sales include farmers' markets, community supported agriculture (CSA), on-farm stores, and roadside farm stands.
Some organic farms also sell products direct to retailers, direct to restaurants, and direct to institutions. According to the 2008 Organic Production Survey, approximately 7% of organic farm sales were direct-to-consumer, 10% went direct to retailers, and approximately 83% went into wholesale markets. In comparison, only 0.4% of the value of conventional agricultural commodities was sold direct-to-consumer. While not all products sold at farmers' markets are certified organic, this direct-to-consumer avenue has become increasingly popular in local food distribution and has grown substantially since 1994. In 2014, there were 8,284 farmers' markets in comparison to 3,706 in 2004 and 1,755 in 1994, most of which are found in populated areas such as the Northeast, Midwest, and West Coast. Labour and employment Organic production is more labour-intensive than conventional production. Increased labour cost is one factor that contributes to organic food being more expensive. Organic farming's increased labour requirements can also be seen positively, as providing more job opportunities. The 2011 UNEP Green Economy Report suggests that "[a]n increase in investment in green agriculture is projected to lead to growth in employment of about 60 per cent compared with current levels" and that "green agriculture investments could create 47 million additional jobs compared with BAU2 over the next 40 years". Much of the growth in women's labour participation in agriculture is outside the "male dominated field of conventional agriculture". Organic farming has a greater percentage of women working on farms, 21% compared with 14% in farming in general. World's food security In 2007 the United Nations Food and Agriculture Organization (FAO) said that organic agriculture often leads to higher prices and hence a better income for farmers, so it should be promoted. However, FAO stressed that organic farming could not feed the current human population, much less the larger future population. Both data and models showed that organic farming was far from sufficient. Therefore, chemical fertilizers were needed to avoid hunger. Others have argued that organic farming is particularly well-suited to food-insecure areas, and therefore could be "an important part of increased food security" in places like sub-Saharan Africa. FAO stressed that fertilizers and other chemical inputs can increase production, particularly in Africa where fertilizers are currently used 90% less than in Asia. For example, in Malawi the yield has been boosted using seeds and fertilizers. Also NEPAD, a development organization of African governments, announced that feeding Africans and preventing malnutrition requires fertilizers and enhanced seeds. According to a 2012 study from McGill University, organic best management practices show an average yield only 13% less than conventional. In the world's poorer nations, where most of the world's hungry live and where conventional agriculture's expensive inputs are not affordable for the majority of farmers, adopting organic management actually increases yields 93% on average, and could be an important part of increased food security. Capacity building in developing countries Organic agriculture can contribute to ecological sustainability, especially in poorer countries. The application of organic principles enables employment of local resources (e.g., local seed varieties, manure, etc.) and therefore cost-effectiveness.
Local and international markets for organic products show tremendous growth prospects and offer creative producers and exporters excellent opportunities to improve their income and living conditions. Organic agriculture is knowledge intensive. Globally, capacity building efforts are underway, including localized training material, to limited effect. As of 2007, the International Federation of Organic Agriculture Movements hosted more than 170 free manuals and 75 training opportunities online. In 2008 the United Nations Environmental Programme (UNEP) and the United Nations Conference on Trade and Development (UNCTAD) stated that "organic agriculture can be more conducive to food security in Africa than most conventional production systems, and that it is more likely to be sustainable in the long-term" and that "yields had more than doubled where organic, or near-organic practices had been used" and that soil fertility and drought resistance improved. Millennium Development Goals The value of organic agriculture (OA) in the achievement of the Millennium Development Goals (MDG), particularly in poverty reduction efforts in the face of climate change, is shown by its contribution to both income and non-income aspects of the MDGs. These benefits are expected to continue in the post-MDG era. A series of case studies conducted in selected areas in Asian countries by the Asian Development Bank Institute (ADBI) and published as a book compilation by ADB in Manila document these contributions to both income and non-income aspects of the MDGs. These include poverty alleviation by way of higher incomes, improved farmers' health owing to less chemical exposure, integration of sustainable principles into rural development policies, improvement of access to safe water and sanitation, and expansion of global partnership for development as small farmers are integrated in value chains. A related ADBI study also sheds light on the costs of OA programs and sets them in the context of the costs of attaining the MDGs. The results show considerable variation across the case studies, suggesting that there is no clear structure to the costs of adopting OA. Costs depend on the efficiency of the OA adoption programs. The lowest cost programs were more than ten times less expensive than the highest cost ones. However, further analysis of the gains resulting from OA adoption reveals that the costs per person taken out of poverty were much lower than the estimates of the World Bank, based on income growth in general or based on the detailed costs of meeting some of the more quantifiable MDGs (e.g., education, health, and environment). Externalities Agriculture imposes negative externalities upon society through public land and other public resource use, biodiversity loss, erosion, pesticides, nutrient pollution, and assorted other problems. Positive externalities include self-reliance, entrepreneurship, respect for nature, and air quality. Organic methods differ from conventional methods in the impacts of their respective externalities, dependent on implementation and crop type. Overall land use is generally higher for organic methods, but organic methods generally use less energy in production. The analysis and comparison of externalities is complicated by whether the comparison is done using a per unit area measurement or per unit of production (a distinction illustrated below), and whether analysis is done on isolated plots or on farm units as a whole. Measurements of biodiversity are highly variable between studies, farms, and organism groups.
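The per-area versus per-output distinction matters because organic yields are typically lower. The sketch below shows how an emission that is lower per hectare can nonetheless be higher per tonne of food once the yield gap is taken into account; the emission factors and yields are hypothetical illustration values.

```python
# Sketch: why per-hectare and per-output comparisons can point in opposite directions.
# Emission factors and yields below are hypothetical illustration values.

def per_output(emission_per_ha: float, yield_t_per_ha: float) -> float:
    """Emission per tonne of product."""
    return emission_per_ha / yield_t_per_ha

if __name__ == "__main__":
    conventional = {"emission_per_ha": 100.0, "yield_t_per_ha": 8.0}  # arbitrary units
    organic      = {"emission_per_ha": 90.0,  "yield_t_per_ha": 6.4}  # 10% lower per ha, 80% yield

    conv_pt = per_output(**conventional)
    org_pt = per_output(**organic)
    print(f"per hectare: conventional {conventional['emission_per_ha']:.1f} "
          f"vs organic {organic['emission_per_ha']:.1f}  -> organic lower")
    print(f"per tonne  : conventional {conv_pt:.1f} vs organic {org_pt:.1f}  -> organic higher")
```

Whenever the percentage reduction per hectare is smaller than the percentage yield gap, the per-output figure flips in the other direction, which is the pattern reported below for nitrate leaching and several emission categories.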
"Birds, predatory insects, soil organisms and plants responded positively to organic farming, while non-predatory insects and pests did not. A 2005 review found that the positive effects of organic farming on abundance were prominent at the plot and field scales, but not for farms in matched landscapes." Other studies that have attempted to examine and compare conventional and organic systems of farming and have found that organic techniques reduce levels of biodiversity less than conventional systems do, and use less energy and produce less waste when calculated per unit area, although not when calculated per unit of output. "Farm comparisons show that actual (nitrate) leaching rates per hectare[/acre] are up to 57% lower on organic than on conventional fields. However, the leaching rates per unit of output were similar or slightly higher." "On a per-hectare[/-acre] scale, the CO2 emissions are 4060% lower in organic farming systems than in conventional ones, whereas on a per-unit output scale, the CO2 emissions tend to be higher in organic farming systems." It has been proposed that organic agriculture can reduce the level of some negative externalities from (conventional) agriculture. Whether the benefits are private, or public depends upon the division of property rights. Issues According to a meta analysis published in 2017, compared to conventional agriculture, biological agriculture has a higher land requirement per yield unit, a higher eutrophication potential, a higher acidification potential and a lower energy requirement, but is associated with similarly high greenhouse gas emissions. A 2003 to 2005 investigation by the Cranfield University for the Department for Environment, Food and Rural Affairs in the UK found that it is difficult to compare the Global warming potential, acidification and eutrophication emissions but "Organic production often results in increased burdens, from factors such as N leaching and N2O emissions", even though primary energy use was less for most organic products. N2O is always the largest global warming potential contributor except in tomatoes. However, "organic tomatoes always incur more burdens (except pesticide use)". Some emissions were lower "per area", but organic farming always required 65 to 200% more field area than non-organic farming. The numbers were highest for bread wheat (200+ % more) and potatoes (160% more).Determining the environmental burdens and resource use in the production of agricultural and horticultural commodities. - IS0205 , Williams, A.G. et al., Cranfield University, U.K., August 2006. Svensk mat- och miljöinformation. Pages 4-6, 29 and 84-85. As of 2020 it seems that organic agriculture can help in mitigating climate change but only if used in certain ways. Environmental impact and emissions Researchers at Oxford University analysed 71 peer-reviewed studies and observed that organic products are sometimes worse for the environment. Organic milk, cereals, and pork generated higher greenhouse gas emissions per product than conventional ones but organic beef and olives had lower emissions in most studies. Usually organic products required less energy, but more land. Per unit of product, organic produce generates higher nitrogen leaching, nitrous oxide emissions, ammonia emissions, eutrophication, and acidification potential than conventionally grown produce. Other differences were not significant. 
The researchers concluded that public debate should consider the various ways of employing conventional or organic farming, and not merely frame the issue as conventional farming versus organic farming. They also sought solutions tailored to specific circumstances. A 2018 review article in the Annual Review of Resource Economics found that organic agriculture is more polluting per unit of output and that widespread upscaling of organic agriculture would cause additional loss of natural habitats. Proponents of organic farming have claimed that organic agriculture emphasizes closed nutrient cycles, biodiversity, and effective soil management, providing the capacity to mitigate and even reverse the effects of climate change, and that organic agriculture can decrease fossil fuel emissions. "The carbon sequestration efficiency of organic systems in temperate climates is almost double that of conventional treatment of soils, mainly owing to the use of grass clovers for feed and of cover crops in organic rotations." However, studies acknowledge that organic systems require more acreage to produce the same yield as conventional farms. Converting to organic farming in developed countries, where most arable land is already accounted for, would increase deforestation and thereby decrease overall carbon sequestration. Nutrient leaching According to a 2012 meta-analysis of 71 studies, nitrogen leaching, nitrous oxide emissions, ammonia emissions, eutrophication potential and acidification potential were higher for organic products. Specifically, emissions per area of land are lower, but emissions per amount of food produced are higher. This is due to the lower crop yield of organic farms. Excess nutrients in lakes, rivers, and groundwater can cause algal blooms, eutrophication, and subsequent dead zones. In addition, nitrates are harmful to aquatic organisms in themselves. Land use A 2012 Oxford meta-analysis of 71 studies found that organic farming requires 84% more land for an equivalent amount of harvest, mainly due to lack of nutrients but sometimes due to weeds, diseases or pests, lower-yielding animals and land required for fertility-building crops. While organic farming does not necessarily save land for wildlife habitats and forestry in all cases, the most recent developments in organic farming are addressing these issues with some success. Professor Wolfgang Branscheid says that organic animal production is not good for the environment, because organic chicken requires twice as much land as "conventional" chicken and organic pork a quarter more. According to a calculation by the Hudson Institute, organic beef requires three times as much land. On the other hand, certain organic methods of animal husbandry have been shown to restore desertified, marginal, or otherwise unavailable land to agricultural productivity and wildlife, or to reduce net land use by producing both forage and cash crops from the same fields simultaneously. SRI methods for rice production, without external inputs, have produced record yields on some farms, but not others. Pesticides In organic farming, the use of synthetic pesticides and of certain natural compounds that are produced using chemical synthesis is prohibited. The organic label's restrictions are based not only on the nature of the compound but also on the method of production. A non-exhaustive list of organic-approved pesticides and their median lethal doses: Boric acid is used as an insecticide (LD50: 2660 mg/kg).
Copper(II) sulfate is used as a fungicide and is also used in conventional agriculture (LD50: 300 mg/kg); conventional agriculture has the option to use the less toxic mancozeb (LD50: 4,500 to 11,200 mg/kg). Lime sulfur (also known as calcium polysulfide) and sulfur are allowed synthetic materials (LD50: 820 mg/kg). Neem oil is used as an insect repellent in India; since it contains azadirachtin, its use is restricted in the UK and Europe. Pyrethrin comes from chemicals extracted from flowers of the genus Pyrethrum (LD50: 370 mg/kg); its potent toxicity is used to control insects. Food quality and safety While there may be some differences in the amounts of nutrients and anti-nutrients when organically produced food and conventionally produced food are compared, the variable nature of food production and handling makes it difficult to generalize results, and there is insufficient evidence to make claims that organic food is safer or healthier than conventional food. (Blair, Robert (2012). Organic Production and Food Quality: A Down to Earth Analysis. Wiley-Blackwell, Oxford, UK.) Soil conservation Supporters claim that organically managed soil has a higher quality and higher water retention, which may help increase yields for organic farms in drought years. Organic farming can build up soil organic matter better than conventional no-till farming, which suggests long-term yield benefits from organic farming. An 18-year study of organic methods on nutrient-depleted soil concluded that conventional methods were superior for soil fertility and yield for nutrient-depleted soils in cold-temperate climates, arguing that much of the benefit from organic farming derives from imported materials that could not be regarded as self-sustaining. In Dirt: The Erosion of Civilizations, geomorphologist David Montgomery outlines a coming crisis from soil erosion. Agriculture relies on roughly one meter of topsoil, and that is being depleted ten times faster than it is being replaced. No-till farming, which some claim depends upon pesticides, is one way to minimize erosion. However, a 2007 study by the USDA's Agricultural Research Service found that manure applications in tilled organic farming are better at building up the soil than no-till. (Hepperly, Paul, Jeff Moyer, and Dave Wilson. "Developments in Organic No-till Agriculture." Acres USA: The Voice of Eco-agriculture, September 2008: 16–19; and Roberts, Paul. "The End of Food: Investigating a Global Crisis." Interview with Acres USA. Acres USA: The Voice of Eco-Agriculture, October 2008: 56–63.) Gunsmoke Farms, an organic farming project in South Dakota, suffered massive soil erosion as a result of tilling after it switched to organic farming. Biodiversity The conservation of natural resources and biodiversity is a core principle of organic production. Three broad management practices (prohibition/reduced use of chemical pesticides and inorganic fertilizers; sympathetic management of non-cropped habitats; and preservation of mixed farming) that are largely intrinsic (but not exclusive) to organic farming are particularly beneficial for farmland wildlife. Using practices that attract or introduce beneficial insects, provide habitat for birds and mammals, and create conditions that increase soil biotic diversity serves to supply vital ecological services to organic production systems.
Advantages to certified organic operations that implement these types of production practices include: 1) decreased dependence on outside fertility inputs; 2) reduced pest-management costs; 3) more reliable sources of clean water; and 4) better pollination. Nearly all non-crop, naturally occurring species observed in comparative farmland practice studies show a preference for organic farming, both in abundance and in diversity. An average of 30% more species inhabit organic farms. Birds, butterflies, soil microbes, beetles, earthworms, spiders, vegetation, and mammals are particularly affected. The absence of herbicides and pesticides improves biodiversity fitness and population density. Many weed species attract beneficial insects that improve soil qualities and forage on weed pests. Soil-bound organisms often benefit because of increased bacteria populations due to natural fertilizer such as manure, while experiencing reduced intake of herbicides and pesticides. Increased biodiversity, especially from beneficial soil microbes and mycorrhizae, has been proposed as an explanation for the high yields experienced by some organic plots, especially in light of the differences seen in a 21-year comparison of organic and control fields. Organic farming contributes to human capital by promoting biodiversity. The presence of various species on organic farms helps to reduce human inputs, such as fertilizers and pesticides, which enhances sustainability. The USDA's Agricultural Marketing Service (AMS) published a Federal Register notice on 15 January 2016, announcing the National Organic Program (NOP) final guidance on Natural Resources and Biodiversity Conservation for Certified Organic Operations. Given the broad scope of natural resources, which includes soil, water, wetland, woodland and wildlife, the guidance provides examples of practices that support the underlying conservation principles and demonstrate compliance with USDA organic regulations § 205.200. The final guidance provides organic certifiers and farms with examples of production practices that support conservation principles and comply with the USDA organic regulations, which require operations to maintain or improve natural resources. The final guidance also clarifies the roles of certified operations (to submit an organic system plan, or OSP, to a certifier), certifiers (to ensure that the OSP describes or lists practices that explain the operator's monitoring plan and practices to support natural resources and biodiversity conservation), and inspectors (to conduct onsite inspections) in the implementation and verification of these production practices. A wide range of organisms benefit from organic farming, but it is unclear whether organic methods confer greater benefits than conventional integrated agri-environmental programs. Organic farming is often presented as a more biodiversity-friendly practice, but the generality of its beneficial effects is debated, as the effects often appear species- and context-dependent, and current research has highlighted the need to quantify the relative effects of local- and landscape-scale management on farmland biodiversity. There are four key issues when comparing the impacts on biodiversity of organic and conventional farming: (1) It remains unclear whether a holistic whole-farm approach (i.e. organic) provides greater benefits to biodiversity than carefully targeted prescriptions applied to relatively small areas of cropped and/or non-cropped habitats within conventional agriculture (i.e.
agri-environment schemes); (2) Many comparative studies encounter methodological problems, limiting their ability to draw quantitative conclusions; (3) Our knowledge of the impacts of organic farming in pastoral and upland agriculture is limited; (4) There remains a pressing need for longitudinal, system-level studies in order to address these issues and to fill the gaps in our knowledge of the impacts of organic farming, before a full appraisal of its potential role in biodiversity conservation in agroecosystems can be made. Labour standards Organic agriculture is often considered to be more socially just and economically sustainable for farmworkers than conventional agriculture. However, there is little social science research or consensus as to whether or not organic agriculture provides better working conditions than conventional agriculture. Because many consumers equate organic and sustainable agriculture with small-scale, family-owned organizations, it is widely assumed that buying organic supports better conditions for farmworkers than buying from conventional producers. Organic agriculture is generally more labour-intensive due to its dependence on manual practices for fertilization and pest removal. Although illnesses from inputs pose less of a risk, hired workers still fall victim to debilitating musculoskeletal disorders associated with agricultural work. The USDA certification requirements outline growing practices and ecological standards but do nothing to codify labour practices. Independent certification initiatives such as the Agricultural Justice Project, Domestic Fair Trade Working Group, and the Food Alliance have attempted to represent farmworker interests, but because these initiatives require the voluntary participation of organic farms, their standards cannot be widely enforced. Despite the benefit to farmworkers of implementing labour standards, there is little support among the organic community for these social requirements. Many actors in the organic industry believe that enforcing labour standards would be unnecessary, unacceptable, or unviable due to the constraints of the market. Regional support for organic farming The following is a selected list of support given in some regions. Europe The EU organic production regulation is a part of European Union law that sets rules for the production of organic agricultural and livestock products and for how to label them. In the EU, organic farming and organic food are more commonly known as ecological or biological. The regulation is derived from the guidelines of the International Federation of Organic Agriculture Movements (IFOAM), which is an association of about 800 member organizations in 119 countries. As in the rest of the world, the organic market in Europe continues to grow and more land is farmed organically each year. "More farmers cultivate organically, more land is certified organic, and more countries report organic farming activities", according to the 2016 edition of the study "The World of Organic Agriculture", based on data from the end of 2014 published by FiBL and IFOAM in 2016. Denmark Denmark has long supported the conversion of conventional farming to organic farming, which has been taught in university classes since 1986. The state introduced subsidies and has promoted a special national label for products that qualify as organic since 1989.
Denmark was thus the first country in the world to subsidize organic farming, promoting the concept and organizing the distribution of organic products. Today the government accepts applications for financial support during the conversion years, since under Danish regulations farms must not have used conventional farming methods, such as pesticides, for several years before their products can be assessed for qualification as organic. This financial support has in recent years been cut because organic farming has become more profitable, with some goods surpassing the profitability of conventional farming in domestic markets. In general, the financial situation of organic farmers in Denmark boomed between 2010 and 2018, although in 2018 a serious, long-lasting nationwide drought stagnated the economic results of organic farmers; even so, the average farmer still achieved a net positive result that year. In 2021 Denmark's (and Europe's) largest slaughterhouse, Danish Crown, publicized its expectation of stagnating domestic sales of conventional pork, while expecting increasing sales of organic pork, especially free-range organic pork. Besides the conversion support, there are still base subsidies for organic farming paid per area of qualified farmland. The first Danish private development organisation, SamsØkologisk, was established in 2013 by veteran organic farmers from the existing organisation Økologisk Samsø. The development organisation intends to buy and invest in farmland and then lend the land to young and aspiring farmers seeking to get into farming, especially organic farming. This organisation reported 300 economically active members as of 2021, but does not publish the amount of land acquired or the number of farmers borrowing land. However, the organic farming concept in Denmark often goes beyond organic farming as it is defined globally: the majority of Danish organic farming is instead "ecological farming". This concept has developed in parallel with the general organic farming movement, and the term is most often used interchangeably with organic farming. There is thus a much stronger focus on the environmental and especially the ecological impact of ecological farming than of organic farming. For example, besides the base subsidy for organic farming, farmers can qualify for an extra subsidy equal to two-thirds of the base for achieving a specific reduction in the amount of nitrogen added to the farmland (including by organic means). There are also parallels to the extended organic movement of regenerative agriculture, although far from all concepts in regenerative agriculture are included in the national strategy at this time; they exist instead as voluntary options for each farmer. For these reasons, international organic products do not fulfill the requirements of ecological farming and thus do not receive the domestic label for ecological products; rather, they receive the standard European Union organic label. Ukraine The Ministry of Agrarian Policy and Food of Ukraine is the central executive body that develops the regulatory framework for the organic sector in Ukraine, maintains the state registers of certification bodies, operators and organic seeds and planting material, and provides training and professional development for organic inspectors.
Following work on organic legislation by the Ministry of Agrarian Policy and Food of Ukraine and an organic working group that includes the main players of Ukraine's organic sector, the Verkhovna Rada of Ukraine (the Ukrainian Parliament) adopted, on 10 July 2018, the Law of Ukraine “On Basic Principles and Requirements for Organic Production, Circulation and Labelling of Organic Products” No. 2496, which was enacted on 2 August 2019. As of April 2024, organic production, circulation and labelling of organic products in Ukraine are regulated by this law as well as by relevant by-laws. Another important governmental institution in the organic sector of Ukraine is the State Service of Ukraine on Food Safety and Consumer Protection. It is the central executive body authorised to conduct state supervision (control) in the field of organic production, circulation and labelling of organic products in accordance with the organic legislation of Ukraine. This includes state supervision (control) over compliance with the legislation in the field of organic production, circulation and labelling of organic products: inspection of certification bodies; random inspection of operators; and monitoring of organic products on the market to prevent the entry of non-organic products labelled as organic. The State Institution “Entrepreneurship and Export Promotion Office” (EEPO, Ukraine) contributes to the development of Ukrainian organic exporters' potential, the promotion of the organic sector and the formation of a positive image of Ukraine as a reliable supplier of organic products abroad. EEPO actively supports and organises various events for organic exporters, including national pavilions at key international trade fairs, such as BIOFACH (Nuremberg, Germany), Anuga (Cologne, Germany), SIAL (Paris, France), and the Middle East Organic & Natural Products Expo (Dubai, UAE). EEPO also created the Catalogue of Ukrainian Exporters of Organic Products in partnership with the Organic Standard certification body. Organic farming in Ukraine is also supported by international technical assistance projects and programmes whose implementation is funded and supported by Switzerland, Germany, and other countries. These projects and programmes are the Swiss-Ukrainian program “Higher Value Added Trade from the Organic and Dairy Sector in Ukraine” (QFTP), financed by Switzerland and implemented by the Research Institute of Organic Agriculture (FiBL, Switzerland) in partnership with SAFOSO AG (Switzerland); the Swiss-Ukrainian program “Organic Trade for Development in Eastern Europe” (OT4D), financed by Switzerland through the Swiss State Secretariat for Economic Affairs (SECO) and implemented by IFOAM – Organics International in partnership with HELVETAS Swiss Intercooperation and the Research Institute of Organic Agriculture (FiBL, Switzerland); and the project “German-Ukrainian Cooperation in Organic Agriculture” (COA). Representatives of these projects and programmes provide expertise during the development of the organic legislative framework and the implementation of legislation in the field of organic production, circulation and labelling of organic products, and support various activities related to organic farming and production. China The Chinese government, especially local governments, has provided various kinds of support for the development of organic agriculture since the 1990s. Organic farming has been recognized by local governments for its potential in promoting sustainable rural development.
It is common for local governments to facilitate agribusinesses' access to land by negotiating land leases with local farmers. The government also establishes demonstration organic gardens, provides training for organic food companies to pass certifications, and subsidizes organic certification fees, pest-repellent lamps, organic fertilizer and so on. The government has also been playing an active role in marketing organic products through organizing organic food expos and providing branding support. India In India, in 2016, the northern state of Sikkim achieved its goal of converting to 100% organic farming. ("Sikkim makes an organic shift". Times of India. 7 May 2010. Retrieved 29 November 2012; "Sikkim races on organic route". Telegraph India. 12 December 2011. Retrieved 29 November 2012.) Other states of India, including Kerala, Mizoram, Goa, Rajasthan, and Meghalaya, have also declared their intentions to shift to fully organic cultivation. The south Indian state of Andhra Pradesh is also promoting organic farming, especially Zero Budget Natural Farming (ZBNF), which is a form of regenerative agriculture. As of 2018, India has the largest number of organic farmers in the world, accounting for more than 30% of organic farmers globally. India has 835,000 certified organic producers. However, the total land under organic cultivation is around 2% of overall farmland. Dominican Republic The Dominican Republic has successfully converted a large amount of its banana crop to organic. The Dominican Republic accounts for 55% of the world's certified organic bananas. South Korea The most noticeable change in Korea's agriculture occurred during the 1960s and 1970s, under the "Green Revolution" program, when South Korea experienced reforestation and an agricultural revolution. Due to a food shortage during Park Chung Hee's presidency, the government encouraged rice varieties suited for organic farming. Farmers were able to minimize risk by breeding a variety of rice called Japonica with Tongil. They also used less fertilizer and made other economic adjustments to alleviate potential risk factors. Organic farming and food policies have changed in modern times, particularly since the 1990s. The guidelines focus on basic dietary recommendations for nutrient consumption and Korean-style diets. The main reason for this encouragement is that around 88% of countries across the world face some form of malnutrition. Then in 2009, the Special Act on Safety Management of Children's Dietary Life was passed, restricting foods high in calories and poor in nutrients. It also addressed other nutritional problems that Korean students may have had. Thailand In Thailand, the ISAC was established in 1991 to promote organic farming, among other sustainable agricultural practices. The National Plan for Organic Farming set a national target for the area of organically farmed land to be attained by 2021; a further target is for 40% of the produce from these farmlands to be consumed domestically. Much progress has been made: many organic farms have sprouted, growing produce ranging from mangosteen to stinky bean. Some of the farms have also established education centres to promote and share their organic farming techniques and knowledge. In Chiang Mai Province, there are 18 ISAC-linked organic markets.
United States The United States Department of Agriculture Rural Development (USDARD) was created in 1994 as a subsection of the USDA that implements programs to stimulate growth in rural communities. One of the programs that the USDARD created, the Organic Certification Cost Share Program (OCCSP), provided grants to farmers who practiced organic farming. During the 21st century, the United States has continued to expand its reach in the organic foods market, doubling the number of organic farms in the U.S. between 2011 and 2016. Employment on organic farms offers potentially large numbers of jobs, which may help manage the employment effects of the Fourth Industrial Revolution. Moreover, sustainable forestry, fishing, mining, and other conservation-oriented activities provide larger numbers of jobs than more fossil-fuel-intensive and mechanized work. Organic farming grew in the U.S. from 2000 to 2011. In 2016, California had 2,713 organic farms, making it the largest producer of organic goods in the U.S.; 4% of food sales in the U.S. are of organic goods. Sri Lanka As was the case with most countries, Sri Lanka made the transition away from organic farming upon the arrival of the Green Revolution, whereupon it started depending more on chemical fertilizers. This became a highly popular approach when the nation started offering subsidies on the import of artificial fertilizers to increase rice paddy production and to incentivize farmers to switch from growing traditional varieties to high-yielding varieties (HYVs). This was especially true for young farmers, who saw short-term economic profit as more important to their wellbeing than the long-term drawbacks to the environment. However, owing to various health concerns about inorganic farming, including a possible association between chemical fertilizers and chronic kidney disease, many middle-aged and experienced farmers displayed skepticism towards these new approaches. Some even turned to organic farming or to insecticide-free fertilizers for their crops. In a study conducted by F. Horgan and E. Kudavidanage, the researchers compared crop yields of farmers in Sri Lanka who employed distinct farming techniques: organic farmers who grew traditional varieties, and insecticide-free fertilizer users and pesticide users who grew modern varieties. No significant difference was found among the yields; in fact, organic farmers and insecticide-free fertilizer users reported fewer problems with insects such as planthoppers as a challenge to their production. Regardless, many farmers continued to use insecticides to avoid the predicted dangers of pests to their crops, and the cheap sale of agrochemicals provided an easy way to augment crop growth. Additionally, while organic farming has health benefits, it is strenuous work that requires more manpower. Although that presented a great opportunity for increased employment in Sri Lanka, the economic compensation was not enough to cover the living expenses of those employed. Thus, most farmers relied on modern methods to support their households, especially after the economic stressors brought on by COVID-19. However, even as Sri Lanka was facing the new challenges of the pandemic, the president, Gotabaya Rajapaksa, pressed ahead with a proposal from his 2019 presidential election campaign for a 10-year national transition to organic farming, intended to make Sri Lanka the first nation known for its organic produce.
On April 27, 2021, the country issued an order prohibiting the import of any inorganic pesticides or fertilizers, creating chaos among farmers. While the change was made over concerns for the nation's ecosystems and the health of citizens, among whom pesticide poisonings were a prominent cause of death, the precipitous decision was met with criticism from the agriculture industry. This included fears that the mandate would harm the yields of the country's major crops (despite claims to the contrary), that the country would not be able to produce enough organic fertilizer domestically, and that organic farming is more expensive and complex than conventional agriculture. To put this into perspective, agriculture accounts for 7.4% of Sri Lanka's GDP and 30% of citizens work in this sector; roughly a third of the population thus depends on the sector for jobs, making its maintenance crucial for the nation's social and economic wellbeing. Of special concern were rice and tea, a staple food and a major export respectively. Despite a record harvest in the first half of 2021, the tea crop began to decline in July of that year. Rice production fell by 20% over the first six months of the ban, and prices increased by around 50%. Contrary to its past success at self-sustainability, the country had to import US$450 million worth of rice to meet domestic demand. In late August, the government acknowledged that the ban had created a critical dependency on supplies of imported organic fertilizers, but by then food prices had already increased twofold in some cases. In September 2021, the government declared an economic emergency, citing the ban's impact on food prices, as well as inflation from the devaluation of the Sri Lankan currency due to the crashing tea industry, and a lack of tourism induced by COVID-19 restrictions. In November 2021, the country partially lifted the ban on inorganic fertilizers and pesticides for certain key crops such as rubber and tea, and began to offer compensation and subsidies to farmers and rice producers in an attempt to cover losses. The previous subsidies on synthetic fertilizer imports were not reintroduced. See also Agroecology Biointensive Biological pest control Certified Naturally Grown Holistic management (agriculture) List of countries by organic farmland List of organic food topics List of organic gardening and farming topics List of pest-repelling plants Natural Farming Organic farming by continent Organic lawn management Organic movement Organic food culture Permaculture References Further reading Avery, A. The Truth About Organic Foods (Volume 1, Series 1). Henderson Communications, L.L.C. 2006. Committee on the Role of Alternative Farming Methods in Modern Production Agriculture, National Research Council. 1989. Alternative Agriculture. National Academies Press. Guthman, J. Agrarian Dreams: The Paradox of Organic Farming in California, Berkeley and London: University of California Press. 2004. Lampkin, N. and S. Padel (eds.) The Economics of Organic Farming: An International Perspective, CAB International. 1994. Kuepper, G. and L. Gegner. Organic Crop Production Overview. ATTRA — National Sustainable Agriculture Information Service. August 2004. External links Agroecology Sustainable technologies Sustainable food system
Introduction to genetics
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation. Some traits are part of an organism's physical appearance, such as eye color or height. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle. Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism. The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random changes in genes and can create new alleles. Mutations can also produce new traits, such as when mutations to an allele for black hair produce a new allele for white hair. This appearance of new traits is important in evolution. Genes and inheritance Genes are pieces of DNA that contain information for the synthesis of ribonucleic acids (RNAs) or polypeptides. Genes are inherited as units, with two parents dividing out copies of their genes to their offspring. Humans have two copies of each of their genes, but each egg or sperm cell only gets one of those copies for each gene. An egg and sperm join to form a zygote with a complete set of genes. The resulting offspring has the same number of genes as their parents, but for any gene, one of their two copies comes from their father and one from their mother. Example of mixing The effects of mixing depend on the types (the alleles) of the gene. If the father has two copies of an allele for red hair, and the mother has two copies for brown hair, all their children get the two alleles that give different instructions, one for red hair and one for brown. The hair color of these children depends on how these alleles work together. If one allele dominates the instructions from another, it is called the dominant allele, and the allele that is overridden is called the recessive allele. In the case of a daughter with alleles for both red and brown hair, brown is dominant and she ends up with brown hair. Although the red color allele is still there in this brown-haired girl, it doesn't show. 
This is a difference between what is seen on the surface (the traits of an organism, called its phenotype) and the genes within the organism (its genotype). In this example, the allele for brown can be called "B" and the allele for red "b". (It is normal to write dominant alleles with capital letters and recessive ones with lower-case letters.) The brown hair daughter has the "brown hair phenotype" but her genotype is Bb, with one copy of the B allele, and one of the b allele. Now imagine that this woman grows up and has children with a brown-haired man who also has a Bb genotype. Her eggs will be a mixture of two types, one sort containing the B allele, and one sort the b allele. Similarly, her partner will produce a mix of two types of sperm containing one or the other of these two alleles. When the transmitted genes are joined up in their offspring, these children have a chance of getting either brown or red hair, since they could get a genotype of BB = brown hair, Bb = brown hair or bb = red hair. In this generation, there is, therefore, a chance of the recessive allele showing itself in the phenotype of the children—some of them may have red hair like their grandfather. Many traits are inherited in a more complicated way than the example above. This can happen when there are several genes involved, each contributing a small part to the result. Tall people tend to have tall children because their children get a package of many alleles that each contribute a bit to how much they grow. However, there are not clear groups of "short people" and "tall people", like there are groups of people with brown or red hair. This is because of the large number of genes involved; this makes the trait very variable and people are of many different heights. Despite a common misconception, the green/blue eye traits are also inherited in this complex inheritance model. Inheritance can also be complicated when the trait depends on the interaction between genetics and environment. For example, malnutrition does not change traits like eye color, but can stunt growth. How genes work Genes make proteins The function of genes is to provide the information needed to make molecules called proteins in cells. Cells are the smallest independent parts of organisms: the human body contains about 100 trillion cells, while very small organisms like bacteria are just a single cell. A cell is like a miniature and very complex factory that can make all the parts needed to produce a copy of itself, which happens when cells divide. There is a simple division of labor in cells—genes give instructions and proteins carry out these instructions, tasks like building a new copy of a cell, or repairing the damage. Each type of protein is a specialist that only does one job, so if a cell needs to do something new, it must make a new protein to do this job. Similarly, if a cell needs to do something faster or slower than before, it makes more or less of the protein responsible. Genes tell cells what to do by telling them which proteins to make and in what amounts. Proteins are made of a chain of 20 different types of amino acid molecules. This chain folds up into a compact shape, rather like an untidy ball of string. The shape of the protein is determined by the sequence of amino acids along its chain and it is this shape that, in turn, determines what the protein does. For example, some proteins have parts of their surface that perfectly match the shape of another molecule, allowing the protein to bind to this molecule very tightly. 
Other proteins are enzymes, which are like tiny machines that alter other molecules. The information in DNA is held in the sequence of the repeating units along the DNA chain. These units are four types of nucleotides (A, T, G and C) and the sequence of nucleotides stores information in an alphabet called the genetic code. When a gene is read by a cell the DNA sequence is copied into a very similar molecule called RNA (this process is called transcription). Transcription is controlled by other DNA sequences (such as promoters), which show a cell where genes are, and control how often they are copied. The RNA copy made from a gene is then fed through a structure called a ribosome, which translates the sequence of nucleotides in the RNA into the correct sequence of amino acids and joins these amino acids together to make a complete protein chain. The new protein then folds up into its active form. The process of moving information from the language of RNA into the language of amino acids is called translation. If the sequence of the nucleotides in a gene changes, the sequence of the amino acids in the protein it produces may also change—if part of a gene is deleted, the protein produced is shorter and may not work anymore. This is the reason why different alleles of a gene can have different effects on an organism. As an example, hair color depends on how much of a dark substance called melanin is put into the hair as it grows. If a person has a normal set of the genes involved in making melanin, they make all the proteins needed and they grow dark hair. However, if the alleles for a particular protein have different sequences and produce proteins that can't do their jobs, no melanin is produced and the person has white skin and hair (albinism). Genes are copied Genes are copied each time a cell divides into two new cells. The process that copies DNA is called DNA replication. It is through a similar process that a child inherits genes from its parents when a copy from the mother is mixed with a copy from the father. DNA can be copied very easily and accurately because each piece of DNA can direct the assembly of a new copy of its information. This is because DNA is made of two strands that pair together like the two sides of a zipper. The nucleotides are in the center, like the teeth in the zipper, and pair up to hold the two strands together. Importantly, the four different sorts of nucleotides are different shapes, so for the strands to close up properly, an A nucleotide must go opposite a T nucleotide, and a G opposite a C. This exact pairing is called base pairing. When DNA is copied, the two strands of the old DNA are pulled apart by enzymes; then they pair up with new nucleotides and then close. This produces two new pieces of DNA, each containing one strand from the old DNA and one newly made strand. This process is not predictably perfect as proteins attach to a nucleotide while they are building and cause a change in the sequence of that gene. These changes in the DNA sequence are called mutations. Mutations produce new alleles of genes. Sometimes these changes stop the functioning of that gene or make it serve another advantageous function, such as the melanin genes discussed above. These mutations and their effects on the traits of organisms are one of the causes of evolution. Genes and evolution A population of organisms evolves when an inherited trait becomes more common or less common over time. 
For instance, all the mice living on an island would be a single population of mice: some with white fur, some gray. If over generations, white mice became more frequent and gray mice less frequent, then the color of the fur in this population of mice would be evolving. In terms of genetics, this is called an increase in allele frequency. Alleles become more or less common either by chance in a process called genetic drift or by natural selection. In natural selection, if an allele makes it more likely for an organism to survive and reproduce, then over time this allele becomes more common. But if an allele is harmful, natural selection makes it less common. In the above example, if the island were getting colder each year and snow became present for much of the time, then the allele for white fur would favor survival since predators would be less likely to see them against the snow, and more likely to see the gray mice. Over time white mice would become more and more frequent, while gray mice less and less. Mutations create new alleles. These alleles have new DNA sequences and can produce proteins with new properties. So if an island was populated entirely by black mice, mutations could happen creating alleles for white fur. The combination of mutations creating new alleles at random, and natural selection picking out those that are useful, causes an adaptation. This is when organisms change in ways that help them to survive and reproduce. Many such changes, studied in evolutionary developmental biology, affect the way the embryo develops into an adult body. Inherited diseases Some diseases are hereditary and run in families; others, such as infectious diseases, are caused by the environment. Other diseases come from a combination of genes and the environment. Genetic disorders are diseases that are caused by a single allele of a gene and are inherited in families. These include Huntington's disease, cystic fibrosis or Duchenne muscular dystrophy. Cystic fibrosis, for example, is caused by mutations in a single gene called CFTR and is inherited as a recessive trait. Other diseases are influenced by genetics, but the genes a person gets from their parents only change their risk of getting a disease. Most of these diseases are inherited in a complex way, with either multiple genes involved, or coming from both genes and the environment. As an example, the risk of breast cancer is 50 times higher in the families most at risk, compared to the families least at risk. This variation is probably due to a large number of alleles, each changing the risk a little bit. Several of the genes have been identified, such as BRCA1 and BRCA2, but not all of them. However, although some of the risks are genetic, the risk of this cancer is also increased by being overweight, heavy alcohol consumption and not exercising. A woman's risk of breast cancer, therefore, comes from a large number of alleles interacting with her environment, so it is very hard to predict. Genetic engineering Since traits come from the genes in a cell, putting a new piece of DNA into a cell can produce a new trait. This is how genetic engineering works. For example, rice can be given genes from a maize and a soil bacteria so the rice produces beta-carotene, which the body converts to vitamin A. This can help children with Vitamin A deficiency. Another gene being put into some crops comes from the bacterium Bacillus thuringiensis; the gene makes a protein that is an insecticide. 
The insecticide kills insects that eat the plants but is harmless to people. In these plants, the new genes are put into the plant before it is grown, so the genes are in every part of the plant, including its seeds. The plant's offspring inherit the new genes, which has led to concern about the spread of new traits into wild plants. The kind of technology used in genetic engineering is also being developed to treat people with genetic disorders in an experimental medical technique called gene therapy. However, here the new, properly working gene is put in targeted cells, not altering the chance of future children inheriting the disease causing alleles. See also Common misunderstandings of genetics Epigenetics Whole genome sequencing History of genetics Genetics in simple English Outline of genetics Molecular genetics Predictive medicine References External links Introduction to Genetics, University of Utah Introduction to Genes and Disease, NCBI open book Genetics glossary, A talking glossary of genetic terms. Khan Academy on YouTube What Color Eyes Would Your Children Have? Genetics of human eye color: An interactive introduction Transcribe and translate a gene, University of Utah StarGenetics software simulates mating experiments between organisms that are genetically different across a range of traits
Calvin cycle
The Calvin cycle, light-independent reactions, biosynthetic phase, dark reactions, or photosynthetic carbon reduction (PCR) cycle of photosynthesis is a series of chemical reactions that convert carbon dioxide and hydrogen-carrier compounds into glucose. The Calvin cycle is present in all photosynthetic eukaryotes and also many photosynthetic bacteria. In plants, these reactions occur in the stroma, the fluid-filled region of a chloroplast outside the thylakoid membranes. These reactions take the products (ATP and NADPH) of light-dependent reactions and perform further chemical processes on them. The Calvin cycle uses the chemical energy of ATP and the reducing power of NADPH from the light-dependent reactions to produce sugars for the plant to use. These substrates are used in a series of reduction-oxidation (redox) reactions to produce sugars in a step-wise process; there is no direct reaction that converts several molecules of CO2 to a sugar. There are three phases to the light-independent reactions, collectively called the Calvin cycle: carboxylation, reduction reactions, and ribulose 1,5-bisphosphate (RuBP) regeneration. Though it is also called the "dark reaction", the Calvin cycle does not actually occur in the dark or during night time. This is because the process requires NADPH, which is short-lived and comes from the light-dependent reactions. In the dark, plants instead release sucrose into the phloem from their starch reserves to provide energy for the plant. The Calvin cycle thus happens when light is available, independent of the kind of photosynthesis (C3 carbon fixation, C4 carbon fixation, and crassulacean acid metabolism (CAM)); CAM plants store malic acid in their vacuoles every night and release it by day to make this process work. Coupling to other metabolic pathways The reactions of the Calvin cycle are closely coupled to the thylakoid electron transport chain, as the energy required to reduce the carbon dioxide is provided by NADPH produced during the light-dependent reactions. The process of photorespiration, also known as the C2 cycle, is also coupled to the Calvin cycle, as it results from an alternative reaction of the RuBisCO enzyme, and its final byproduct is another glyceraldehyde-3-P molecule. Calvin cycle The Calvin cycle, Calvin–Benson–Bassham (CBB) cycle, reductive pentose phosphate cycle (RPP cycle) or C3 cycle is a series of biochemical redox reactions that take place in the stroma of the chloroplast in photosynthetic organisms. The cycle was discovered in 1950 by Melvin Calvin, James Bassham, and Andrew Benson at the University of California, Berkeley by using the radioactive isotope carbon-14. Photosynthesis occurs in two stages in a cell. In the first stage, light-dependent reactions capture the energy of light and use it to make the energy-storage molecule ATP and the moderate-energy hydrogen carrier NADPH. The Calvin cycle uses these compounds to convert carbon dioxide and water into organic compounds that can be used by the organism (and by animals that feed on it). This set of reactions is also called carbon fixation. The key enzyme of the cycle is called RuBisCO. In the following biochemical equations, the chemical species (phosphates and carboxylic acids) exist in equilibria among their various ionized states as governed by the pH.
The enzymes in the Calvin cycle are functionally equivalent to most enzymes used in other metabolic pathways such as gluconeogenesis and the pentose phosphate pathway, but the enzymes in the Calvin cycle are found in the chloroplast stroma instead of the cell cytosol, separating the reactions. They are activated in the light (which is why the name "dark reaction" is misleading), and also by products of the light-dependent reaction. These regulatory functions prevent the Calvin cycle from being respired to carbon dioxide. Energy (in the form of ATP) would be wasted in carrying out these reactions when they have no net productivity. The sum of reactions in the Calvin cycle is the following: 3 CO2 + 6 NADPH + 9 ATP + 5 H2O → glyceraldehyde-3-phosphate (G3P) + 6 NADP+ + 9 ADP + 8 Pi   (Pi = inorganic phosphate) Hexose (six-carbon) sugars are not products of the Calvin cycle. Although many texts list a product of photosynthesis as C6H12O6, this is mainly for convenience to match the equation of aerobic respiration, where six-carbon sugars are oxidized in mitochondria. The carbohydrate products of the Calvin cycle are three-carbon sugar phosphate molecules, or "triose phosphates", namely, glyceraldehyde-3-phosphate (G3P). Steps In the first stage of the Calvin cycle, a CO2 molecule is incorporated into one of two three-carbon molecules (glyceraldehyde 3-phosphate or G3P), where it uses up two molecules of ATP and two molecules of NADPH, which had been produced in the light-dependent stage. The three steps involved are: The enzyme RuBisCO catalyses the carboxylation of ribulose-1,5-bisphosphate, RuBP, a 5-carbon compound, by carbon dioxide (a total of 6 carbons) in a two-step reaction. The product of the first step is an enediol-enzyme complex that can capture CO2 or O2. Thus, the enediol-enzyme complex is the real carboxylase/oxygenase. The CO2 that is captured by the enediol in the second step produces an unstable six-carbon compound called 2-carboxy 3-keto 1,5-biphosphoribotol (CKABP) (or 3-keto-2-carboxyarabinitol 1,5-bisphosphate) that immediately splits into 2 molecules of 3-phosphoglycerate (also written as 3-phosphoglyceric acid, PGA, 3PGA, or 3-PGA), a 3-carbon compound. The enzyme phosphoglycerate kinase catalyses the phosphorylation of 3-PGA by ATP (which was produced in the light-dependent stage). 1,3-Bisphosphoglycerate (glycerate-1,3-bisphosphate) and ADP are the products. (However, note that two 3-PGAs are produced for every CO2 that enters the cycle, so this step utilizes two ATP per CO2 fixed.) The enzyme glyceraldehyde 3-phosphate dehydrogenase catalyses the reduction of 1,3BPGA by NADPH (which is another product of the light-dependent stage). Glyceraldehyde 3-phosphate (also called G3P, GP, TP, PGAL, GAP) is produced, and the NADPH itself is oxidized and becomes NADP+. Again, two NADPH are utilized per CO2 fixed. The next stage in the Calvin cycle is to regenerate RuBP. Five G3P molecules produce three RuBP molecules, using up three molecules of ATP. Since each CO2 molecule produces two G3P molecules, three CO2 molecules produce six G3P molecules, of which five are used to regenerate RuBP, leaving a net gain of one G3P molecule per three CO2 molecules (as would be expected from the number of carbon atoms involved). The regeneration stage can be broken down into a series of steps. Triose phosphate isomerase converts one of the G3P reversibly into dihydroxyacetone phosphate (DHAP), also a 3-carbon molecule. Aldolase and fructose-1,6-bisphosphatase convert a G3P and a DHAP into fructose 6-phosphate (6C). A phosphate ion is lost into solution.
Then fixation of another CO2 generates two more G3P. F6P has two carbons removed by transketolase, giving erythrose-4-phosphate (E4P). The two carbons on transketolase are added to a G3P, giving the ketose xylulose-5-phosphate (Xu5P). E4P and a DHAP (formed from one of the G3P from the second CO2 fixation) are converted into sedoheptulose-1,7-bisphosphate (7C) by the aldolase enzyme. Sedoheptulose-1,7-bisphosphatase (one of only three enzymes of the Calvin cycle that are unique to plants) cleaves sedoheptulose-1,7-bisphosphate into sedoheptulose-7-phosphate, releasing an inorganic phosphate ion into solution. Fixation of a third CO2 generates two more G3P. The ketose S7P has two carbons removed by transketolase, giving ribose-5-phosphate (R5P), and the two carbons remaining on transketolase are transferred to one of the G3P, giving another Xu5P. This leaves one G3P as the product of fixation of 3 CO2, with generation of three pentoses that can be converted to Ru5P. R5P is converted into ribulose-5-phosphate (Ru5P, RuP) by phosphopentose isomerase. Xu5P is converted into RuP by phosphopentose epimerase. Finally, phosphoribulokinase (another plant-unique enzyme of the pathway) phosphorylates RuP into RuBP, ribulose-1,5-bisphosphate, completing the Calvin cycle. This requires the input of one ATP. Thus, of six G3P produced, five are used to make three RuBP (5C) molecules (totaling 15 carbons), with only one G3P available for subsequent conversion to hexose. This requires nine ATP molecules and six NADPH molecules per three CO2 molecules. The overall equation of the Calvin cycle is given above. RuBisCO also reacts competitively with O2 instead of CO2 in photorespiration. The rate of photorespiration is higher at high temperatures. Photorespiration turns RuBP into 3-PGA and 2-phosphoglycolate, a 2-carbon molecule that can be converted via glycolate and glyoxylate to glycine. Via the glycine cleavage system and tetrahydrofolate, two glycines are converted into serine plus CO2. Serine can be converted back to 3-phosphoglycerate. Thus, only 3 of 4 carbons from two phosphoglycolates can be converted back to 3-PGA. It can be seen that photorespiration has very negative consequences for the plant, because, rather than fixing CO2, this process leads to loss of CO2. C4 carbon fixation evolved to circumvent photorespiration, but can occur only in certain plants native to very warm or tropical climates—corn, for example. Furthermore, RuBisCOs catalyzing the light-independent reactions of photosynthesis generally exhibit an improved specificity for CO2 relative to O2, in order to minimize the oxygenation reaction. This improved specificity evolved after RuBisCO incorporated a new protein subunit. Products The immediate products of one turn of the Calvin cycle are 2 glyceraldehyde-3-phosphate (G3P) molecules, 3 ADP, and 2 NADP+. (ADP and NADP+ are not really "products". They are regenerated and later used again in the light-dependent reactions.) Each G3P molecule is composed of 3 carbons. For the Calvin cycle to continue, RuBP (ribulose 1,5-bisphosphate) must be regenerated. So, 5 out of 6 carbons from the 2 G3P molecules are used for this purpose. Therefore, there is only 1 net carbon produced to play with for each turn. To create 1 surplus G3P requires 3 carbons, and therefore 3 turns of the Calvin cycle. To make one glucose molecule (which can be created from 2 G3P molecules) would require 6 turns of the Calvin cycle.
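The carbon, ATP, and NADPH bookkeeping described above can be restated as a short calculation. This is a minimal Python sketch that simply encodes the 3-ATP and 2-NADPH cost per CO2 fixed and the one-net-G3P-per-three-turns accounting from the text; it is not a model of the chemistry itself.

# Per CO2 fixed, the cycle consumes 3 ATP and 2 NADPH, and three turns
# (three CO2 fixed) are needed to export one net G3P molecule.
ATP_PER_CO2 = 3
NADPH_PER_CO2 = 2

def calvin_budget(co2_fixed):
    """Return (net G3P, ATP used, NADPH used) for a given number of CO2 molecules fixed."""
    net_g3p = co2_fixed / 3          # one net triose phosphate per three CO2
    return net_g3p, co2_fixed * ATP_PER_CO2, co2_fixed * NADPH_PER_CO2

# Three turns give one net G3P at a cost of 9 ATP and 6 NADPH,
# matching the overall equation quoted earlier.
print(calvin_budget(3))   # (1.0, 9, 6)

# Six turns give the 2 G3P needed to build one hexose such as glucose.
print(calvin_budget(6))   # (2.0, 18, 12)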
Surplus G3P can also be used to form other carbohydrates such as starch, sucrose, and cellulose, depending on what the plant needs. Light-dependent regulation These reactions do not occur in the dark or at night. There is a light-dependent regulation of the cycle enzymes, as the third step requires NADPH. There are two regulation systems at work when the cycle must be turned on or off: the thioredoxin/ferredoxin activation system, which activates some of the cycle enzymes; and the RuBisCo enzyme activation, active in the Calvin cycle, which involves its own activase. The thioredoxin/ferredoxin system activates the enzymes glyceraldehyde-3-P dehydrogenase, glyceraldehyde-3-P phosphatase, fructose-1,6-bisphosphatase, sedoheptulose-1,7-bisphosphatase, and ribulose-5-phosphate kinase, which are key points of the process. This happens when light is available, as the ferredoxin protein is reduced in the photosystem I complex of the thylakoid electron chain when electrons are circulating through it. Ferredoxin then binds to and reduces the thioredoxin protein, which activates the cycle enzymes by severing a cystine bond found in all these enzymes. This is a dynamic process, as the same bond is formed again by other proteins that deactivate the enzymes. The implication of this process is that the enzymes remain mostly activated by day and are deactivated in the dark when there is no more reduced ferredoxin available. The enzyme RuBisCo has its own, more complex activation process. It requires that a specific lysine amino acid be carbamylated to activate the enzyme. This lysine binds to RuBP and leads to a non-functional state if left uncarbamylated. A specific activase enzyme, called RuBisCo activase, helps this carbamylation process by removing one proton from the lysine and making the binding of the carbon dioxide molecule possible. Even then the RuBisCo enzyme is not yet functional, as it needs a magnesium ion bound to the lysine to function. This magnesium ion is released from the thylakoid lumen when the inner pH drops due to the active pumping of protons from the electron flow. RuBisCo activase itself is activated by increased concentrations of ATP in the stroma caused by its phosphorylation. References Further reading Rubisco Activase, from the Plant Physiology Online website Thioredoxins, from the Plant Physiology Online website External links The Biochemistry of the Calvin Cycle at Rensselaer Polytechnic Institute The Calvin Cycle and the Pentose Phosphate Pathway from Biochemistry, Fifth Edition by Jeremy M. Berg, John L. Tymoczko and Lubert Stryer. Published by W. H. Freeman and Company (2002). Biochemical reactions Carbohydrate metabolism Photosynthesis
Organic food
Organic food, ecological food, or biological food are foods and drinks produced by methods complying with the standards of organic farming. Standards vary worldwide, but organic farming features practices that cycle resources, promote ecological balance, and conserve biodiversity. Organizations regulating organic products may restrict the use of certain pesticides and fertilizers in the farming methods used to produce such products. Organic foods are typically not processed using irradiation, industrial solvents, or synthetic food additives. In the 21st century, the European Union, the United States, Canada, Mexico, Japan, and many other countries require producers to obtain special certification to market their food as organic. Although the produce of kitchen gardens may actually be organic, selling food with an organic label is regulated by governmental food safety authorities, such as the National Organic Program of the US Department of Agriculture (USDA) or the European Commission (EC). From an environmental perspective, fertilizing, overproduction, and the use of pesticides in conventional farming may negatively affect ecosystems, soil health, biodiversity, groundwater, and drinking water supplies. These environmental and health issues are intended to be minimized or avoided in organic farming. Demand for organic foods is primarily driven by consumer concerns for personal health and the environment, such as the detrimental environmental impacts of pesticides. From the perspective of science and consumers, there is insufficient evidence in the scientific and medical literature to support claims that organic food is either substantially safer or healthier to eat than conventional food. Organic agriculture has higher production costs and lower yields, higher labor costs, and higher consumer prices as compared to conventional farming methods. Meaning, history and origin of the term For the vast majority of its history, agriculture can be described as having been organic; only during the 20th century was a large supply of new products, generally deemed not organic, introduced into food production. The organic farming movement arose in the 1940s in response to the industrialization of agriculture. In 1939, Lord Northbourne coined the term organic farming in his book Look to the Land (1940), out of his conception of "the farm as organism", to describe a holistic, ecologically balanced approach to farming—in contrast to what he called chemical farming, which relied on "imported fertility" and "cannot be self-sufficient nor an organic whole". Early soil scientists also described the differences in soil composition when animal manures were used as "organic", because they contain carbon compounds, whereas superphosphates and Haber process nitrogen do not. Their respective use affects humus content of soil. This is different from the scientific use of the term "organic" in chemistry, which refers to a class of molecules that contain carbon, especially those involved in the chemistry of life. This class of molecules includes everything likely to be considered edible, as well as most pesticides and toxins too, therefore the term "organic" and, especially, the term "inorganic" (sometimes wrongly used as a contrast by the popular press) as they apply to organic chemistry is an equivocation fallacy when applied to farming, the production of food, and to foodstuffs themselves. 
Properly used in this agricultural science context, "organic" refers to the methods by which the food is grown and processed, not necessarily to the chemical composition of the food. Ideas that organic food could be healthier and better for the environment originated in the early days of the organic movement as a result of publications like the 1943 book The Living Soil and Farming and Gardening for Health or Disease (1945). In the industrial era, organic gardening reached a modest level of popularity in the United States in the 1950s. In the 1960s, environmentalists and the counterculture championed organic food, but it was only in the 1970s that a national marketplace for organic foods developed. Early consumers interested in organic food looked for food that was not chemically treated, was grown without unapproved pesticides, and was fresh or minimally processed. They mostly had to buy directly from growers. Later, "Know your farmer, know your food" became the motto of a new initiative instituted by the USDA in September 2009. Personal definitions of what constituted "organic" were developed through firsthand experience: by talking to farmers, seeing farm conditions, and observing farming activities. Small farms grew vegetables (and raised livestock) using organic farming practices, with or without certification, and individual consumers monitored the practices themselves. Small specialty health food stores and co-operatives were instrumental in bringing organic food to a wider audience. As demand for organic foods continued to increase, high-volume sales through mass outlets such as supermarkets rapidly replaced the direct farmer connection. Today, many large corporate farms have an organic division. However, for supermarket consumers, food production is not easily observable, and product labeling, like "certified organic", is relied upon. Government regulations and third-party inspectors are looked to for assurance. From the 1970s, interest in organic food grew with the rise of the environmental movement and was later also spurred by food-related health scares such as the concerns about Alar that arose in the mid-1980s. Legal definition Organic food production is distinct from private gardening. In the EU, organic farming and organic food are more commonly known as ecological or biological, or in short 'eco' and 'bio'. Currently, the European Union, the United States, Canada, Japan, and many other countries require producers to obtain special certification based on government-defined standards to market food as organic within their borders. In the context of these regulations, foods marketed as organic are produced in a way that complies with organic standards set by national governments and international organic industry trade organizations. In the United States, organic production is managed in accordance with the Organic Foods Production Act of 1990 (OFPA) and regulations in Title 7, Part 205 of the Code of Federal Regulations to respond to site-specific conditions by integrating cultural, biological, and mechanical practices that foster cycling of resources, promote ecological balance, and conserve biodiversity. If livestock are involved, the livestock must be reared with regular access to pasture and without the routine use of antibiotics or growth hormones. Processed organic food usually contains only organic ingredients. If non-organic ingredients are present, at least a certain percentage of the food's total plant and animal ingredients must be organic (95% in the United States, Canada, and Australia). 
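The percentage thresholds mentioned here, and the US labeling tiers listed under Standards below, lend themselves to a simple illustration. The following is a minimal, illustrative sketch rather than an official certification tool: the function name is my own, the thresholds are those quoted in this article for the United States (other countries use different cut-offs), and real certification depends on far more than a single percentage.

```python
def us_organic_label_tier(percent_organic: float) -> str:
    """Map the percentage of organic ingredients in a processed food to the
    US labeling tier described in this article.  Illustrative only: actual
    certification also covers handling, prohibited substances, and record
    keeping, none of which is modeled here."""
    if percent_organic >= 100:
        return '"100% Organic"'
    if percent_organic >= 95:
        return '"Organic"'
    if percent_organic >= 70:
        return '"Made With Organic Ingredients"'
    return "below 70% organic - organic ingredients may only appear in the ingredient panel"


if __name__ == "__main__":
    for pct in (100, 96, 80, 40):
        print(f"{pct:>3}% organic ingredients -> {us_organic_label_tier(pct)}")
```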
Foods claiming to be organic must be free of artificial food additives, and are often processed with fewer artificial methods, materials and conditions, such as chemical ripening, food irradiation, solvents such as hexane, and genetically modified ingredients. Pesticides are allowed as long as they are not synthetic. However, under US federal organic standards, if pests and weeds are not controllable through management practices, nor via organic pesticides and herbicides, "a substance included on the National List of synthetic substances allowed for use in organic crop production may be applied to prevent, suppress, or control pests, weeds, or diseases". Several groups have called for organic standards to prohibit nanotechnology on the basis of the precautionary principle in light of the unknown risks of nanotechnology. The use of nanotechnology-based products in the production of organic food is prohibited in some jurisdictions (Canada, the UK, and Australia) and is unregulated in others. To be certified organic, products must be grown and manufactured in a manner that adheres to standards set by the country they are sold in:
Australia: NASAA Organic Standard
Canada: Organic Products Regulations
European Union: EU-Eco-regulation
Sweden: KRAV
United Kingdom: DEFRA
Poland: Association of Polish Ecology
Norway: Debio Organic certification
India: National Program for Organic Production (NPOP)
Indonesia: BIOCert, run by the Agricultural Ministry of Indonesia
Japan: JAS Standards
Mexico: Consejo Nacional de Producción Orgánica, department of Sagarpa
New Zealand: there are three bodies: BioGro, AsureQuality, and OFNZ
United States: National Organic Program (NOP)
Standards In the United States, there are four different levels or categories for organic labeling:
"100% Organic": This means that all ingredients are produced organically. It also may have the USDA seal.
"Organic": At least 95% of the ingredients are organic.
"Made With Organic Ingredients": Contains at least 70% organic ingredients.
"Less Than 70% Organic Ingredients": Three of the organic ingredients must be listed under the ingredient section of the label.
In the U.S., the food label "natural" or "all natural" does not mean that the food was produced and processed organically. Environmental sustainability From an environmental perspective, fertilizing, overproduction and the use of pesticides in conventional farming have caused, and are causing, enormous damage worldwide to local ecosystems, soil health, biodiversity, groundwater and drinking water supplies, and sometimes farmers' health and fertility. Organic farming typically reduces some environmental impacts relative to conventional farming, but the scale of the reduction can be difficult to quantify and varies depending on farming methods. In some cases, reducing food waste and dietary changes might provide greater benefits. A 2020 study at the Technical University of Munich found that the greenhouse gas emissions of organically farmed plant-based food were lower than those of conventionally farmed plant-based food. The greenhouse gas costs of organically produced meat were approximately the same as those of non-organically produced meat. However, the same paper noted that a shift from conventional to organic practices would likely be beneficial for long-term efficiency and ecosystem services, and would probably improve soil over time. 
A 2019 life-cycle assessment study found that converting the total agricultural sector (both crop and livestock production) for England and Wales to organic farming methods would result in a net increase in greenhouse gas emissions as increased overseas land use for production and import of crops would be needed to make up for lower organic yields domestically. Health and safety There is little scientific evidence of benefit or harm to human health from a diet high in organic food, and conducting any sort of rigorous experiment on the subject is very difficult. A 2012 meta-analysis noted that "there have been no long-term studies of health outcomes of populations consuming predominantly organic versus conventionally produced food controlling for socioeconomic factors; such studies would be expensive to conduct." A 2009 meta-analysis noted that "most of the included articles did not study direct human health outcomes. In ten of the included studies (83%), a primary outcome was the change in antioxidant activity. Antioxidant status and activity are useful biomarkers but do not directly equate to a health outcome. Of the remaining two articles, one recorded proxy-reported measures of atopic manifestations as its primary health outcome, whereas the other article examined the fatty acid composition of breast milk and implied possible health benefits for infants from the consumption of different amounts of conjugated linoleic acids from breast milk." In addition, as discussed above, difficulties in accurately and meaningfully measuring chemical differences between organic and conventional food make it difficult to extrapolate health recommendations based solely on chemical analysis. According to a newer review, studies found adverse effects of certain pesticides on children's cognitive development at current levels of exposure. Many pesticides show neurotoxicity in laboratory animal models and some are considered to cause endocrine disruption. As of 2012, the scientific consensus is that while "consumers may choose to buy organic fruit, vegetables and meat because they believe them to be more nutritious than other food.... the balance of current scientific evidence does not support this view." The evidence of beneficial health effects of organic food consumption is scarce, which has led researchers to call for more long-term studies. In addition, studies that suggest that organic foods may be healthier than conventional foods face significant methodological challenges, such as the correlation between organic food consumption and factors known to promote a healthy lifestyle. When the American Academy of Pediatrics reviewed the literature on organic foods in 2012, they found that "current evidence does not support any meaningful nutritional benefits or deficits from eating organic compared with conventionally grown foods, and there are no well-powered human studies that directly demonstrate health benefits or disease protection as a result of consuming an organic diet." Prevalent use of antibiotics in livestock used in non-organic meat is a key driver of antibiotic resistance. Consumer safety Pesticide exposure Claims of improved safety of organic food have largely focused on pesticide residues. 
These concerns are driven by the facts that "(1) acute, massive exposure to pesticides can cause significant adverse health effects; (2) food products have occasionally been contaminated with pesticides, which can result in acute toxicity; and (3) most, if not all, commercially purchased food contains trace amounts of agricultural pesticides." However, as is frequently noted in the scientific literature: "What does not follow from this, however, is that chronic exposure to the trace amounts of pesticides found in food results in demonstrable toxicity. This possibility is practically impossible to study and quantify;" therefore firm conclusions about the relative safety of organic foods have been hampered by the difficulty in proper study design and relatively small number of studies directly comparing organic food to conventional food. Additionally, the Carcinogenic Potency Project, which is a part of the US EPA's Distributed Structure-Searchable Toxicity (DSSTox) Database Network, has been systemically testing the carcinogenicity of chemicals, both natural and synthetic, and building a publicly available database of the results for the past ~30 years. Their work attempts to fill in the gaps in our scientific knowledge of the carcinogenicity of all chemicals, both natural and synthetic, as the scientists conducting the Project described in the journal, Science, in 1992: Toxicological examination of synthetic chemicals, without similar examination of chemicals that occur naturally, has resulted in an imbalance in both the data on and the perception of chemical carcinogens. Three points that we have discussed indicate that comparisons should be made with natural as well as synthetic chemicals. 1) The vast proportion of chemicals that humans are exposed to occur naturally. Nevertheless, the public tends to view chemicals as only synthetic and to think of synthetic chemicals as toxic despite the fact that every natural chemical is also toxic at some dose. The daily average exposure of Americans to burnt material in the diet is ~2000 mg, and exposure to natural pesticides (the chemicals that plants produce to defend themselves) is ~1500 mg. In comparison, the total daily exposure to all synthetic pesticide residues combined is ~0.09 mg. Thus, we estimate that 99.99% of the pesticides humans ingest are natural. Despite this enormously greater exposure to natural chemicals, 79% (378 out of 479) of the chemicals tested for carcinogenicity in both rats and mice are synthetic (that is, do not occur naturally). 2) It has often been wrongly assumed that humans have evolved defenses against the natural chemicals in our diet but not against the synthetic chemicals. However, defenses that animals have evolved are mostly general rather than specific for particular chemicals; moreover, defenses are generally inducible and therefore protect well from low doses of both synthetic and natural chemicals. 3) Because the toxicology of natural and synthetic chemicals is similar, one expects (and finds) a similar positivity rate for carcinogenicity among synthetic and natural chemicals. The positivity rate among chemicals tested in rats and mice is ~50%. Therefore, because humans are exposed to so many more natural than synthetic chemicals (by weight and by number), humans are exposed to an enormous background of rodent carcinogens, as defined by high-dose tests on rodents. 
We have shown that even though only a tiny proportion of natural pesticides in plant foods have been tested, the 29 that are rodent carcinogens among the 57 tested, occur in more than 50 common plant foods. It is probable that almost every fruit and vegetable in the supermarket contains natural pesticides that are rodent carcinogens. While studies have shown via chemical analysis, as discussed above, that organically grown fruits and vegetables have significantly lower pesticide residue levels, the significance of this finding on actual health risk reduction is debatable as both conventional foods and organic foods generally have pesticide levels (maximum residue limits) well below government established guidelines for what is considered safe. This view has been echoed by the U.S. Department of Agriculture and the UK Food Standards Agency. A study published by the National Research Council in 1993 determined that for infants and children, the major source of exposure to pesticides is through diet. A study published in 2006 by Lu et al. measured the levels of organophosphorus pesticide exposure in 23 school children before and after replacing their diet with organic food. In this study, it was found that levels of organophosphorus pesticide exposure dropped from negligible levels to undetectable levels when the children switched to an organic diet, the authors presented this reduction as a significant reduction in risk. The conclusions presented in Lu et al. were criticized in the literature as a case of bad scientific communication.Alex Avery (2006) Organic Diets and Children’s Health Environ Health Perspect.114(4) A210–A211. More specifically, claims related to pesticide residue of increased risk of infertility or lower sperm counts have not been supported by the evidence in the medical literature. Likewise, the American Cancer Society (ACS) has stated their official position that "whether organic foods carry a lower risk of cancer because they are less likely to be contaminated by compounds that might cause cancer is largely unknown." Reviews have noted that the risks from microbiological sources or natural toxins are likely to be much more significant than short term or chronic risks from pesticide residues. Microbiological contamination Organic farming has a preference for using manure as fertilizer, compared to conventional farming in general. This practice seems to imply an increased risk of microbiological contamination, such as E. coli O157:H7, from organic food consumption, but reviews have found little evidence that the actual incidence of outbreaks can be positively linked to organic food production. The 2011 Germany E. coli O104:H4 outbreak, however, was blamed on organically farmed fenugreek sprouts. Public perception There is a widespread public belief that organic food is safer, more nutritious, and better tasting than conventional food, which has largely contributed to the development of an organic food culture. Consumers purchase organic foods for different reasons, including concerns about the effects of conventional farming practices on the environment, human health, and animal welfare. While there may be some differences in the nutrient and antinutrient contents of organically and conventionally produced food, the variable nature of food production, shipping, storage, and handling makes it difficult to generalize results.Blair, Robert. (2012). Organic Production and Food Quality: A Down to Earth Analysis. Wiley-Blackwell, Oxford, UK. Pages 72, 223, 225. 
Claims that "organic food tastes better" are generally not supported by tests, but consumers often perceive organic food produce like fruits and vegetables to taste better. The appeal of organic food varies with demographic group and attitudinal characteristics. Several high quality surveys find that income, educational level, physical activity, dietary habits and number of children are associated with the level of organic food consumption. USA research has found that women, young adults, liberals, and college graduates were significantly more likely to buy organic food regularly when compared to men, older age groups, people of different political affiliations, and less educated individuals. Income level and race/ethnicity did not appear to affect interest in organic foods in this same study. Furthermore, individuals who are only moderately-religious were more likely to purchase organic foods than individuals who were less religious or highly-religious. Additionally, the pursuit of organic foods was positively associated with valuing vegetarian/vegan food options, "natural" food options, and USA-made food options. Organic food may also be more appealing to people who follow other restricted diets. One study found that individuals who adhered to vegan, vegetarian, or pescetarian diet patterns incorporated substantially more organic foods in their diets when compared to omnivores. The most important reason for purchasing organic foods seems to be beliefs about the products' health-giving properties and higher nutritional value. These beliefs are promoted by the organic food industry, and have fueled increased demand for organic food despite higher prices and difficulty in confirming these claimed benefits scientifically.Dangour AD et al. (2009) Nutritional quality of organic foods: a systematic review The American Journal of Clinical Nutrition 92(1) 203–210 Organic labels also stimulate the consumer to view the product as having more positive nutritional value. Psychological effects such as the "halo" effect are also important motivating factors in the purchase of organic food. In China the increasing demand for organic products of all kinds, and in particular milk, baby food and infant formula, has been "spurred by a series of food scares, the worst being the death of six children who had consumed baby formula laced with melamine" in 2009 and the 2008 Chinese milk scandal, making the Chinese market for organic milk the largest in the world as of 2014. A Pew Research Center survey in 2012 indicated that 41% of Chinese consumers thought of food safety as a very big problem, up by three times from 12% in 2008. A 2020 study on marketing processed organic foods shows that, after much growth in the fresh organic foods sector, consumers have started to buy processed organic foods, which they sometime perceive to be just as healthy or even healthier than the non-organic version – depending on the marketing message. Taste There is no good evidence that organic food tastes better than its non-organic counterparts. There is evidence that some organic fruit is drier than conventionally grown fruit; a slightly drier fruit may also have a more intense flavor due to the higher concentration of flavoring substances. 
Some foods which are picked when unripe, such as bananas, are cooled to prevent ripening while they are shipped to market, and then are induced to ripen quickly by exposing them to propylene or ethylene, chemicals produced by plants to induce their own ripening; as flavor and texture changes during ripening, this process may affect those qualities of the treated fruit.Fresh Air, National Public Radio. 30 August 2011 Transcript: Bananas: The Uncertain Future Of A Favorite Fruit Chemical composition With respect to chemical differences in the composition of organically grown food compared with conventionally grown food, studies have examined differences in nutrients, antinutrients, and pesticide residues. These studies generally suffer from confounding variables, and are difficult to generalize due to differences in the tests that were done, the methods of testing, and because the vagaries of agriculture affect the chemical composition of food; these variables include variations in weather (season to season as well as place to place); crop treatments (fertilizer, pesticide, etc.); soil composition; the cultivar used, and in the case of meat and dairy products, the parallel variables in animal production. Treatment of the foodstuffs after initial gathering (whether milk is pasteurized or raw), the length of time between harvest and analysis, as well as conditions of transport and storage, also affect the chemical composition of a given item of food. Additionally, there is evidence that organic produce is drier than conventionally grown produce; a higher content in any chemical category may be explained by higher concentration rather than in absolute amounts. Nutrients Many people believe that organic foods have higher content of nutrients and thus are healthier than conventionally produced foods. However, scientists have not been equally convinced that this is the case as the research conducted in the field has not shown consistent results. A 2009 systematic review found that organically produced foodstuffs are not richer in vitamins and minerals than conventionally produced foodstuffs. This systematic review found a lower nitrogen and higher phosphorus content in organic produced compared to conventionally grown foodstuffs. Content of vitamin C, calcium, potassium, total soluble solids, copper, iron, nitrates, manganese, and sodium did not differ between the two categories. A 2012 survey of the scientific literature did not find significant differences in the vitamin content of organic and conventional plant or animal products, and found that results varied from study to study. Produce studies reported on ascorbic acid (vitamin C) (31 studies), beta-carotene (a precursor for vitamin A) (12 studies), and alpha-tocopherol (a form of vitamin E) (5 studies) content; milk studies reported on beta-carotene (4 studies) and alpha-tocopherol levels (4 studies). Few studies examined vitamin content in meats, but these found no difference in beta-carotene in beef, alpha-tocopherol in pork or beef, or vitamin A (retinol) in beef. The authors analyzed 11 other nutrients reported in studies of produce. A 2011 literature review found that organic foods had a higher micronutrient content overall than conventionally produced foods. Similarly, organic chicken contained higher levels of omega-3 fatty acids than conventional chicken. The authors found no difference in the protein or fat content of organic and conventional raw milk. 
A 2016 systematic review and meta-analysis found that organic meat had comparable or slightly lower levels of saturated fat and monounsaturated fat as conventional meat, but higher levels of both overall and n-3 polyunsaturated fatty acids. Another meta-analysis published the same year found no significant differences in levels of saturated and monounsaturated fat between organic and conventional milk, but significantly higher levels of overall and n-3 polyunsaturated fatty acids in organic milk than in conventional milk. Anti-nutrients The amount of nitrogen content in certain vegetables, especially green leafy vegetables and tubers, has been found to be lower when grown organically as compared to conventionally. When evaluating environmental toxins such as heavy metals, the USDA has noted that organically raised chicken may have lower arsenic levels. Early literature reviews found no significant evidence that levels of arsenic, cadmium or other heavy metals differed significantly between organic and conventional food products. However, a 2014 review found lower concentrations of cadmium, particularly in organically grown grains. Phytochemicals A 2014 meta-analysis of 343 studies on phytochemical composition found that organically grown crops had lower cadmium and pesticide residues, and 17% higher concentrations of polyphenols than conventionally grown crops. Concentrations of phenolic acids, flavanones, stilbenes, flavones, flavonols, and anthocyanins were elevated, with flavanones being 69% higher. Studies on phytochemical composition of organic crops have numerous deficiencies, including absence of standardized measurements and poor reporting on measures of variability, duplicate or selective reporting of data, publication bias, lack of rigor in studies comparing pesticide residue levels in organic and conventional crops, the geographical origin of samples, and inconsistency of farming and post-harvest methods. Pesticide residues The amount of pesticides that remain in or on food is called pesticide residue. In the United States, before a pesticide can be used on a food crop, the U.S. Environmental Protection Agency must determine whether that pesticide can be used without posing a risk to human health. A 2012 meta-analysis determined that detectable pesticide residues were found in 7% of organic produce samples and 38% of conventional produce samples. This result was statistically heterogeneous, potentially because of the variable level of detection used among these studies. Only three studies reported the prevalence of contamination exceeding maximum allowed limits; all were from the European Union. A 2014 meta-analysis found that conventionally grown produce was four times more likely to have pesticide residue than organically grown crops. The American Cancer Society has stated that no evidence exists that the small amount of pesticide residue found on conventional foods will increase the risk of cancer, although it recommends thoroughly washing fruits and vegetables. They have also stated that there is no research to show that organic food reduces cancer risk compared to foods grown with conventional farming methods. The Environmental Protection Agency maintains strict guidelines on the regulation of pesticides by setting a tolerance on the amount of pesticide residue allowed to be in or on any particular food. Although some residue may remain at the time of harvest, residue tend to decline as the pesticide breaks down over time. 
In addition, as the commodities are washed and processed prior to sale, the residues often diminish further. Bacterial contamination A 2012 meta-analysis determined that prevalence of E. coli contamination was not statistically significant (7% in organic produce and 6% in conventional produce). Differences in the prevalence of bacterial contamination between organic and conventional animal products were also statistically insignificant. Organic meat production requirements United States Organic meat certification in the United States requires farm animals to be raised according to USDA organic regulations throughout their lives. These regulations require that livestock are fed certified organic food that contains no animal byproducts. Further, organic farm animals can receive no growth hormones or antibiotics, and they must be raised using techniques that protect native species and other natural resources. Irradiation and genetic engineering are not allowed with organic animal production. One of the major differences in organic animal husbandry protocol is the "pasture rule": minimum requirements for time on pasture do vary somewhat by species and between the certifying agencies, but the common theme is to require as much time on pasture as possible and reasonable. Economics Organic agriculture has higher potential costs due to lower yields and higher labor costs, leading to higher consumer prices. Demand for organic foods is primarily driven by concerns for personal health and for the environment. Global sales for organic foods climbed by more than 170 percent since 2002 reaching more than $63 billion in 2011 while certified organic farmland remained relatively small at less than 2 percent of total farmland under production, increasing in OECD and EU countries (which account for the majority of organic production) by 35 percent for the same time period. Organic products typically cost 10% to 50% more than similar conventionally produced products, to several times the price. Processed organic foods vary in price when compared to their conventional counterparts. While organic food accounts for about 1% of total food production worldwide, the organic food sales market is growing rapidly with between 5 and 10 percent of the food market share in the United States according to the Organic Trade Association, significantly outpacing sales growth volume in dollars of conventional food products. World organic food sales jumped from US$23 billion in 2002 to $63 billion in 2011. Asia Production and consumption of organic products is rising rapidly in Asia, and both China and India are becoming global producers of organic crops and a number of countries, particularly China and Japan, also becoming large consumers of organic food and drink. The disparity between production and demand, is leading to a two-tier organic food industry, typified by significant and growing imports of primary organic products such as dairy and beef from Australia, Europe, New Zealand and the United States. China China's organic food production was originally for exportation in the early 2000s. Due to the food safety crisis since the late 2000s, China's domestic market outweighed the exportation market. The organic food production in China involves diverse players. Besides certified organic food production mainly conducted by private organic food companies, there are also non-certified organic farming practiced by entrepreneurs and civil society organizations. 
These initiatives have unique marketing channels such as ecological farmers' markets and community-supported agriculture emerging in and around major Chinese cities. China's domestic organic market is the fourth largest in the world. The Chinese Organic Food Development Center estimated domestic sales of organic food products to be around US$500 million per annum as of 2013. This was predicted to increase by 30 percent to 50 percent in 2014. As of 2015, organic foods made up about 1% of the total Chinese food market. China is the world's biggest infant formula market with $12.4 billion in sales annually; of this, organic infant formula and baby food accounted for approximately 5.5 per cent of sales in 2011. Australian organic infant formula and baby food producer Bellamy's Organic reported that their sales in this market grew 70 per cent annually over the period 2008–2013, while Organic Dairy Farmers of Australia reported that exports of long-life organic milk to China had grown by 20 to 30 per cent per year over the same period. Sri Lanka In April 2021, Sri Lanka started its "100% organic farming" program, banning imports of chemical fertilisers, pesticides and herbicides. In November 2021, it was announced that the country would lift the import ban; the reversal was attributed to the difficulty of changing widely applied practices and education systems at short notice, and to economic pressures and, by extension, food security concerns, protests and high food costs. The effort to become the first completely organic farming nation was further challenged by the effects of the COVID-19 pandemic. Bhutan In 2013 the government of Bhutan announced that the country would become the first in the world with 100% organic farming and started a program for qualification. This program is being supported by the International Federation of Organic Agriculture Movements (IFOAM). A 2021 news report found that "globally, only Bhutan has a complete ban on synthetic pesticides". A 2018 study found that "current organic by default farming practices in Bhutan are still underdeveloped". Japan In 2010, the Japanese organic market was estimated to be around $1.3 billion. North America United States Organic food is the fastest growing sector of the American food industry. In 2005 the organic food market was worth only about US$13 billion. By 2012 the total size of the organic food market in the United States was about $30 billion (out of a total market for organic and natural consumer products of about $81 billion) (Carl Edstrom of IRI and Kathryn Peters of SPINS, October 2013, Natural / Organic Consumer Segmentation, A Total Market Perspective). In 2020 the organic food market was worth over $56 billion. Organic food sales grew by 17 to 20 percent a year in the early 2000s, while sales of conventional food grew only about 2 to 3 percent a year. The US organic market grew 9.5% in 2011, breaking the $30bn barrier for the first time, and continued to outpace sales of non-organic food. In 2003 organic products were available in nearly 20,000 natural food stores and 73% of conventional grocery stores. Organic products accounted for 3.7% of total food and beverage sales, and 11.4% of all fruit and vegetable sales, in the year 2009. Many independent organic food processors in the USA have been acquired by multinational firms. For a product to become USDA organic certified, the farmer cannot plant genetically modified seeds and livestock cannot eat genetically modified plants. 
Farmers must provide substantial evidence showing there was no genetic modification involved in the operation. Canada Organic food sales surpassed $1 billion in 2006, accounting for 0.9% of food sales in Canada. By 2012, Canadian organic food sales reached $3 billion. British Columbians account for 13% of the Canadian population, but purchased 26% of the organic food sold in Canada in 2006. Europe Denmark In 2012, organic products accounted for 7.8% of the total retail consumption market in Denmark, the highest national market share in the world. Many public institutions have voluntarily committed themselves to buy some organic food, and in Copenhagen 75% of all food served in public institutions is organic. A governmental action plan initiated in 2012–2014 aims at 60% organic food in all public institutions across the country before 2020. In 1987, the first Danish Action Plan was implemented; it was meant to support and stimulate farmers to switch from conventional to organic food production systems. Since then Denmark has constantly worked on further developing the market by promoting organic food and keeping prices low in comparison to conventional food products, offering farmers subsidies and extra support if they choose to produce organic food. The Danish model was then, and remains today, the benchmark for organic food policy and certification of organic food worldwide. The new European organic food label and organic food policy were developed based on the 1987 Danish model. Austria In 2011, 7.4% of all food products sold in Austrian supermarkets (including discount stores) were organic. In 2007, 8,000 different organic products were available. Italy Since 2000, the use of some organic food has been compulsory in Italian schools and hospitals. A 2002 law of the Emilia-Romagna region, implemented in 2005, explicitly requires that the food in nursery and primary schools (from 3 months to 10 years) must be 100% organic, and the food in meals at schools, universities and hospitals must be at least 35% organic. Poland In 2005, 7 percent of Polish consumers bought food that was produced according to the EU-Eco-regulation. The value of the organic market was estimated at 50 million euros (2006). Romania 70%–80% of the local organic production, amounting to 100 million euros in 2010, is exported. The organic products market grew to 50 million euros in 2010. Switzerland About 11 per cent of Swiss farms are organic. Bio Suisse, the Swiss organic producers' association, provides guidelines for organic farmers. Ukraine During 2022, despite the full-scale war, Ukraine exported 245,600 metric tons of organic products worth USD 219 million to 36 countries around the world, which is almost the same as in 2021 (261,000 metric tonnes, USD 222 million). 95% of organic products from Ukraine were exported to European countries. Most products were exported by rail and road. Export volumes by sea decreased, and air transportation for export from Ukraine became impossible. The largest importing countries of Ukrainian organic products in 2022 were the Netherlands, Germany, Austria, Switzerland, Poland, Lithuania, the United States, Italy, the United Kingdom, and the Czech Republic. Ukrainian organic producers also exported to some countries in Asia and North America. According to the European Commission's report, in 2022 Ukraine ranked 3rd out of 125 countries by volume of organic products imported to the EU. 
In 2022, the EU imported 2.73 million tonnes of organic agri-food products, including 219 thousand tonnes (8%) from Ukraine, which is 85% of total Ukrainian organic exports. Ukraine thus held leading positions among the countries exporting to the EU, having exported 93 thousand tonnes (77.1%) of cereals (excluding wheat and rice) and 20 thousand tonnes (22%) of organic oilseeds (excluding soybeans). In Ukraine, organic production is regulated in accordance with the Law of Ukraine On Basic Principles and Requirements for Organic Production, Circulation and Labelling of Organic Products. The majority of Ukrainian producers, processing units, and traders are also certified under international organic legislation (e.g. the EU Organic Regulations, NOP, etc.). The Order on the Approval of the State Logo for Organic Products was approved by the Ministry of Agrarian Policy and Food of Ukraine in 2019. The state logo for organic products is registered as a trademark and owned by the Ministry of Agrarian Policy and Food of Ukraine. The requirements for proper use of the Ukrainian state logo for organic products and labelling are described on the website of the Ministry of Agrarian Policy and Food of Ukraine as well as in the Methodical Recommendations on the Use of the State Logo for Organic Products. United Kingdom Organic food sales increased from just over £100 million in 1993/94 to £1.21 billion in 2004 (an 11% increase on 2003). In 2010, UK sales of organic products fell 5.9% to £1.73 billion. 86% of households buy organic products, the most popular categories being dairy (30.5% of sales) and fresh fruits and vegetables (23.2% of sales). As of 2011, 4.2% of UK farmland is organically managed. Latin America Cuba After the collapse of the Soviet Union in 1991, agricultural inputs that had previously been purchased from Eastern bloc countries were no longer available in Cuba, and many Cuban farms converted to organic methods out of necessity. Consequently, organic agriculture is a mainstream practice in Cuba, while it remains an alternative practice in most other countries. Although some products called organic in Cuba would not satisfy certification requirements in other countries (crops may be genetically modified, for example), Cuba exports organic citrus and citrus juices to EU markets that meet EU organic standards. Cuba's forced conversion to organic methods may position the country to be a global supplier of organic products. See also Agroecology Genetically modified food List of diets List of organic food topics List of foods Natural food Organic clothing Permaculture Regenerative agriculture Soil Association Whole food Organic food culture Silent Spring, a book about pesticides and the environment by Rachel Carson References Further reading External links A World Map of Organic Agriculture UK Organic certification and standards India – National Program for Organic Production (NPOP) Product certification Diets Environmental controversies
Paradox of the plankton
In aquatic biology, the paradox of the plankton describes the situation in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle, which holds that when two species compete for the same resource, one will be driven to extinction. Ecological paradox The paradox of the plankton results from the clash between the observed diversity of plankton and the competitive exclusion principle, also known as Gause's law, which states that, when two species compete for the same resource, ultimately only one will persist and the other will be driven to extinction. Coexistence between two such species is impossible because the dominant one will inevitably deplete the shared resources, thus decimating the inferior population. Phytoplankton life is diverse at all phylogenetic levels despite the limited range of resources (e.g. light, nitrate, phosphate, silicic acid, iron) for which they compete amongst themselves. The paradox of the plankton was originally described in 1961 by G. Evelyn Hutchinson, who proposed that the paradox could be resolved by factors such as vertical gradients of light or turbulence, symbiosis or commensalism, differential predation, or constantly changing environmental conditions. Later studies found that the paradox can be resolved by factors such as: zooplankton grazing pressure; chaotic fluid motion; size-selective grazing; spatio-temporal heterogeneity; bacterial mediation; or environmental fluctuations. In general, researchers suggest that ecological and environmental factors continually interact such that the planktonic habitat never reaches an equilibrium for which a single species is favoured. While it was long assumed that turbulence disrupts plankton patches at spatial scales less than a few metres, researchers using small-scale analysis of plankton distribution found that these exhibited patches of aggregation (on the order of 10 cm) that had sufficient lifetimes (more than 10 minutes) to enable plankton grazing, competition, and infection. Resolution by viral lysis One potential resolution to the paradox is the control of plankton populations by marine lytic viruses. Marine viruses play an important role in bacteria and plankton ecology. They are a significant component of biogeochemical cycling and horizontal gene transfer in both bacterial and plankton communities. Viruses are the most abundant biological entities in the ocean, and have the capacity to deplete host populations very rapidly. Marine viruses infect specific host species, and therefore an abundance of a virus can quickly and effectively alter the structure of the phytoplankton and bacterial communities. Via the lytic cycle, a virus encounters a host and reproduces until the cell bursts, releasing new virus particles. Viruses can also enter a lysogenic cycle, in which the virus integrates its DNA into the host genome. When a phytoplankton species enters a bloom period, cell concentration increases and many viral targets suddenly become available. One explanation for the paradox of the plankton is the "boom-and-bust dynamics" hypothesis, also called "kill the winner". In a phytoplankton bloom, an individual species multiplies rapidly in ideal conditions, which increases its cell concentration in an area, outcompeting other phytoplankton. This "boom" in host cells creates an opportunity for rapid infection by viruses, leading to a "bust" in which the phytoplankton population rapidly diminishes. 
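The "kill the winner" dynamic can be made concrete with a deliberately simplified host-virus model: one blooming phytoplankton species grows logistically while a lytic virus that infects it multiplies in proportion to encounters with its host. This is an illustrative sketch only; it is not a model taken from the literature cited here, and every parameter value below is an arbitrary assumption chosen to show the boom-and-bust pattern.

```python
# Minimal "kill the winner" sketch: a blooming phytoplankton host (P)
# grows logistically while a lytic virus (V) infects it.  As the host
# booms, viral predation follows shortly afterwards and knocks the host
# back down, freeing resources for other species.  All parameters are
# arbitrary illustrative values, not fitted to data.

r, K = 1.0, 1.0e6        # host growth rate (per day) and carrying capacity (cells/mL)
phi = 1.0e-7             # adsorption (infection) rate constant (mL per day)
burst, m = 50.0, 0.5     # burst size (virions per lysed cell), viral decay rate (per day)

P, V = 1.0e3, 1.0e2      # initial host and free-virus concentrations
dt, days = 0.001, 60.0
steps = int(days / dt)

for k in range(1, steps + 1):
    dP = r * P * (1.0 - P / K) - phi * P * V   # logistic growth minus losses to lysis
    dV = burst * phi * P * V - m * V           # virions released by lysis minus decay
    P = max(P + dt * dP, 0.0)
    V = max(V + dt * dV, 0.0)
    if k % int(5.0 / dt) == 0:                 # report every 5 simulated days
        print(f"day {k * dt:5.1f}   host {P:12.1f}   virus {V:14.1f}")
```

In a multi-species version of the same idea, each host species has its own virus, so whichever species "wins" a bloom is the first to be knocked back, which is one way the coexistence described below can be maintained.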
Such a bust creates a large gap in the local phytoplankton ecology and allows other species to fill in and continue growing. Population control by viruses of this kind creates temporal and spatial diversity in phytoplankton communities. Long-term control results because the virus prevents the formerly dominant species from booming during future bloom events. See also Unified neutral theory of biodiversity References External links The Paradox of the Plankton by Klaus Rohde Biological interactions Biological oceanography Aquatic ecology Mathematical and theoretical biology Paradoxes Planktology
Protein primary structure
Protein primary structure is the linear sequence of amino acids in a peptide or protein. By convention, the primary structure of a protein is reported starting from the amino-terminal (N) end to the carboxyl-terminal (C) end. Protein biosynthesis is most commonly performed by ribosomes in cells. Peptides can also be synthesized in the laboratory. Protein primary structures can be directly sequenced, or inferred from DNA sequences. Formation Biological Amino acids are polymerised via peptide bonds to form a long backbone, with the different amino acid side chains protruding along it. In biological systems, proteins are produced during translation by a cell's ribosomes. Some organisms can also make short peptides by non-ribosomal peptide synthesis, which often uses amino acids other than the standard 20, and may be cyclised, modified and cross-linked. Chemical Peptides can be synthesised chemically via a range of laboratory methods. Chemical methods typically synthesise peptides in the opposite order (starting at the C-terminus) to biological protein synthesis (starting at the N-terminus). Notation Protein sequence is typically notated as a string of letters, listing the amino acids starting at the amino-terminal end through to the carboxyl-terminal end. Either a three-letter code or a single-letter code can be used to represent the 20 naturally occurring amino acids, as well as mixtures or ambiguous amino acids (similar to nucleic acid notation). Peptides can be directly sequenced, or inferred from DNA sequences. Large sequence databases now exist that collate known protein sequences. Modification In general, polypeptides are unbranched polymers, so their primary structure can often be specified by the sequence of amino acids along their backbone. However, proteins can become cross-linked, most commonly by disulfide bonds, and the primary structure then also requires specifying the cross-linking atoms, e.g., specifying the cysteines involved in the protein's disulfide bonds. Other crosslinks include desmosine. Isomerisation The chiral centers of a polypeptide chain can undergo racemization. Although racemization does not change the sequence, it does affect its chemical properties. In particular, the L-amino acids normally found in proteins can spontaneously isomerize at the α-carbon to form D-amino acids, which cannot be cleaved by most proteases. Additionally, proline can form stable cis-isomers at the peptide bond. Post-translational modification Additionally, the protein can undergo a variety of post-translational modifications, which are briefly summarized here. The N-terminal amino group of a polypeptide can be modified covalently, e.g., acetylation The positive charge on the N-terminal amino group may be eliminated by changing it to an acetyl group (N-terminal blocking). formylation The N-terminal methionine usually found after translation has an N-terminus blocked with a formyl group. This formyl group (and sometimes the methionine residue itself, if followed by Gly or Ser) is removed by the enzyme deformylase. pyroglutamate An N-terminal glutamine can attack itself, forming a cyclic pyroglutamate group. myristoylation Similar to acetylation. Instead of a simple acetyl group, the myristoyl group has a tail of 14 hydrophobic carbons, which make it ideal for anchoring proteins to cellular membranes. The C-terminal carboxylate group of a polypeptide can also be modified, e.g., amination The C-terminus can also be blocked (thus neutralizing its negative charge) by amination. 
glycosyl phosphatidylinositol (GPI) attachment Glycosyl phosphatidylinositol (GPI) is a large, hydrophobic phospholipid prosthetic group that anchors proteins to cellular membranes. It is attached to the polypeptide C-terminus through an amide linkage that then connects to ethanolamine, thence to sundry sugars and finally to the phosphatidylinositol lipid moiety. Finally, the peptide side chains can also be modified covalently, e.g., phosphorylation Aside from cleavage, phosphorylation is perhaps the most important chemical modification of proteins. A phosphate group can be attached to the sidechain hydroxyl group of serine, threonine and tyrosine residues, adding a negative charge at that site and producing an unnatural amino acid. Such reactions are catalyzed by kinases and the reverse reaction is catalyzed by phosphatases. The phosphorylated tyrosines are often used as "handles" by which proteins can bind to one another, whereas phosphorylation of Ser/Thr often induces conformational changes, presumably because of the introduced negative charge. The effects of phosphorylating Ser/Thr can sometimes be simulated by mutating the Ser/Thr residue to glutamate. glycosylation A catch-all name for a set of very common and very heterogeneous chemical modifications. Sugar moieties can be attached to the sidechain hydroxyl groups of Ser/Thr or to the sidechain amide groups of Asn. Such attachments can serve many functions, ranging from increasing solubility to complex recognition. All glycosylation can be blocked with certain inhibitors, such as tunicamycin. deamidation (succinimide formation) In this modification, an asparagine or aspartate side chain attacks the following peptide bond, forming a symmetrical succinimide intermediate. Hydrolysis of the intermediate produces either aspartate or the β-amino acid, iso(Asp). For asparagine, either product results in the loss of the amide group, hence "deamidation". hydroxylation Proline residues may be hydroxylated at either of two carbon atoms, as can lysine (at one carbon atom). Hydroxyproline is a critical component of collagen, which becomes unstable upon its loss. The hydroxylation reaction is catalyzed by an enzyme that requires ascorbic acid (vitamin C), deficiencies in which lead to many connective-tissue diseases such as scurvy. methylation Several protein residues can be methylated, most notably the positive groups of lysine and arginine. Arginine residues interact with the nucleic acid phosphate backbone and commonly form hydrogen bonds with the base residues, particularly guanine, in protein–DNA complexes. Lysine residues can be singly, doubly and even triply methylated. Methylation does not alter the positive charge on the side chain, however. acetylation Acetylation of the lysine amino groups is chemically analogous to the acetylation of the N-terminus. Functionally, however, the acetylation of lysine residues is used to regulate the binding of proteins to nucleic acids. The cancellation of the positive charge on the lysine weakens the electrostatic attraction for the (negatively charged) nucleic acids. sulfation Tyrosines may become sulfated on their side-chain oxygen atom. Somewhat unusually, this modification occurs in the Golgi apparatus, not in the endoplasmic reticulum. Similar to phosphorylated tyrosines, sulfated tyrosines are used for specific recognition, e.g., in chemokine receptors on the cell surface. As with phosphorylation, sulfation adds a negative charge to a previously neutral site. 
prenylation and palmitoylation The hydrophobic isoprene (e.g., farnesyl, geranyl, and geranylgeranyl groups) and palmitoyl groups may be added to the sulfur atom of cysteine residues to anchor proteins to cellular membranes. Unlike the GPI and myristoyl anchors, these groups are not necessarily added at the termini. carboxylation A relatively rare modification that adds an extra carboxylate group (and, hence, a double negative charge) to a glutamate side chain, producing a Gla residue. This is used to strengthen the binding to "hard" metal ions such as calcium. ADP-ribosylation The large ADP-ribosyl group can be transferred to several types of side chains within proteins, with heterogeneous effects. This modification is a target for the powerful toxins of disparate bacteria, e.g., Vibrio cholerae, Corynebacterium diphtheriae and Bordetella pertussis. ubiquitination and SUMOylation Various full-length, folded proteins can be attached at their C-termini to the sidechain amino groups of lysines of other proteins. Ubiquitin is the most common of these, and usually signals that the ubiquitin-tagged protein should be degraded. Most of the polypeptide modifications listed above occur post-translationally, i.e., after the protein has been synthesized on the ribosome, typically occurring in the endoplasmic reticulum, a subcellular organelle of the eukaryotic cell. Many other chemical reactions (e.g., cyanylation) have been applied to proteins by chemists, although they are not found in biological systems. Cleavage and ligation In addition to those listed above, the most important modification of primary structure is peptide cleavage (by chemical hydrolysis or by proteases). Proteins are often synthesized in an inactive precursor form; typically, an N-terminal or C-terminal segment blocks the active site of the protein, inhibiting its function. The protein is activated by cleaving off the inhibitory peptide. Some proteins are even able to cleave themselves. Typically, the hydroxyl group of a serine (rarely, threonine) or the thiol group of a cysteine residue will attack the carbonyl carbon of the preceding peptide bond, forming a tetrahedrally bonded intermediate [classified as a hydroxyoxazolidine (Ser/Thr) or hydroxythiazolidine (Cys) intermediate]. This intermediate tends to revert to the amide form, expelling the attacking group, since the amide form is usually favored by free energy (presumably due to the strong resonance stabilization of the peptide group). However, additional molecular interactions may render the amide form less stable; the amino group is expelled instead, resulting in an ester (Ser/Thr) or thioester (Cys) bond in place of the peptide bond. This chemical reaction is called an N-O acyl shift. The ester/thioester bond can be resolved in several ways: Simple hydrolysis will split the polypeptide chain, where the displaced amino group becomes the new N-terminus. This is seen in the maturation of glycosylasparaginase. A β-elimination reaction also splits the chain, but results in a pyruvoyl group at the new N-terminus. This pyruvoyl group may be used as a covalently attached catalytic cofactor in some enzymes, especially decarboxylases such as S-adenosylmethionine decarboxylase (SAMDC) that exploit the electron-withdrawing power of the pyruvoyl group. Intramolecular transesterification, resulting in a branched polypeptide. In inteins, the new ester bond is broken by an intramolecular attack by the soon-to-be C-terminal asparagine. 
Intermolecular transesterification can transfer a whole segment from one polypeptide to another, as is seen in the Hedgehog protein autoprocessing. Sequence compression The compression of amino acid sequences is a comparatively challenging task. The compression ratios achieved by existing specialized amino acid sequence compressors are low compared with those of DNA sequence compressors, mainly because of the characteristics of the data. For example, modeling inversions is harder because of the reverse information loss (from amino acids to DNA sequence). The lossless data compressor that currently provides the highest compression is AC2. AC2 mixes various context models using neural networks and encodes the data using arithmetic encoding. History The proposal that proteins were linear chains of α-amino acids was made nearly simultaneously by two scientists at the same conference in 1902, the 74th meeting of the Society of German Scientists and Physicians, held in Karlsbad. Franz Hofmeister made the proposal in the morning, based on his observations of the biuret reaction in proteins. Hofmeister was followed a few hours later by Emil Fischer, who had amassed a wealth of chemical details supporting the peptide-bond model. For completeness, the proposal that proteins contained amide linkages was made as early as 1882 by the French chemist E. Grimaux. Despite these data and later evidence that proteolytically digested proteins yielded only oligopeptides, the idea that proteins were linear, unbranched polymers of amino acids was not accepted immediately. Some well-respected scientists such as William Astbury doubted that covalent bonds were strong enough to hold such long molecules together; they feared that thermal agitations would shake such long molecules asunder. Hermann Staudinger faced similar prejudices in the 1920s when he argued that rubber was composed of macromolecules. Thus, several alternative hypotheses arose. The colloidal protein hypothesis stated that proteins were colloidal assemblies of smaller molecules. This hypothesis was disproved in the 1920s by ultracentrifugation measurements by Theodor Svedberg that showed that proteins had a well-defined, reproducible molecular weight and by electrophoretic measurements by Arne Tiselius that indicated that proteins were single molecules. A second hypothesis, the cyclol hypothesis advanced by Dorothy Wrinch, proposed that the linear polypeptide underwent a chemical cyclol rearrangement C=O + HN → C(OH)-N that crosslinked its backbone amide groups, forming a two-dimensional fabric. Other primary structures of proteins were proposed by various researchers, such as the diketopiperazine model of Emil Abderhalden and the pyrrol/piperidine model of Troensegaard in 1942. Although never given much credence, these alternative models were finally disproved when Frederick Sanger successfully sequenced insulin and by the crystallographic determination of myoglobin and hemoglobin by Max Perutz and John Kendrew. Primary structure in other molecules Any linear-chain heteropolymer can be said to have a "primary structure" by analogy to the usage of the term for proteins, but this usage is rare compared to the extremely common usage in reference to proteins. In RNA, which also has extensive secondary structure, the linear chain of bases is generally just referred to as the "sequence" as it is in DNA (which usually forms a linear double helix with little secondary structure).
Other biological polymers such as polysaccharides can also be considered to have a primary structure, although the usage is not standard. Relation to secondary and tertiary structure The primary structure of a biological polymer to a large extent determines the three-dimensional shape (tertiary structure). Protein sequence can be used to predict local features, such as segments of secondary structure, or trans-membrane regions. However, the complexity of protein folding currently prohibits predicting the tertiary structure of a protein from its sequence alone. Knowing the structure of a similar homologous sequence (for example a member of the same protein family) allows highly accurate prediction of the tertiary structure by homology modeling. If the full-length protein sequence is available, it is possible to estimate its general biophysical properties, such as its isoelectric point. Sequence families are often determined by sequence clustering, and structural genomics projects aim to produce a set of representative structures to cover the sequence space of possible non-redundant sequences. See also Protein sequencing Nucleic acid primary structure Translation Pseudo amino acid composition Notes and references Protein structure Stereochemistry
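The isoelectric-point estimate mentioned under "Relation to secondary and tertiary structure" above can be illustrated with a short, self-contained sketch. This is a minimal illustration rather than a production tool: it assumes one of several possible approximate pKa tables (the values below are illustrative only), treats each ionizable group with the Henderson–Hasselbalch relation, ignores post-translational modifications and local structural effects, and the example sequence is invented.

from collections import Counter

# Approximate pKa values (assumed for illustration; published tables differ)
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0}            # groups carrying +1 when protonated
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}   # groups carrying -1 when deprotonated
PKA_NTERM, PKA_CTERM = 9.0, 2.0                       # free termini

def net_charge(seq, ph):
    counts = Counter(seq.upper())
    # Henderson-Hasselbalch fractional charge of each ionizable group at this pH
    positive = sum(n / (1 + 10 ** (ph - PKA_POS[aa])) for aa, n in counts.items() if aa in PKA_POS)
    positive += 1 / (1 + 10 ** (ph - PKA_NTERM))
    negative = sum(n / (1 + 10 ** (PKA_NEG[aa] - ph)) for aa, n in counts.items() if aa in PKA_NEG)
    negative += 1 / (1 + 10 ** (PKA_CTERM - ph))
    return positive - negative

def isoelectric_point(seq):
    lo, hi = 0.0, 14.0
    for _ in range(60):               # bisection; net charge falls monotonically with pH
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)

print(isoelectric_point("MKWVTFISLLLLFSSAYSRGV"))     # invented example sequence

The bisection works because the net charge computed this way decreases monotonically with pH, so the pH at which it crosses zero is the estimated isoelectric point.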
Class (biology)
In biological classification, class is a taxonomic rank, as well as a taxonomic unit, a taxon, in that rank. It is a group of related taxonomic orders. Other well-known ranks in descending order of size are life, domain, kingdom, phylum, order, family, genus, and species, with class ranking between phylum and order. History The class as a distinct rank of biological classification having its own distinctive name – and not just called a top-level genus (genus summum) – was first introduced by French botanist Joseph Pitton de Tournefort in the classification of plants that appeared in his Eléments de botanique of 1694. Insofar as a general definition of a class is available, it has historically been conceived as embracing taxa that combine a distinct grade of organization—i.e. a 'level of complexity', measured in terms of how differentiated their organ systems are into distinct regions or sub-organs—with a distinct type of construction, which is to say a particular layout of organ systems. This said, the composition of each class is ultimately determined by the subjective judgment of taxonomists. In the first edition of his Systema Naturae (1735), Carl Linnaeus divided all three of his kingdoms of nature (minerals, plants, and animals) into classes. Only in the animal kingdom are Linnaeus's classes similar to the classes used today; his classes and orders of plants were never intended to represent natural groups, but rather to provide a convenient "artificial key" according to his Systema Sexuale, largely based on the arrangement of flowers. In botany, classes are now rarely discussed. Since the first publication of the APG system in 1998, which proposed a taxonomy of the flowering plants up to the level of orders, many sources have preferred to treat ranks higher than orders as informal clades. Where formal ranks have been assigned, the ranks have been reduced to a very much lower level, e.g. class Equisetopsida for the land plants, with the major divisions within the class assigned to subclasses and superorders. The class was considered the highest level of the taxonomic hierarchy until George Cuvier's embranchements, first called Phyla by Ernst Haeckel, were introduced in the early nineteenth century. See also Cladistics List of animal classes Phylogenetics Systematics Taxonomy Explanatory notes References Bacterial nomenclature Zoological nomenclature Class Plant taxonomy
Phytosociology
Phytosociology, also known as phytocoenology or simply plant sociology, is the study of groups of species of plant that are usually found together. Phytosociology aims to empirically describe the vegetative environment of a given territory. A specific community of plants is considered a social unit, the product of definite conditions, present and past, and can exist only when such conditions are met. In phytosociology, such a unit is known as a phytocoenosis (or phytocoenose). A phytocoenosis is more commonly known as a plant community, and consists of the sum of all plants in a given area. It is a subset of a biocoenosis, which consists of all organisms in a given area. More strictly speaking, a phytocoenosis is a set of plants in an area that are interacting with each other through competition or other ecological processes. Coenoses are not equivalent to ecosystems, which consist of organisms and the physical environment that they interact with. A phytocoenosis has a distribution which can be mapped. Phytosociology has a system for describing and classifying these phytocoenoses in a hierarchy, known as syntaxonomy, and this system has a nomenclature. The science is most advanced in Europe, Africa and Asia. In the United States this concept was largely rejected in favour of studying environments in more individualistic terms regarding species, where specific associations of plants occur randomly because of individual preferences and responses to gradients, and there are no sharp boundaries between phytocoenoses. The terminology 'plant community' is usually used in the US for a habitat consisting of a number of specific plant species. Phytosociology has been a successful approach within contemporary vegetation science because of its highly descriptive and predictive powers, and its usefulness in nature management issues. History The term 'phytosociology' was coined in 1896 by Józef Paczoski. The term 'phytocoenology' was coined by Helmut Gams in 1918. While the terminology phytocoenosis grew to be most popular in France, Switzerland, Germany and the Soviet Union, the terminology phytosociology remained in use in some European countries. Phytosociology is a further refinement of the phytogeography introduced by Alexander von Humboldt at the very beginning of the 19th century. Phytocoenology was initially considered to be a subdiscipline of 'geobotany'. In Scandinavia the concept of plant associations was popular at an early date, in the work of Hampus von Post (1842, 1862), Ragnar Hult (1881, 1898), Thore Christian Elias Fries (1913) and Gustaf Einar Du Rietz (1921); other early contributors include Rübel (1922, 1930), Pavillard (1927), Schröter & Kirchner (1886–1902), and Flahault & Carl Joseph Schröter (1910). In the Soviet Union an important botanist to apply and popularise the science was Vladimir Sukachev. The science of phytosociology has hardly penetrated into the English-speaking world, where the continuum concept of community prevailed, as opposed to the concept of a 'society' of plants. Nonetheless it had some early adherents in the United States, notably Frederic Clements, who used the concept to characterise the vegetation of California. Largely following European ideas, he devised his own system to classify habitat types using vegetation. Clements's most important contribution was his study of succession. His work has seen much local usage. In Britain Arthur Tansley was the first to apply phytosociological concepts to the vegetation of the kingdom in 1911 after learning of its application elsewhere in Europe.
Tansley eventually broadened the concept and thus came up with the idea of an ecosystem, combining all biotic and abiotic ecological aspects of an environment. The work of Tansley and Clements was quite divergent from the rest. Usage today Modern phytosociology largely follows the work of Józef Paczoski in Poland, Josias Braun-Blanquet in France and Gustaf Einar Du Rietz in Sweden. In Europe a complete classification system has been developed to describe the vegetation types found across the continent. These are used as habitat-type classifications in the NATURA 2000 network and in Habitats Directive legislation. Each phytocoenose has been given a number, and protected areas can thus be classified according to the habitats they contain. In Europe this information is generally mapped per 2 km² blocks for conservation purposes, such as monitoring particularly endangered habitat types, predicting success of reintroductions, or estimating more specific carrying capacities. Because certain habitats are deemed more imperilled (i.e. having a higher conservation value) than others, a numerical conservation value of a specific site can be approximated. Overview The aim of phytosociology is to achieve a sufficient empirical model of vegetation using combinations of plant species (or subspecies, i.e. taxa) that characterize discrete vegetation units. Vegetation units as understood by phytosociologists may express largely abstract vegetation concepts (e.g. the set of all hard-leaved evergreen forests of the western Mediterranean area) or actual readily recognizable vegetation types (e.g. cork-oak oceanic forests on Pleistocene dunes with dense canopy in the Iberian Peninsula). Such conceptual units are called syntaxa (singular "syntaxon") and can be set in a hierarchy system called "synsystem" or syntaxonomic system. Creating new syntaxa or adjusting the synsystem is called syntaxonomy. Before the rules were agreed upon, a number of slightly different systems of classification existed. These were known as "schools" or "traditions", and there were two main systems: the older Scandinavian school and the Zürich-Montpellier school, also sometimes called the Braun-Blanquet approach. Relevé The first step in phytosociology is gathering data. This is done with what is known as a relevé, a plot in which all the species are identified and their abundance, both vertically and in area, is calculated. Other data are also recorded for a relevé: the geographic location, environmental factors and vegetation structure. Boolean operators and (formerly) tables are used to sort the data. As the calculations needed are difficult and tedious to do manually, modern ecologists feed the relevé data into software programs that use algorithms to crunch the numbers. Association model The basic unit of syntaxonomy, the organisation and nomenclature of phytosociological relationships, is the "association", defined by its characteristic combination of plant taxa. Sometimes other habitat features such as the management by humans (mowing regime, for example), physiognomy and/or the stage in ecological succession may also be considered. Such an association is usually viewed as a discrete phytocoenose. Similar and neighbouring associations can be grouped in larger ecological conceptual units, with a group of plant associations called an "alliance". Similar alliances may be grouped in "orders" and orders in vegetation "classes". The setting of syntaxa in such a hierarchy makes up the syntaxonomical system.
The most important workers in defining the modern system were initially Charles Flahault and later his student Josias Braun-Blanquet, whose work is generally considered the final version of syntaxonomical nomenclature. Braun-Blanquet further refined and standardised the work of Flahault and many others when he worked on the phytocoenosis of the southern Cévennes. He established the modern system of classifying vegetation. Braun-Blanquet's method uses the scientific name of its most characteristic species as namesake, changing the ending of the generic epithet to "-etum" and treating the specific epithet as an adjective. Thus, a particular type of mesotrophic grassland widespread in western Europe and dominated only by the grass Arrhenatherum elatius becomes "Arrhenatheretum elatioris Br.-Bl.". To distinguish between similar plant communities dominated by the same species, other important species are included in the name, but the name is otherwise formed according to the same rules. Another type of mesotrophic pasture dominated by black knapweed (Centaurea nigra) and the grass Cynosurus cristatus, which is also widespread in western Europe, is consequently named Centaureo-Cynosuretum cristati Br.-Bl. & Tx.. If the second species is characteristic but notably less dominant than the first one, its genus name may be used as the adjective, for example in Pterocarpetum rhizophorosus, a type of tropical scrubland near water which has abundant Pterocarpus officinalis and significant (though not overwhelmingly prominent) red mangrove (Rhizophora mangle). Today an International Code of Phytosociological Nomenclature exists, in which the rules for naming syntaxa are given. Its use has increased among botanists. In Anglo-American ecology, the association concept is mostly linked to the work of the mid-twentieth century botanist Henry Gleason, who set it up as an alternative to Frederic Clements's views on the superorganismic framework. The philosophical parameters of the association concept have also come under study by environmental philosophers as to how it values and defends the natural environment. Vegetation complexes Modern phytosociologists try to include higher levels of complexity in the perception of vegetation, namely by describing whole successional units (vegetation series) or, in general, vegetation complexes. Other developments include the use of multivariate statistics for the definition of syntaxa and their interpretation. Data collections Phytosociological data contain information collected in relevés (or plots) listing each species' cover-abundance values and the measured environmental variables. This data is conveniently databanked in a program like TURBOVEG allowing for editing, storage and export to other applications. Data is usually classified and sorted using TWINSPAN in host programs like JUICE to create realistic species-relevé associations. Further patterns are investigated using clustering and resemblance methods, and ordination techniques available in software packages like CANOCO or the R-package vegan.
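As a concrete illustration of the classification step just described, the following minimal sketch groups relevés into candidate associations from a small cover-abundance table. It is only an assumption-laden stand-in for the real workflow: it uses ordinary hierarchical clustering on Bray-Curtis dissimilarities rather than TWINSPAN, the relevé data are invented, and the choice of three clusters is arbitrary.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = relevés (plots), columns = species, values = percent cover (invented data)
species = ["Arrhenatherum elatius", "Centaurea nigra", "Cynosurus cristatus", "Trifolium repens"]
cover = np.array([
    [70,  5,  0, 10],
    [65, 10,  5,  5],
    [ 5, 40, 35, 10],
    [ 0, 45, 30, 15],
    [10,  5,  5, 60],
])

dissimilarity = pdist(cover, metric="braycurtis")   # pairwise dissimilarity between relevés
tree = linkage(dissimilarity, method="average")     # UPGMA agglomerative clustering
groups = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three groups
for releve, group in enumerate(groups, start=1):
    print(f"releve {releve}: candidate association {group}")

In practice a phytosociologist would inspect the resulting groups against diagnostic species and environmental data before treating any of them as an association.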
See also Josias Braun-Blanquet Józef Paczoski António Rodrigo Pinto da Silva Victor Westhoff Biogeography JUICE - program for phytosociologists Plant community Phytogeography References External links International Code of Phytosociological Nomenclature, 3rd edition Landscape disagreement with phytosociological theories Phytosociology Methods of Ecosystem Analysis, yale.edu Technical University of Braunschweig, Working Group for Vegetation Ecology and Experimental Plant Sociology, accessed 20 April 2010 Branches of botany Biogeography Habitats Habitat management equipment and methods Nomenclature codes
Seeing Like a State
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is a book by James C. Scott critical of a system of beliefs he calls high modernism, that centers on governments' overconfidence in the ability to design and operate society in accordance with purported scientific laws. The book makes an argument that states seek to force "legibility" on their subjects by homogenizing them and creating standards that simplify pre-existing, natural, diverse social arrangements. Examples include the introduction of family names, censuses, uniform languages, and standard units of measurement. While such innovations aim to facilitate state control and economies of scale, Scott argues that the eradication of local differences and silencing of local expertise can have adverse effects. The book was first published in March 1998, with a paperback version appearing in February 1999. Summary Scott shows how central governments attempt to force legibility on their subjects, and fail to see complex, valuable forms of local social order and knowledge. A main theme of this book, illustrated by his historic examples, is that states operate systems of power toward 'legibility' in order to see their subjects correctly in a top-down, modernist, model that is flawed, problematic, and often ends poorly for subjects. The goal of local legibility by the state is transparency from the top down, from the top of the tower or the center/seat of the government, so the state can effectively operate upon their subjects. The book uses examples like the introduction of permanent last names in Great Britain, cadastral surveys in France, and standard units of measure across Europe to argue that a reconfiguration of social order is necessary for state scrutiny, and requires the simplification of pre-existing, natural arrangements. While, in earlier times, a field could be measured in the amount of cows it could sustain or the types of plants it could grow, post centralization, its size is measured in hectares. This allows governors who have little to no local knowledge to immediately understand the outline of the area but simultaneously blinds the state to the complex interactions which happen within nature and society. In agriculture and forestry, for example, it led to monoculture, or the sole focus on cultivating a single crop or tree at the cost of all others. While monoculture is easy to measure, manage, and understand, it is also less resilient to ecological crises than polyculture is. In the case of last names, Scott cites a Welsh man who appeared in court and identified himself with a long string of patronyms: "John, ap Thomas ap William" etc. In his local village, this naming system carried a lot of information, because people could identify him as the son of Thomas and grandson of William, and thus distinguish him from the other Johns, the other children of Thomas, and the other grandchildren of William. Yet it was of less use to the central government, which did not know Thomas or William. The court demanded that John take a permanent last name (in this case, the name of his village). This helped the central government keep track of its subjects, at the cost of a more nuanced yet fuzzy and less legible understanding of local conditions. Schemes that successfully improve human lives, Scott argues, must take into account local conditions, and that the high-modernist ideologies of the 20th century have prevented this. 
He highlights collective farms in the Soviet Union, the building of Brasilia, and forced villagization in 1970s Tanzania as examples of failed schemes which were led by top-down bureaucratic efforts and where officials ignored or silenced local expertise. Scott takes great effort to highlight that he is not necessarily anti-state. At times, the central role played by the state is necessary for programs such as disaster response or vaccinations. The flattening of knowledge which goes hand-in-hand with state centralization can have disastrous consequences when officials see centralised knowledge as the only legitimate information that they should consider, ignoring more specialised but less clearly defined indigenous and local expertise. Scott explores the concept of "metis," which refers to practical knowledge gained through experience and shaped by individual contexts. He compares this type of knowledge to "epistemic" knowledge, which is more formalized and associated with scientific methods and institutional education. Unlike epistemic knowledge, which is standardized and centralized, metis is adaptable and diverse. It emerges from the accumulated experiences of individuals within specific contexts, resulting in a rich tapestry of localized knowledge systems. This flexibility allows metis to evolve and respond to changing circumstances, making it highly applicable in various practical domains. However, Scott also discusses the challenges that metis faces in contemporary society, particularly in the context of industrialization and state control. He argues that attempts to standardize knowledge and impose universal ideologies often undermine the diverse nature of metis, marginalizing localized knowledge systems in favor of more centralized and standardized forms of knowledge production. Scott also criticizes authoritarian efforts to impose rigid knowledge frameworks, as they overlook the nuanced and context-dependent nature of metis. Instead of recognizing the value of diverse knowledge forms, these authoritarian approaches seek to homogenize and control knowledge production for political or economic purposes. Scott advocates for the preservation and acknowledgment of metis alongside epistemic knowledge. He highlights the importance of embracing the dynamic and diverse nature of practical knowledge derived from experience, emphasizing its relevance in addressing complex challenges and promoting resilience in the face of change. Scott examines the limitations of high-modernist urban planning and social engineering, contending that these approaches often result in unsustainable outcomes and diminish human autonomy and abilities. Scott contrasts the rigid, centralized designs of high modernism with the adaptable, diverse nature of institutions shaped by practical wisdom, or "metis." Scott criticizes the monocultural, one-dimensional nature of high-modernist projects, suggesting that they fail to account for the complexity and dynamism of real-life systems. Examples from agriculture, urban planning, and economics are used to illustrate how rigid, top-down approaches can lead to environmental degradation, social dislocation, and a loss of human agency. Furthermore, he emphasizes the importance of diversity, flexibility, and adaptability in human institutions, arguing that these qualities enhance resilience and effectiveness. 
He highlights the role of informal, bottom-up practices in complementing and sometimes subverting formal systems, demonstrating how metis-driven institutions can thrive in complex, ever-changing environments. Scott advocates for institutions that are shaped by the knowledge and experience of their participants, rather than imposed from above. He suggests that such institutions are better equipped to navigate uncertainty, respond to change, and foster the development of individuals with a wide range of skills and capabilities. Reception Book reviews Stanford University political scientist David D. Laitin described it as "a magisterial book." But he said there were flaws in the methodology of the book, saying the book "is a product of undisciplined history. For one, Scott’s evidence is selective and eclectic, with only minimal attempts to weigh disconfirming evidence... It is all too easy to select confirming evidence if the author can choose from the entire historical record and use material from all countries of the world." John N. Gray, author of False Dawn: The Delusions of Global Capitalism, reviewed the book favorably for the New York Times, concluding: "Today's faith in the free market echoes the faith of earlier generations in high modernist schemes that failed at great human cost. Seeing Like a State does not tell us what it is in late modern societies that predisposes them, against all the evidence of history, to put their trust in such utopias. Sadly, no one knows enough to explain that." Economist James Bradford DeLong wrote a detailed online review of the book. DeLong acknowledged Scott's adept examination of the pitfalls of centrally planned social-engineering projects, which aligns with the Austrian tradition's critique of central planning. Scott's book, according to DeLong, effectively demonstrates the limitations and failures of attempts to impose high modernist principles from the top down. However, DeLong also suggested that Scott may fail to fully acknowledge his intellectual roots, particularly within the Austrian tradition. DeLong argued that while Scott effectively critiques high modernism, he may avoid explicitly aligning his work with the Austrian perspective due to subconscious fears of being associated with certain political ideologies. DeLong's interpretation of the book was critiqued by Henry Farrell on the Crooked Timber blog, and there was a follow-up exchange including further discussion of the book. Economist Deepak Lal reviewed the book for the Summer 2000 issue of The Independent Review, concluding: "Although I am in sympathy with Scott’s diagnosis of the development disasters he recounts, I conclude that he has not burrowed deep enough to discover a systematic cause of these failures. (In my view, that cause lies in the continuing attraction of various forms of 'enterprises' in what at heart remains Western Christendom.) Nor is he right in so blithely dismissing the relevance of classical liberalism in finding remedies for the ills he eloquently describes." Political scientist Ulf Zimmermann reviewed the book for H-Net Online in December 1998, concluding: "It is important to keep in mind, as Scott likewise notes, that many of these projects replaced even worse social orders and at least occasionally introduced somewhat more egalitarian principles, never mind improving public health and such. 
And, in the end, many of the worst were sufficiently resisted in their absurdity, as he had shown so well in his Weapons of the Weak and as best demonstrated by the utter collapse of the Soviet system. "Metis" alone is not sufficient; we need to find a way to link it felicitously with—to stick with Scott's Aristotelian vocabulary—phronesis and praxis, or, in more ordinary terms, to produce theories more profoundly grounded in actual practice so that the state may see better in implementing policies." Michael Adas, professor of history at Rutgers University, reviewed the book for the Summer 2000 issue of the Journal of Social History. Russell Hardin, a professor of politics at New York University, reviewed the book for The Good Society in 2001, disagreeing with Scott's diagnosis somewhat. Hardin, who has written extensively on collective action, concluded: "The failure of collectivization was therefore a failure of incentives, not a failure to rely on local knowledge." Discussions The September 2010 issue of Cato Unbound was devoted to discussing the themes of the book. Scott wrote the lead essay. Other participants were Donald Boudreaux, Timothy B. Lee, and J. Bradford DeLong. A number of people, including Henry Farrell and Tyler Cowen, weighed in on the discussion on their own blogs. See also Panopticism Further reading References 1998 non-fiction books Books about social history Modernism
Startup ecosystem
A startup ecosystem is formed by people in startups in their various stages, and various types of organizations in a location (physical or virtual) that are interacting as a system to create and scale new startup companies. These organizations can be further divided into categories such as universities, funding organizations, support organizations (like incubators, accelerators, co-working spaces etc.), research organizations, service provider organizations (like legal, financial services etc.) and large corporations. Local Governments and Government organizations such as Commerce / Industry / Economic Development departments also play an important role in a startup ecosystem. Different organizations typically focus on specific parts of the ecosystem function and startups at their specific development stage(s). Emerging startup ecosystems are often evaluated using tangible metrics like new products, patents, and venture capital funding. However, Hannigan et al. (2022) argue that understanding these ecosystems requires considering cultural factors alongside material ones. They emphasize that cultural elements, such as community engagement and shared values, play a crucial role in the growth and success of emerging startup ecosystems. By incorporating both cultural and material perspectives, policymakers can better design incentives and regulations to foster economic growth and innovation in these ecosystems. This approach suggests that building cultural infrastructure is as important as financial and technical support in developing thriving entrepreneurial environments. Silicon Valley, NYC, Singapore and Tel Aviv are considered examples of global startup ecosystems. Composition of the startup ecosystem Ideas, inventions and research i.e., Intellectual property rights (IPR) Entrepreneurship Education Startups at various stages Entrepreneurs Start up team members Angel investors Startup mentors Startup advisors Other business-oriented people People from other organizations with start-up activities Startup events Venture Capitalists List of organizations and/or organized activities with startup activities Universities Students Advisory and mentoring organizations Startup incubators Startup accelerators Coworking spaces Service providers (Consulting, Accounting, Legal, etc.) Event organizers Start-up competitions Startup Business Model Evaluators Business Angel Networks Venture capital companies Equity Crowdfunding portals Corporates (telcos, banking, health, food, etc.) Other funding providers (loans, grants etc.) Start-up blogs and social networks Other facilitators Investors from these roles are linked together through shared events, activities, locations, and interactions. Startup ecosystems generally encompass the network of interactions between people, organizations, and their environment. Any particular start-up ecosystem is defined by its collection of specific cities or online communities. In addition, resources like skills, time, and money are also essential components of a start-up ecosystem. The resources that flow through ecosystems are obtained primarily from the meetings between people and organizations that are an active part of those startup ecosystems. These interactions help to create new potential startups and/or to strengthen the already existing ones. There are a few common mistakes that entrepreneurs make that end up costing them their business, like inability to secure adequate funding, sudden market downturn and a poor scaling plan. 
External and internal factors Startup ecosystems are controlled by both external and internal factors. External factors, such as financial climate, big market disruptions, and significant transitions, control the overall structure of an ecosystem and the way things work within it. Start-up ecosystems are dynamic entities that progress from formation stages to periodic disturbances (like financial bubbles) and then to recovery processes. Several researchers have created lists of essential internal attributes for startup ecosystems. Spigel suggests that ecosystems require cultural attributes (a culture of entrepreneurship and histories of successful entrepreneurship), social attributes that are accessed through social ties (worker talent, investment capital, social networks, and entrepreneurial mentors) and material attributes grounded in specific places (government policies, universities, support services, physical infrastructure, and open local markets). Stam distinguishes between framework conditions of ecosystems (formal institutions, culture, physical infrastructure, and market demand) and systemic conditions of networks, leadership, finance, talent, knowledge, and support services. Startup ecosystems in similar environments but located in different parts of the world can end up doing things differently simply because they have a different entrepreneurial culture and resource pool. The introduction of non-native peoples' knowledge and skills can also cause substantial shifts in the ecosystem's functions. Internal factors act as feedback loops inside any particular startup ecosystem. They not only control ecosystem processes, but are also controlled by them. While some resource inputs are generally controlled by external processes like financial climate and market disruptions, the availability of resources within the ecosystem is controlled by every organization's ability to contribute towards the ecosystem. Although people exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like financial climate. Role of employee diversity Employee diversity also affects startup ecosystem functions, as do the processes of disturbance and succession. Startup ecosystems provide a variety of goods and services upon which other people and companies depend. Thus, the principles of start-up ecosystem management suggest that rather than managing individual people or organizations, resources should be managed at the level of the startup ecosystem itself. Classifying start-up ecosystems into structurally similar units is an important step towards effective ecosystem management. Startup ecosystem studies There are several independent studies that evaluate start-up ecosystems in order to better understand and compare them and to offer valuable insights into the strengths and weaknesses of different start-up ecosystems. Startup ecosystems can be studied through a variety of approaches — theoretical studies, studies monitoring specific start-up ecosystems over long periods of time and those that look at differences between start-up ecosystems to elucidate how they work. Since 2012, San Francisco-based Startup Genome has been the first organization to release comprehensive research reports that benchmark startup ecosystems globally. Under the leadership of JF Gauthier and Marc Penzel, the company has been the first organization to capture the requirements of a startup ecosystem in a data-driven framework.
Startup Genome's work has influenced startup policies globally, is supported by thought leaders such as Steve Blank, and has appeared in leading business media such as The Economist, Bloomberg and Harvard Business Review. Global startup ecosystems and ranking Multiple cities and hubs have been described as global startup ecosystems. Startup Genome publishes a yearly ranking of global startup ecosystems; its reports rank the top 40 global startup hubs. In addition, StartupBlink releases an annual index ranking global startup ecosystems, which evaluates over 1,000 cities worldwide. Further reading Startup Genome report (2024) References Notes Entrepreneurship Venture capital Ecosystems Types of business entity
Anthropic principle
The anthropic principle, also known as the observation selection effect, is the hypothesis that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life. There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. Definition and basis The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility. The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved. The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. The anthropic principle is often criticized for lacking falsifiability and therefore its critics may point out that the anthropic principle is a non-scientific concept, even though the weak anthropic principle, "conditions that are observed in the universe must allow the observer to exist", is "easy" to support in mathematics and philosophy (i.e., it is a tautology or truism). However, building a substantive argument based on a tautological foundation is problematic. Stronger variants of the anthropic principle are not tautologies and thus make claims considered controversial by some and that are contingent upon empirical verification. Anthropic observations In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe was one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium) especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. 
If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory. Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life. The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life. Origin The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang). Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. 
Roger Penrose explained the weak form as follows: One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?" Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section. Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem." Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann, it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be. Variants Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space. Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est."The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism. 
In their 1986 book, The anthropic cosmological principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows: Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so."Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP. Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history."This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler: "There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'." This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves. "Observers are necessary to bring the Universe into being." Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner. "An ensemble of other different universes is necessary for the existence of our Universe." By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation. The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes: Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice. According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. 
It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary. Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe": Character of anthropic reasoning Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions. The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle." The modern form of a design argument is put forth by intelligent design. 
Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design. Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate: The absurd universe: Our universe just happens to be the way it is. The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded. The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist. Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence. The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind. The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP). The fake universe: Humans live inside a virtual reality simulation. Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005). Clearly each of these hypotheses resolve some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994). The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links. Observational evidence No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist. 
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following: physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants; various theories for generating multiple universes will prove robust; evidence that the universe is fine tuned will continue to accumulate; no life with a non-carbon chemistry will be discovered; and mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe.

Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life.

Probabilistic predictions of parameter values can be made given: a particular multiverse with a "measure", i.e. a well-defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X lies in the range X0 < X < X0 + dX), and an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe). The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned": if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.

One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers.
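The observer-weighted measure described above can be illustrated with a small numerical toy. The sketch below assumes a flat prior and an invented Gaussian observer count, so every number and function in it is hypothetical; it only shows the mechanics of weighting a prior P(X) by an observer count N(X), not any real cosmological model.

```python
import numpy as np

# Toy illustration of the observer-weighted measure described above:
# posterior(X) is proportional to N(X) * P(X).  The prior P and the observer
# count N used here are invented stand-ins, not physical models.

X = np.linspace(0.0, 10.0, 1001)             # dimensionless stand-in for a "fundamental parameter"
prior = np.ones_like(X)                      # flat prior density P(X) over the ensemble of universes
observers = np.exp(-((X - 2.0) / 1.5) ** 2)  # hypothetical N(X): observers are common only near X ~ 2

posterior = prior * observers
posterior /= np.trapz(posterior, X)          # normalise to a probability density

# A "typical observer" measures X somewhere in the bulk of the posterior,
# not at any specially tuned value of the prior.
cdf = np.cumsum(posterior)
cdf /= cdf[-1]
median = X[np.searchsorted(cdf, 0.5)]
print(f"observer-weighted median of X is approximately {median:.2f}")
```

With these toy choices the observer-weighted median sits where N(X) peaks rather than at any "natural" value of the prior, which is the qualitative point of the "no over-tuning" remark above.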
Applications of the principle

The nucleosynthesis of carbon-12

Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction. However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance.

Cosmic inflation

Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that the initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin must include the assumption that at the initial singularity the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value, due to random quantum fluctuations, to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses which other Big Bang theories require.

String theory

String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse", "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.

Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others, most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin, argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations and some prior distribution on the set of possible explanations of the universe.

Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.

Dimensions of spacetime

There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character, and possibly the first of its kind, albeit before the complete concept came into vogue.
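Several of the results cited below (Ehrenfest's instability of planetary orbits for N > 3, and Tegmark's conclusion that N = 3) rest on a standard textbook calculation about orbits under an inverse-power force. The following sketch is supplied here only as an illustration of that calculation and is not drawn from the sources cited in this article.

```latex
% Illustrative sketch (not taken from the cited sources).
% In N spatial dimensions a Gauss-law gravitational force falls off as r^{-(N-1)},
% so a test body with angular momentum L moves in the effective potential
\[
  V_{\mathrm{eff}}(r) \;=\; -\,\frac{k}{(N-2)\,r^{\,N-2}} \;+\; \frac{L^{2}}{2 m r^{2}},
  \qquad N \ge 3 .
\]
% A circular orbit of radius r_0 satisfies V_eff'(r_0) = 0, i.e. L^2/m = k r_0^{4-N}, and then
\[
  V_{\mathrm{eff}}''(r_0) \;=\; \frac{k\,(4-N)}{r_0^{\,N}} ,
\]
% which is positive (a stable orbit) only for N < 4.  With one time dimension,
% N = 3 is therefore the largest number of spatial dimensions that admits
% stable, bound planetary orbits; for N = 2 the potential is logarithmic and
% confining, but the inverse-square law and much familiar physics are lost.
```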
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse-square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).

In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.

Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.

On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10²¹ solar masses, due to the small positivity of the cosmological constant observed.

In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.

Metaphysical interpretations

Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the older notion of creatio continua. From a strictly secular, humanist perspective, it also allows human beings to be put back at the center, an anthropogenic shift in cosmology. Karl W.
Giberson has laconically stated that William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point. The anthropic cosmological principle A thorough extant study of the anthropic principle is the book The anthropic cosmological principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way. The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks. Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out. Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas. In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP): Reception and controversies Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects. A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts." 
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another. Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result. Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa. Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc. Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe. The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. 
In its strong version, it is a gratuitous speculation." A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours.

See also
(discussing the anthropic principle)
(an immediate precursor of the idea)
(work of Alejandro Jenkins)

References
Stenger, Victor J. (1999). "Anthropic design". The Skeptical Inquirer 23 (August 31, 1999): 40–43.
Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Háyek, L. Valdés and D. Westerstahl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 12th International Congress of the LMPS. London: King's College Publications, pp. 441–473.
A simple anthropic argument for why there are 3 spatial and 1 temporal dimensions.
Shows that some of the common criticisms of the anthropic principle based on its relationship with numerology or the theological design argument are wrong.

External links
Nick Bostrom: website devoted to the anthropic principle.
Friederich, Simon. Fine-tuning, a review article of the discussion about fine-tuning, highlighting the role of the anthropic principles.
Gijsbers, Victor (2000). Theistic anthropic principle refuted – Positive Atheism magazine.
Chown, Marcus. Anything Goes, New Scientist, 6 June 1998. On Max Tegmark's work.
Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning, Kavli-CERCA conference video archive.
Sober, Elliott R. (2009). "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads." Philosophical Studies 143: 63–90.
"Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection.
Leonard Susskind and Lee Smolin debate the anthropic principle.
Debate among scientists on arxiv.org.
Evolutionary probability and fine tuning.
Benevolent design and the anthropic principle at MathPages.
Critical review of "The Privileged Planet".
The anthropic principle – a review.
Berger, Daniel (2002). "An impertinent résumé of the Anthropic Cosmological Principle." A critique of Barrow & Tipler.
Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power.
Paul Davies: Cosmic Jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007.
Virulence
Virulence is a pathogen's or microorganism's ability to cause damage to a host. In most, especially in animal systems, virulence refers to the degree of damage caused by a microbe to its host. The pathogenicity of an organism—its ability to cause disease—is determined by its virulence factors. In the specific context of gene for gene systems, often in plants, virulence refers to a pathogen's ability to infect a resistant host. The noun virulence (Latin noun ) derives from the adjective virulent, meaning disease severity. The word virulent derives from the Latin word virulentus, meaning "a poisoned wound" or "full of poison". The term virulence does not only apply to viruses. From an ecological standpoint, virulence is the loss of fitness induced by a parasite upon its host. Virulence can be understood in terms of proximate causes—those specific traits of the pathogen that help make the host ill—and ultimate causes—the evolutionary pressures that lead to virulent traits occurring in a pathogen strain. Virulent bacteria The ability of bacteria to cause disease is described in terms of the number of infecting bacteria, the route of entry into the body, the effects of host defense mechanisms, and intrinsic characteristics of the bacteria called virulence factors. Many virulence factors are so-called effector proteins that are injected into the host cells by specialized secretion apparati, such as the type three secretion system. Host-mediated pathogenesis is often important because the host can respond aggressively to infection with the result that host defense mechanisms do damage to host tissues while the infection is being countered (e.g., cytokine storm). The virulence factors of bacteria are typically proteins or other molecules that are synthesized by enzymes. These proteins are coded for by genes in chromosomal DNA, bacteriophage DNA or plasmids. Certain bacteria employ mobile genetic elements and horizontal gene transfer. Therefore, strategies to combat certain bacterial infections by targeting these specific virulence factors and mobile genetic elements have been proposed. Bacteria use quorum sensing to synchronise release of the molecules. These are all proximate causes of morbidity in the host. Methods by which bacteria cause disease Adhesion Many bacteria must first bind to host cell surfaces. Many bacterial and host molecules that are involved in the adhesion of bacteria to host cells have been identified. Often, the host cell surface receptors for bacteria are essential proteins for other functions. Due to the presence of mucus lining and of anti-microbial substances around some host cells, it is difficult for certain pathogens to establish direct contact-adhesion. Colonization Some virulent bacteria produce special proteins that allow them to colonize parts of the host body. Helicobacter pylori is able to survive in the acidic environment of the human stomach by producing the enzyme urease. Colonization of the stomach lining by this bacterium can lead to gastric ulcers and cancer. The virulence of various strains of Helicobacter pylori tends to correlate with the level of production of urease. Invasion Some virulent bacteria produce proteins that either disrupt host cell membranes or stimulate their own endocytosis or macropinocytosis into host cells. These virulence factors allow the bacteria to enter host cells and facilitate entry into the body across epithelial tissue layers at the body surface. 
Immune response inhibitors Many bacteria produce virulence factors that inhibit the host's immune system defenses. For example, a common bacterial strategy is to produce proteins that bind host antibodies. The polysaccharide capsule of Streptococcus pneumoniae inhibits phagocytosis of the bacterium by host immune cells. Toxins Many virulence factors are proteins made by bacteria that poison host cells and cause tissue damage. For example, there are many food poisoning toxins produced by bacteria that can contaminate human foods. Some of these can remain in "spoiled" food even after cooking and cause illness when the contaminated food is consumed. Other bacterial toxins are chemically altered and inactivated by the heat of cooking. Virulent viruses Virus virulence factors allow it to replicate, modify host defenses, and spread within the host, and they are toxic to the host. They determine whether infection occurs and how severe the resulting viral disease symptoms are. Viruses often require receptor proteins on host cells to which they specifically bind. Typically, these host cell proteins are endocytosed and the bound virus then enters the host cell. Virulent viruses such as HIV, which causes AIDS, have mechanisms for evading host defenses. HIV infects T-helper cells, which leads to a reduction of the adaptive immune response of the host and eventually leads to an immunocompromised state. Death results from opportunistic infections secondary to disruption of the immune system caused by AIDS. Some viral virulence factors confer ability to replicate during the defensive inflammation responses of the host such as during virus-induced fever. Many viruses can exist inside a host for long periods during which little damage is done. Extremely virulent strains can eventually evolve by mutation and natural selection within the virus population inside a host. The term "neurovirulent" is used for viruses such as rabies and herpes simplex which can invade the nervous system and cause disease there. Extensively studied model organisms of virulent viruses include virus T4 and other T-even bacteriophages which infect Escherichia coli and a number of related bacteria. The lytic life cycle of virulent bacteriophages is contrasted by the temperate lifecycle of temperate bacteriophages. See also Host–pathogen interaction Membrane vesicle trafficking Bacterial effector protein Infectious disease Optimal virulence Super-spreader Verotoxin-producing Escherichia coli Virulence factor Antivirulence References Microbiology terms
Mosaic (genetics)
Mosaicism or genetic mosaicism is a condition in which a multicellular organism possesses more than one genetic line as the result of genetic mutation. This means that various genetic lines resulted from a single fertilized egg. Mosaicism is one of several possible causes of chimerism, wherein a single organism is composed of cells with more than one distinct genotype. Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction, anaphase lag, and endoreplication. Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. Mosaicism can also result from a mutation in one cell during development, in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). Somatic mosaicism is not generally inheritable as it does not generally affect germ cells. History In 1929, Alfred Sturtevant studied mosaicism in Drosophila, a genus of fruit fly. Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements and Schultz in 1936 showed that in all cases studied these rearrangements were associated with heterochromatic inert regions, several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 assumed that the structural changes in the chromosomes took place as a result of somatic crossing, as a result of which mutations or small chromosomal rearrangements in somatic cells. Thus the inert region causes an increase in mutation frequency or small chromosomal rearrangements in active segments adjacent to inert regions. In the 1930s, Stern demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis. When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. The term somatic mosaicism was used by CW Cotterman in 1956 in his seminal paper on antigenic variation. In 1944, Belgovskii proposed that mosaicism could not account for certain mosaic expressions caused by chromosomal rearrangements involving heterochromatic inert regions. The associated weakening of biochemical activity led to what he called a genetic chimera. Types Germline mosaicism Germline or gonadal mosaicism is a particular form of mosaicism wherein some gametes—i.e., sperm or oocytes—carry a mutation, but the rest are normal. The cause is usually a mutation that occurred in an early stem cell that gave rise to all or part of the gametes. Somatic mosaicism Somatic mosaicism (also known as clonal mosaicism) occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at first or later cleavages. Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life. Somatic mosaics are common in embryogenesis due to retrotransposition of long interspersed nuclear element-1 (LINE-1 or L1) and Alu transposable elements. In early development, DNA from undifferentiated cell types may be more susceptible to mobile element invasion due to long, unmethylated regions in the genome. Further, the accumulation of DNA copy errors and damage over a lifetime lead to greater occurrences of mosaic tissues in aging humans. 
As longevity has increased dramatically over the last century, human genome may not have had time to adapt to cumulative effects of mutagenesis. Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemia, lymphomas, and solid tumors. Trisomies, monosomies, and related conditions The most common form of mosaicism found through prenatal diagnosis involves trisomies. Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, some cases occur where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells. Generally, this leads to a milder phenotype than in nonmosaic patients with the same disorder. In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes (46, XX/XY). In the fruit fly Drosophila melanogaster, where a fly possessing two X chromosomes is a female and a fly possessing a single X chromosome is a sterile male, a loss of an X chromosome early in embryonic development can result in sexual mosaics, or gynandromorphs. Likewise, a loss of the Y chromosome can result in XY/X mosaic males. An example of this is one of the milder forms of Klinefelter syndrome, called 46,XY/47,XXY mosaic wherein some of the patient's cells contain XY chromosomes, and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes. Also monosomies can present with some form of mosaicism. The only non-lethal full monosomy occurring in humans is the one causing Turner's syndrome. Around 30% of Turner's syndrome cases demonstrate mosaicism, while complete monosomy (45, X) occurs in about 50–60% of cases. Mosaicism need not necessarily be deleterious, though. Revertant somatic mosaicism is a rare recombination event with a spontaneous correction of a mutant, pathogenic allele. In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues such as blood and epithelia that regenerate often. In the skin disorder ichthyosis with confetti, normal skin spots appear early in life and increase in number and size over time. Other endogenous factors can also lead to mosaicism, including mobile elements, DNA polymerase slippage, and unbalanced chromosome segregation. Exogenous factors include nicotine and UV radiation. Somatic mosaics have been created in Drosophila using X‑ray treatment and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics. True mosaicism should not be mistaken for the phenomenon of X-inactivation, where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells. The latter is the case in normal (XX) female mammals, although it is not always visible from the phenotype (as it is in calico cats). However, all multicellular organisms are likely to be somatic mosaics to some extent. Gonosomal mosaicism Gonosomal mosaicism is a type of somatic mosaicism that occurs very early in the organisms development and thus is present within both germline and somatic cells. Somatic mosaicism is not generally inheritable as it does not usually affect germ cells. 
In the instance of gonosomal mosaicism, organisms have the potential to pass the genetic alteration, including to potential offspring because the altered allele is present in both somatic and germline cells. Brain cell mosaicism A frequent type of neuronal genomic mosaicism is copy number variation. Possible sources of such variation were suggested to be incorrect repairs of DNA damage and somatic recombination. Mitotic recombination One basic mechanism that can produce mosaic tissue is mitotic recombination or somatic crossover. It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue that is mosaic depends on where in the tree of cell division the exchange takes place. A phenotypic character called "twin spot" seen in Drosophila is a result of mitotic recombination. However, it also depends on the allelic status of the genes undergoing recombination. Twin spot occurs only if the heterozygous genes are linked in repulsion, i.e. the trans phase. The recombination needs to occur between the centromeres of the adjacent gene. This gives an appearance of yellow patches on the wild-type background in Drosophila. another example of mitotic recombination is the Bloom's syndrome, which happens due to the mutation in the blm gene. The resulting BLM protein is defective. The defect in RecQ, a helicase, facilitates the defective unwinding of DNA during replication, thus is associated with the occurrence of this disease. Use in experimental biology Genetic mosaics are a particularly powerful tool when used in the commonly studied fruit fly, where specially selected strains frequently lose an X or a Y chromosome in one of the first embryonic cell divisions. These mosaics can then be used to analyze such things as courtship behavior, and female sexual attraction. More recently, the use of a transgene incorporated into the Drosophila genome has made the system far more flexible. The flip recombinase (or FLP) is a gene from the commonly studied yeast Saccharomyces cerevisiae that recognizes "flip recombinase target" (FRT) sites, which are short sequences of DNA, and induces recombination between them. FRT sites have been inserted transgenically near the centromere of each chromosome arm of D. melanogaster. The FLP gene can then be induced selectively, commonly using either the heat shock promoter or the GAL4/UAS system. The resulting clones can be identified either negatively or positively. In negatively marked clones, the fly is transheterozygous for a gene encoding a visible marker (commonly the green fluorescent protein) and an allele of a gene to be studied (both on chromosomes bearing FRT sites). After induction of FLP expression, cells that undergo recombination will have progeny homozygous for either the marker or the allele being studied. Therefore, the cells that do not carry the marker (which are dark) can be identified as carrying a mutation. Using negatively marked clones is sometimes inconvenient, especially when generating very small patches of cells, where seeing a dark spot on a bright background is more difficult than a bright spot on a dark background. Creating positively marked clones is possible using the so-called MARCM ("mosaic analysis with a repressible cell marker" system, developed by Liqun Luo, a professor at Stanford University, and his postdoctoral student Tzumin Lee, who now leads a group at Janelia Farm Research Campus. This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. 
However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination, GAL4 is allowed to function, and GFP turns on. This results in the cells of interest being marked brightly in a dark background.

See also
45,X/46,XY mosaicism (X0/XY mosaicism)
Genetic recombination
Genetic recombination (also known as genetic reshuffling) is the exchange of genetic material between different organisms which leads to production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead to a novel set of genetic information that can be further passed on from parents to offspring. Most recombination occurs naturally and can be classified into two types: (1) interchromosomal recombination, occurring through independent assortment of alleles whose loci are on different but homologous chromosomes (random orientation of pairs of homologous chromosomes in meiosis I); & (2) intrachromosomal recombination, occurring through crossing over. During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes. This may be followed by information transfer between the chromosomes. The information transfer may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed) (see SDSA – Synthesis Dependent Strand Annealing pathway in Figure); or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see DHJ pathway in Figure). Recombination may also occur during mitosis in eukaryotes where it ordinarily involves the two sister chromosomes formed after chromosomal replication. In this case, new combinations of alleles are not produced since the sister chromosomes are usually identical. In meiosis and mitosis, recombination occurs between similar molecules of DNA (homologous sequences). In meiosis, non-sister homologous chromosomes pair with each other so that recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism used in DNA repair. Gene conversion – the process during which homologous sequences are made identical also falls under genetic recombination. Genetic recombination and recombinational DNA repair also occurs in bacteria and archaea, which use asexual reproduction. Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development. V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens. Synapsis During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination. Mechanism Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyse the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein, DMC1, is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA. Bacterial recombination In bacteria there is regular genetic recombination, as well as ineffective transfer of genetic material, expressed as unsuccessful transfer or abortive transfer, which is any bacterial DNA transfer of the donor cell to recipients which have set the incoming DNA as part of the genetic material of the recipient. 
Abortive transfer was registered in the following transduction and conjugation. In all cases, the transmitted fragment is diluted by the culture growth. Chromosomal crossover In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population tend to accumulate more deleterious mutations over time than beneficial or reversing mutations. Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information. Because there is a small probability of recombination at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles. Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of the other gene. This is typically used to detect the presence of a disease-causing gene. The recombination frequency between two loci observed is the crossing-over value. It is the frequency of crossing over between two linked gene loci (markers), and depends on the distance between the genetic loci observed. For any fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant, and the same is then true for the crossing-over value which is used in the production of genetic maps. Gene conversion In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses where the 4 products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern). Nonhomologous recombination Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer. 
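The mapping logic described under "Chromosomal crossover" above, in which the recombination frequency between two linked loci serves as a proxy for the distance separating them, can be shown with a short worked example. The offspring counts below are invented for illustration, and the simple frequency-to-centimorgan conversion is only a rough approximation for closely linked loci, since it ignores double crossovers.

```python
# Illustrative only: hypothetical counts from a two-point test cross.
# Map distance in centimorgans (cM) is approximated by the recombination
# frequency expressed as a percentage; the approximation holds for small
# distances, where double crossovers are rare.

parental = {"AB": 412, "ab": 398}      # offspring with parental allele combinations
recombinant = {"Ab": 47, "aB": 43}     # offspring produced by a crossover between the loci

total = sum(parental.values()) + sum(recombinant.values())
recombination_frequency = sum(recombinant.values()) / total

map_distance_cM = recombination_frequency * 100
print(f"recombination frequency = {recombination_frequency:.3f}")
print(f"approximate map distance = {map_distance_cM:.1f} cM")
```

With these hypothetical counts, about 10% of offspring are recombinant, so the two loci would be placed roughly 10 map units (centimorgans) apart.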
In B cells B cells of the immune system perform genetic recombination, called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG. Genetic engineering In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes. Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest. Examples include Restriction enzyme mediated integration, Gibson assembly and Golden Gate Cloning. Recombinational repair DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR). These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans, deficiencies in the gene products necessary for HRR during meiosis likely cause infertility In humans, deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer (see DNA repair-deficiency disorder). In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR. Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host. When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in lambda and T4 bacteriophages, as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection. See also reassortment. Meiotic recombination A molecular model for the mechanism of meiotic recombination presented by Anderson and Sekelsky is outlined in the first figure in this article. Two of the four chromatids present early in meiosis (prophase I) are paired with each other and able to interact. Recombination, in this model, is initiated by a double-strand break (or gap) shown in the DNA molecule (chromatid) at the top of the figure. Other types of DNA damage may also initiate recombination. For instance, an inter-strand cross-link (caused by exposure to a cross-linking agent such as mitomycin C) can be repaired by HRR. Two types of recombinant product are produced. 
Indicated on the right side is a "crossover" (CO) type, where the flanking regions of the chromosomes are exchanged, and on the left side, a "non-crossover" (NCO) type where the flanking regions are not exchanged. The CO type of recombination involves the intermediate formation of two "Holliday junctions" indicated in the lower right of the figure by two X-shaped structures in each of which there is an exchange of single strands between the two participating chromatids. This pathway is labeled in the figure as the DHJ (double-Holliday junction) pathway. The NCO recombinants (illustrated on the left in the figure) are produced by a process referred to as "synthesis dependent strand annealing" (SDSA). Recombination events of the NCO/SDSA type appear to be more common than the CO/DHJ type. The NCO/SDSA pathway contributes little to genetic variation, since the arms of the chromosomes flanking the recombination event remain in the parental configuration. Thus, explanations for the adaptive function of meiosis that focus exclusively on crossing-over are inadequate to explain the majority of recombination events. Achiasmy and heterochiasmy Achiasmy is the phenomenon where autosomal recombination is completely absent in one sex of a species. Achiasmatic chromosomal segregation is well documented in male Drosophila melanogaster. The "Haldane-Huxley rule" states that achiasmy usually occurs in the heterogametic sex. Heterochiasmy occurs when recombination rates differ between the sexes of a species. In humans, each oocyte has on average 41.6 ± 11.3 recombinations, 1.63-fold higher than sperms. This sexual dimorphic pattern in recombination rate has been observed in many species. In mammals, females most often have higher rates of recombination. RNA virus recombination Numerous RNA viruses are capable of genetic recombination when at least two viral genomes are present in the same host cell. Recombination is largely responsible for RNA virus diversity and immune evasion. RNA recombination appears to be a major driving force in determining genome architecture and the course of viral evolution among picornaviridae ((+)ssRNA) (e.g. poliovirus). In the retroviridae ((+)ssRNA)(e.g. HIV), damage in the RNA genome appears to be avoided during reverse transcription by strand switching, a form of recombination. Recombination also occurs in the reoviridae (dsRNA)(e.g. reovirus), orthomyxoviridae ((-)ssRNA)(e.g. influenza virus) and coronaviridae ((+)ssRNA) (e.g. SARS). Recombination in RNA viruses appears to be an adaptation for coping with genome damage. Switching between template strands during genome replication, referred to as copy-choice recombination, was originally proposed to explain the positive correlation of recombination events over short distances in organisms with a DNA genome (see first Figure, SDSA pathway). Recombination can occur infrequently between animal viruses of the same species but of divergent lineages. The resulting recombinant viruses may sometimes cause an outbreak of infection in humans. Especially in coronaviruses, recombination may also occur even among distantly related evolutionary groups (subgenera), due to their characteristic transcription mechanism, that involves subgenomic mRNAs that are formed by template switching. When replicating its (+)ssRNA genome, the poliovirus RNA-dependent RNA polymerase (RdRp) is able to carry out recombination. 
Recombination appears to occur by a copy choice mechanism in which the RdRp switches (+)ssRNA templates during negative strand synthesis. Recombination by RdRp strand switching also occurs in the (+)ssRNA plant carmoviruses and tombusviruses. Recombination appears to be a major driving force in determining genetic variability within coronaviruses, as well as the ability of coronavirus species to jump from one host to another and, infrequently, for the emergence of novel species, although the mechanism of recombination in is unclear. In early 2020, many genomic sequences of Australian SARS‐CoV‐2 isolates have deletions or mutations (29742G>A or 29742G>U; "G19A" or "G19U") in the s2m, suggesting that RNA recombination may have occurred in this RNA element. 29742G("G19"), 29744G("G21"), and 29751G("G28") were predicted as recombination hotspots. During the first months of the COVID-19 pandemic, such a recombination event was suggested to have been a critical step in the evolution of SARS-CoV-2's ability to infect humans. Linkage disequilibrium analysis confirmed that RNA recombination with the 11083G > T mutation also contributed to the increase of mutations among the viral progeny. The findings indicate that the 11083G > T mutation of SARS-CoV-2 spread during Diamond Princess shipboard quarantine and arose through de novo RNA recombination under positive selection pressure. In three patients on the Diamond Princess cruise, two mutations, 29736G > T and 29751G > T (G13 and G28) were located in Coronavirus 3′ stem-loop II-like motif (s2m) of SARS-CoV-2. Although s2m is considered an RNA motif highly conserved in 3' untranslated region among many coronavirus species, this result also suggests that s2m of SARS-CoV-2 is RNA recombination/mutation hotspot. SARS-CoV-2's entire receptor binding motif appeared, based on preliminary observations, to have been introduced through recombination from coronaviruses of pangolins. However, more comprehensive analyses later refuted this suggestion and showed that SARS-CoV-2 likely evolved solely within bats and with little or no recombination. Role of recombination in the origin of life Nowak and Ohtsuki noted that the origin of life (abiogenesis) is also the origin of biological evolution. They pointed out that all known life on earth is based on biopolymers and proposed that any theory for the origin of life must involve biological polymers that act as information carriers and catalysts. Lehman argued that recombination was an evolutionary development as ancient as the origins of life. Smail et al. proposed that in the primordial Earth, recombination played a key role in the expansion of the initially short informational polymers (presumed to be RNA) that were the precursors to life. See also Eukaryote hybrid genome Four-gamete test Homologous recombination Independent assortment Recombination frequency Recombination hotspot Site-specific recombinase technology Site-specific recombination Reassortment V(D)J recombination References External links Animations – homologous recombination: Animations showing several models of homologous recombination The Holliday Model of Genetic Recombination Animated guide to homologous recombination. Cellular processes Modification of genetic information Molecular genetics
Plant reproduction
Plant reproduction is the production of new offspring in plants, which can be accomplished by sexual or asexual reproduction. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Asexual reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur. Asexual reproduction Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation. Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis. Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The distance that a plant can move during vegetative reproduction is limited, though some plants can produce ramets from branching rhizomes or stolons that cover a wide area, often in only a few growing seasons. In a sense, this process is not one of reproduction but one of survival and expansion of biomass of the individual. When an individual organism increases in size via cell multiplication and remains intact, the process is called vegetative growth. However, in vegetative reproduction, the new plants that result are new individuals in almost every respect except genetic. A major disadvantage of vegetative reproduction is the transmission of pathogens from parent to offspring. It is uncommon for pathogens to be transmitted from the plant to its seeds (in sexual reproduction or in apomixis), though there are occasions when it occurs. Seeds generated by apomixis are a means of asexual reproduction, involving the formation and dispersal of seeds that do not originate from the fertilization of the embryos. Hawkweeds (Hieracium), dandelions (Taraxacum), some species of Citrus and Kentucky blue grass (Poa pratensis) all use this form of asexual reproduction. Pseudogamy occurs in some plants that have apomictic seeds, where pollination is often needed to initiate embryo growth, though the pollen contributes no genetic material to the developing offspring. 
Other forms of apomixis occur in plants also, including the generation of a plantlet in replacement of a seed or the generation of bulbils instead of flowers, where new cloned individuals are produced. Structures A rhizome is a modified underground stem serving as an organ of vegetative reproduction; the growing tips of the rhizome can separate as new plants, e.g., polypody, iris, couch grass and nettles. Prostrate aerial stems, called runners or stolons, are important vegetative reproduction organs in some species, such as the strawberry, numerous grasses, and some ferns. Adventitious buds form on roots near the ground surface, on damaged stems (as on the stumps of cut trees), or on old roots. These develop into above-ground stems and leaves. A form of budding called suckering is the reproduction or regeneration of a plant by shoots that arise from an existing root system. Species that characteristically produce suckers include elm (Ulmus) and many members of the rose family such as Rosa, Kerria and Rubus. Bulbous plants such as onion (Allium cepa), hyacinths, narcissi and tulips reproduce vegetatively by dividing their underground bulbs into more bulbs. Other plants like potatoes (Solanum tuberosum) and dahlias reproduce vegetatively from underground tubers. Gladioli and crocuses reproduce vegetatively in a similar way with corms. Gemmae are single cells or masses of cells that detach from plants to form new clonal individuals. These are common in Liverworts and mosses and in the gametophyte generation of some filmy fern. They are also present in some Club mosses such as Huperzia lucidula . They are also found in some higher plants such as species of Drosera. Usage The most common form of plant reproduction used by people is seeds, but a number of asexual methods are used which are usually enhancements of natural processes, including: cutting, grafting, budding, layering, division, sectioning of rhizomes, roots, tubers, bulbs, stolons, tillers, etc., and artificial propagation by laboratory tissue cloning. Asexual methods are most often used to propagate cultivars with individual desirable characteristics that do not come true from seed. Fruit tree propagation is frequently performed by budding or grafting desirable cultivars (clones), onto rootstocks that are also clones, propagated by stooling. In horticulture, a cutting is a branch that has been cut off from a mother plant below an internode and then rooted, often with the help of a rooting liquid or powder containing hormones. When a full root has formed and leaves begin to sprout anew, the clone is a self-sufficient plant, genetically identical. Examples include cuttings from the stems of blackberries (Rubus occidentalis), African violets (Saintpaulia), verbenas (Verbena) to produce new plants. A related use of cuttings is grafting, where a stem or bud is joined onto a different stem. Nurseries offer for sale trees with grafted stems that can produce four or more varieties of related fruits, including apples. The most common usage of grafting is the propagation of cultivars onto already rooted plants, sometimes the rootstock is used to dwarf the plants or protect them from root damaging pathogens. Since vegetatively propagated plants are clones, they are important tools in plant research. When a clone is grown in various conditions, differences in growth can be ascribed to environmental effects instead of genetic differences. 
Sexual reproduction Sexual reproduction involves two fundamental processes: meiosis, which rearranges the genes and reduces the number of chromosomes, and fertilization, which restores the chromosome to a complete diploid number. In between these two processes, different types of plants and algae vary, but many of them, including all land plants, undergo alternation of generations, with two different multicellular structures (phases), a gametophyte and a sporophyte. The evolutionary origin and adaptive significance of sexual reproduction are discussed in the pages Evolution of sexual reproduction and Origin and function of meiosis. The gametophyte is the multicellular structure (plant) that is haploid, containing a single set of chromosomes in each cell. The gametophyte produces male or female gametes (or both), by a process of cell division, called mitosis. In vascular plants with separate gametophytes, female gametophytes are known as mega gametophytes (mega=large, they produce the large egg cells) and the male gametophytes are called micro gametophytes (micro=small, they produce the small sperm cells). The fusion of male and female gametes (fertilization) produces a diploid zygote, which develops by mitotic cell divisions into a multicellular sporophyte. The mature sporophyte produces spores by meiosis, sometimes referred to as reduction division because the chromosome pairs are separated once again to form single sets. In mosses and liverworts, the gametophyte is relatively large, and the sporophyte is a much smaller structure that is never separated from the gametophyte. In ferns, gymnosperms, and flowering plants (angiosperms), the gametophytes are relatively small and the sporophyte is much larger. In gymnosperms and flowering plants the megagametophyte is contained within the ovule (that may develop into a seed) and the microgametophyte is contained within a pollen grain. History of sexual reproduction of plants Unlike animals, plants are immobile, and cannot seek out sexual partners for reproduction. In the evolution of early plants, abiotic means, including water and much later, wind, transported sperm for reproduction. The first plants were aquatic, as described in the page Evolutionary history of plants, and released sperm freely into the water to be carried with the currents. Primitive land plants such as liverworts and mosses had motile sperm that swam in a thin film of water or were splashed in water droplets from the male reproduction organs onto the female organs. As taller and more complex plants evolved, modifications in the alternation of generations evolved. In the Paleozoic era progymnosperms reproduced by using spores dispersed on the wind. The seed plants including seed ferns, conifers and cordaites, which were all gymnosperms, evolved 350 million years ago. They had pollen grains that contained the male gametes for protection of the sperm during the process of transfer from the male to female parts. It is believed that insects fed on the pollen, and plants thus evolved to use insects to actively carry pollen from one plant to the next. Seed producing plants, which include the angiosperms and the gymnosperms, have a heteromorphic alternation of generations with large sporophytes containing much-reduced gametophytes. Angiosperms have distinctive reproductive organs called flowers, with carpels, and the female gametophyte is greatly reduced to a female embryo sac, with as few as eight cells. 
Each pollen grain contains a greatly reduced male gametophyte consisting of three or four cells. The sperm of seed plants are non-motile, except for two older groups of plants, the Cycadophyta and the Ginkgophyta, which have flagella. Flowering plants Flowering plants, the dominant plant group, reproduce both by sexual and asexual means. Their distinguishing feature is that their reproductive organs are contained in flowers. Sexual reproduction in flowering plants involves the production of separate male and female gametophytes that produce gametes. The anther produces pollen grains that contain male gametophytes. The pollen grains attach to the stigma on top of a carpel, in which the female gametophytes (inside ovules) are located. Plants may either self-pollinate or cross-pollinate. The transfer of pollen (the male gametophytes) to the female stigmas is called pollination. After pollination occurs, the pollen grain germinates to form a pollen tube that grows through the carpel's style and transports male nuclei to the ovule to fertilize the egg cell and central cell within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus a binucleate female cell) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The fertilized ovules develop into seeds within a fruit formed from the ovary. When the seeds are ripe, they may be dispersed together with the fruit or freed from it by various means to germinate and grow into the next generation. Pollination Plants that use insects or other animals to move pollen from one flower to the next have developed greatly modified flower parts to attract pollinators and to facilitate the movement of pollen from one flower to the insect and from the insect to the next flower. Flowers of wind-pollinated plants tend to lack petals and/or sepals; typically large amounts of pollen are produced, and pollination often occurs early in the growing season, before leaves can interfere with the dispersal of the pollen. Many trees and all grasses and sedges are wind-pollinated. Plants have a number of different means to attract pollinators, including color, scent, heat, nectar glands, edible pollen and flower shape. Along with modifications involving the above structures, two other conditions play a very important role in the sexual reproduction of flowering plants: the timing of flowering and the size or number of flowers produced. Some plant species have a few large, very showy flowers, while others produce many small flowers; flowers are often collected together into large inflorescences to maximize their visual effect, becoming more noticeable to passing pollinators. Flowers are attraction strategies, and sexual expressions are functional strategies used to produce the next generation of plants; pollinators and plants have co-evolved, often to extraordinary degrees, very often to mutual benefit. The largest family of flowering plants is the orchids (Orchidaceae), estimated by some specialists to include up to 35,000 species, which often have highly specialized flowers that attract particular insects for pollination. The stamens are modified to produce pollen in clusters called pollinia, which become attached to insects that crawl into the flower. The flower shapes may force insects to pass by the pollen, which is "glued" to the insect.
Some orchids are even more highly specialized, with flower shapes that mimic the shape of insects to attract them to attempt to 'mate' with the flowers; a few even have scents that mimic insect pheromones. Another large group of flowering plants is the Asteraceae or sunflower family, with close to 22,000 species, which also have highly modified inflorescences composed of many individual flowers called florets. Heads with florets of one sex (when the flowers are pistillate or functionally staminate) or made up of all bisexual florets are called homogamous and can include discoid and liguliflorous type heads. Some radiate heads may be homogamous too. Plants with heads that have florets of two or more sexual forms are called heterogamous and include radiate and disciform head forms. Ferns Ferns typically produce large diploid sporophytes with stems, roots, and leaves. On fertile leaves, sporangia are produced, grouped together in sori and often protected by an indusium. If the spores are deposited onto a suitable moist substrate, they germinate to produce short, thin, free-living gametophytes called prothalli that are typically heart-shaped, small and green in color. The gametophytes produce both motile sperm in the antheridia and egg cells in separate archegonia. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia, where they fertilize the egg. To promote outcrossing or cross-fertilization, the sperm are released before the eggs are receptive, making it more likely that the sperm will fertilize the eggs of a different thallus. The zygote formed after fertilization grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. Other plants with similar reproductive strategies include Psilotum, Lycopodium, Selaginella and Equisetum. Bryophytes The bryophytes, which include liverworts, hornworts and mosses, can reproduce both sexually and vegetatively. The life cycles of these plants start with haploid spores that grow into the dominant form, which is a multicellular haploid gametophyte with thalloid or leaf-like structures that photosynthesize. The gametophyte is the most commonly known phase of the plant. Bryophytes are typically small plants that grow in moist locations and, like ferns, have motile flagellated sperm that must swim to the egg, and therefore need water to facilitate sexual reproduction. Bryophytes show considerable variation in their reproductive structures, and a basic outline is as follows: haploid gametes are produced in antheridia and archegonia by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia, swim to them in a film of water, and fertilize the egg cells, thus producing zygotes that are diploid. The zygote divides repeatedly by mitotic division and grows into a diploid sporophyte. The resulting multicellular diploid sporophyte produces spore capsules called sporangia. The spores are produced by meiosis, and when ripe, the capsules burst open to release the spores. In some species each gametophyte is one sex, while other species may be monoicous, producing both antheridia and archegonia on the same gametophyte, which is thus hermaphrodite.
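The chromosome bookkeeping behind this alternation of generations, in both ferns and bryophytes, can be summarized with standard ploidy notation (n = one chromosome set; this schematic is an editorial illustration, not taken from the source text):

\underbrace{n}_{\text{sperm}} + \underbrace{n}_{\text{egg}} \;\xrightarrow{\text{fertilization}}\; \underbrace{2n}_{\text{zygote / sporophyte}} \;\xrightarrow{\text{meiosis}}\; \underbrace{n}_{\text{spores}} \;\xrightarrow{\text{mitosis}}\; \underbrace{n}_{\text{gametophyte and its gametes}}

Fertilization is the only step that doubles the chromosome number and meiosis is the only step that halves it; all other growth proceeds by mitosis and leaves the ploidy unchanged.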
Dispersal and offspring care One of the outcomes of plant reproduction is the generation of seeds, spores, and fruits that allow plants to move to new locations or new habitats. Plants do not have nervous systems or any will for their actions. Even so, scientists are able to observe mechanisms that help their offspring thrive as they grow. All organisms have mechanisms to increase survival in offspring. Offspring care is observed in Mammillaria hernandezii, a small cactus found in Mexico. A cactus is a type of succulent, meaning it retains water when it is available for future droughts. M. hernandezii also stores a portion of its seeds in its stem, and releases the rest to grow. This can be advantageous for many reasons. By delaying the release of some of its seeds, the cactus can protect them from potential threats from insects, herbivores, or mold caused by micro-organisms. A study found that the presence of adequate water in the environment causes M. hernandezii to release more seeds to allow for germination. The plant was able to perceive a water potential gradient in its surroundings and act by giving its seeds a better chance in this more favorable environment. This evolutionary strategy gives a better potential outcome for seed germination. See also Asexual reproduction Meiosis References External links Simple Video Tutorial on Reproduction in Plant Reproduction Fertility Plant sexuality
Protein metabolism
Protein metabolism denotes the various biochemical processes responsible for the synthesis of proteins and amino acids (anabolism), and the breakdown of proteins by catabolism. The steps of protein synthesis include transcription, translation, and post translational modifications. During transcription, RNA polymerase transcribes a coding region of the DNA in a cell producing a sequence of RNA, specifically messenger RNA (mRNA). This mRNA sequence contains codons: 3 nucleotide long segments that code for a specific amino acid. Ribosomes translate the codons to their respective amino acids. In humans, non-essential amino acids are synthesized from intermediates in major metabolic pathways such as the Citric Acid Cycle. Essential amino acids must be consumed and are made in other organisms. The amino acids are joined by peptide bonds making a polypeptide chain. This polypeptide chain then goes through post translational modifications and is sometimes joined with other polypeptide chains to form a fully functional protein. Dietary proteins are first broken down to individual amino acids by various enzymes and hydrochloric acid present in the gastrointestinal tract. These amino acids are absorbed into the bloodstream to be transported to the liver and onward to the rest of the body. Absorbed amino acids are typically used to create functional proteins, but may also be used to create energy. They can also be converted into glucose. This glucose can then be converted to triglycerides and stored in fat cells. Proteins can be broken down by enzymes known as peptidases or can break down as a result of denaturation. Proteins can denature in environmental conditions the protein is not made for. Protein synthesis Protein anabolism is the process by which proteins are formed from amino acids. It relies on five processes: amino acid synthesis, transcription, translation, post translational modifications, and protein folding. Proteins are made from amino acids. In humans, some amino acids can be synthesized using already existing intermediates. These amino acids are known as non-essential amino acids. Essential amino acids require intermediates not present in the human body. These intermediates must be ingested, mostly from eating other organisms. Amino Acid Synthesis Polypeptide synthesis Transcription In transcription, RNA polymerase reads a DNA strand and produces an mRNA strand that can be further translated. In order to initiate transcription, the DNA segment that is to be transcribed must be accessible (i.e. it cannot be tightly packed). Once the DNA segment is accessible, the RNA polymerase can begin to transcribe the coding DNA strand by pairing RNA nucleotides to the template DNA strand. During the initial transcription phase, the RNA polymerase searches for a promoter region on the DNA template strand. Once the RNA polymerase binds to this region, it begins to “read” the template DNA strand in the 3’ to 5’ direction. RNA polymerase attaches RNA bases complementary to the template DNA strand (Uracil will be used instead of Thymine). The new nucleotide bases are bonded to each other covalently. The new bases eventually dissociate from the DNA bases but stay linked to each other, forming a new mRNA strand. This mRNA strand is synthesized in the 5’ to 3’ direction. Once the RNA reaches a terminator sequence, it dissociates from the DNA template strand and terminates the mRNA sequence as well. Transcription is regulated in the cell via transcription factors. 
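To make the template-reading and codon-lookup steps described above concrete, here is a minimal, illustrative Python sketch; the example sequence and the reduced codon table are hypothetical choices by the editor, not data from this article. It transcribes a template DNA strand read 3'→5' into an mRNA built 5'→3' (pairing A→U, T→A, G→C, C→G), then reads the mRNA codon by codon until a stop codon.

# Illustrative sketch only: a toy transcription + translation pipeline.
# The example sequence and the reduced codon table below are hypothetical;
# the full standard genetic code has 64 codons.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Small, hand-picked subset of the standard genetic code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "CCG": "Pro",
    "AAA": "Lys", "GAA": "Glu", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(template_3_to_5):
    """Pair RNA bases against a template DNA strand read 3'->5';
    the returned mRNA string is written 5'->3'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

def translate(mrna_5_to_3):
    """Read the mRNA in 3-nucleotide codons and collect amino acids
    until a stop codon is reached."""
    peptide = []
    for i in range(0, len(mrna_5_to_3) - 2, 3):
        residue = CODON_TABLE[mrna_5_to_3[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

template = "TACGGCTTTATT"          # hypothetical template strand, 3'->5'
mrna = transcribe(template)        # -> "AUGCCGAAAUAA"
print(translate(mrna))             # -> ['Met', 'Pro', 'Lys']

In a real cell the ribosome and charged tRNAs perform this lookup physically, and translation begins at a start codon located by the initiation machinery rather than at whatever base happens to come first; the sketch also ignores regulation, reading-frame selection and post-translational modification, which are discussed below.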
Transcription factors are proteins that bind to regulatory sequences in the DNA strand such as promoter regions or operator regions. Proteins bound to these regions can either directly halt or allow RNA polymerase to read the DNA strand, or can signal other proteins to halt or allow RNA polymerase reading. Translation During translation, ribosomes convert a sequence of mRNA (messenger RNA) to an amino acid sequence. Each 3-nucleotide-long segment of mRNA is a codon, which corresponds to one amino acid or a stop signal. Amino acids can have multiple codons that correspond to them. Ribosomes do not directly attach amino acids to mRNA codons. They must utilize tRNAs (transfer RNAs) as well. Transfer RNAs can bind to amino acids and contain an anticodon that can hydrogen-bond to an mRNA codon. The process of binding an amino acid to a tRNA is known as tRNA charging. Here, the enzyme aminoacyl-tRNA synthetase catalyzes two reactions. In the first one, it attaches an AMP molecule (cleaved from ATP) to the amino acid. The second reaction cleaves the aminoacyl-AMP, providing the energy to join the amino acid to the tRNA molecule. Ribosomes have two subunits, one large and one small. These subunits surround the mRNA strand. The larger subunit contains three binding sites: A (aminoacyl), P (peptidyl), and E (exit). After translational initiation (which is different in prokaryotes and eukaryotes), the ribosome enters the elongation period, which follows a repetitive cycle. First a tRNA with the correct amino acid enters the A site. The ribosome transfers the peptide from the tRNA in the P site to the new amino acid on the tRNA in the A site. The tRNA from the P site is then shifted into the E site, where it is ejected. This cycle continues until the ribosome reaches a stop codon or receives a signal to stop. A peptide bond forms between the amino acid attached to the tRNA in the P site and the amino acid attached to the tRNA in the A site. The formation of a peptide bond requires an input of energy. The two reacting groups are the alpha amino group of one amino acid and the alpha carboxyl group of the other amino acid. A by-product of this bond formation is the release of water (the amino group donates a proton while the carboxyl group donates a hydroxyl). Translation can be downregulated by miRNAs (microRNAs). These RNA strands can cleave mRNA strands they are complementary to and will thus stop translation. Translation can also be regulated via helper proteins. For example, a protein called eukaryotic initiation factor-2 (eIF-2) can bind to the smaller subunit of the ribosome, starting translation. When eIF-2 is phosphorylated, it cannot bind to the ribosome and translation is halted. Post-translational Modifications Once the peptide chain is synthesized, it still must be modified. Post-translational modifications can occur before protein folding or after. Common biological methods of modifying peptide chains after translation include methylation, phosphorylation, and disulfide bond formation. Methylation often occurs to arginine or lysine and involves adding a methyl group to a nitrogen (replacing a hydrogen). The R groups on these amino acids can be methylated multiple times, as long as the number of bonds to the nitrogen does not exceed four. Methylation reduces the ability of these amino acids to form hydrogen bonds, so methylated arginine and lysine have different properties than their standard counterparts.
Phosphorylation often occurs to serine, threonine, and tyrosine and involves replacing a hydrogen on the alcohol group at the terminus of the R group with a phosphate group. This adds a negative charge on the R groups and will thus change how the amino acids behave in comparison to their standard counterparts. Disulfide bond formation is the creation of disulfide bridges (covalent bonds) between two cysteine amino acids in a chain which adds stability to the folded structure. Protein folding A polypeptide chain in the cell does not have to stay linear; it can become branched or fold in on itself. Polypeptide chains fold in a particular manner depending on the solution they are in. The fact that all amino acids contain R groups with different properties is the main reason proteins fold. In a hydrophilic environment such as cytosol, the hydrophobic amino acids will concentrate at the core of the protein, while the hydrophilic amino acids will be on the exterior. This is entropically favorable since water molecules can move much more freely around hydrophilic amino acids than hydrophobic amino acids. In a hydrophobic environment, the hydrophilic amino acids will concentrate at the core of the protein, while the hydrophobic amino acids will be on the exterior. Since the new interactions between the hydrophilic amino acids are stronger than hydrophobic-hydrophilic interactions, this is enthalpically favorable. Once a polypeptide chain is fully folded, it is called a protein. Often many subunits will combine to make a fully functional protein although physiological proteins do exist that contain only one polypeptide chain. Proteins may also incorporate other molecules such as the heme group in hemoglobin, a protein responsible for carrying oxygen in the blood. Protein breakdown Protein catabolism is the process by which proteins are broken down to their amino acids. This is also called proteolysis and can be followed by further amino acid degradation. Protein catabolism via enzymes Proteases Originally thought to only disrupt enzymatic reactions, proteases (also known as peptidases) actually help with catabolizing proteins through cleavage and creating new proteins that were not present before. Proteases also help to regulate metabolic pathways. One way they do this is to cleave enzymes in pathways that do not need to be running (i.e. gluconeogenesis when blood glucose concentrations are high). This helps to conserve as much energy as possible and to avoid futile cycles. Futile cycles occur when the catabolic and anabolic pathways are both in effect at the same time and rate for the same reaction. Since the intermediates being created are consumed, the body makes no net gain. Energy is lost through futile cycles. Proteases prevent this cycle from occurring by altering the rate of one of the pathways, or by cleaving a key enzyme, they can stop one of the pathways. Proteases are also nonspecific when binding to substrate, allowing for great amounts of diversity inside the cells and other proteins, as they can be cleaved much easier in an energy efficient manner. Because many proteases are nonspecific, they are highly regulated in the cell. Without regulation, proteases will destroy many proteins which are essential to physiological processes. One way the body regulates proteases is through protease inhibitors. Protease inhibitors can be other proteins, small peptides, or molecules. There are two types of protease inhibitors: reversible and irreversible. 
Reversible protease inhibitors form non-covalent interactions with the protease, limiting its functionality. They can be competitive inhibitors, uncompetitive inhibitors, or noncompetitive inhibitors. Competitive inhibitors compete with the peptide to bind to the protease active site. Uncompetitive inhibitors bind to the protease while the peptide is bound but do not let the protease cleave the peptide bond. Noncompetitive inhibitors can do both. Irreversible protease inhibitors covalently modify the active site of the protease so it cannot cleave peptides. Exopeptidases Exopeptidases are enzymes that cleave amino acids from the ends of a polypeptide chain, mostly through the addition of water. Exopeptidase enzymes exist in the small intestine. These enzymes fall into two classes: aminopeptidases, which are brush border enzymes, and carboxypeptidases, which come from the pancreas. Aminopeptidases are enzymes that remove amino acids from the amino terminus of a protein. They are present in all lifeforms and are crucial for survival, since they perform many cellular tasks in order to maintain stability. This form of peptidase is a zinc metalloenzyme, and it is inhibited by transition state analogs. Such an analog is similar to the actual transition state, so it can make the enzyme bind to it instead of the actual transition state, thus preventing substrate binding and decreasing reaction rates. Carboxypeptidases cleave at the carboxyl end of the protein. While they can catabolize proteins, they are more often used in post-translational modifications. Endopeptidases Endopeptidases are enzymes that add water to an internal peptide bond in a peptide chain and break that bond. Three common digestive endopeptidases are pepsin, which comes from the stomach, and trypsin and chymotrypsin, which come from the pancreas. Chymotrypsin performs a hydrolysis reaction that cleaves after aromatic residues. The main amino acids involved are serine, histidine, and aspartic acid. They all play a role in cleaving the peptide bond. These three amino acids are known as the catalytic triad, meaning all three must be present for the enzyme to function properly. Trypsin cleaves after long, positively charged residues and has a negatively charged binding pocket at the active site. Both trypsin and chymotrypsin are produced as zymogens, meaning they are initially found in an inactive state and become activated after cleavage through a hydrolysis reaction. Non-covalent interactions, such as hydrogen bonding between the peptide backbone and the catalytic triad, help increase reaction rates, allowing these peptidases to cleave many peptides efficiently. Protein catabolism via environmental changes pH Cellular proteins are kept at a relatively constant pH in order to prevent changes in the protonation state of amino acids. If the pH drops, some amino acids in the polypeptide chain can become protonated, if the pKa of their R groups is higher than the new pH. Protonation can change the charge these R groups carry. If the pH rises, some amino acids in the chain can become deprotonated (if the pKa of the R group is lower than the new pH). This also changes the R group charge. Since many amino acids interact with other amino acids based on electrostatic attraction, changing the charge can break these interactions. The loss of these interactions alters the protein's structure, but most importantly it alters the protein's function, which can be beneficial or detrimental. A significant change in pH may even disrupt many of the interactions the amino acids make and denature (unfold) the protein.
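The relationship between pH, pKa and protonation state sketched above is usually quantified with the standard Henderson–Hasselbalch equation (a textbook relation added here for illustration; it is not part of the original article). For an ionizable R group HA ⇌ A⁻ + H⁺ with acid dissociation constant pKa:

\mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\!\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} \qquad\Longleftrightarrow\qquad \frac{[\mathrm{A^-}]}{[\mathrm{HA}]} \;=\; 10^{\,\mathrm{pH}-\mathrm{p}K_a}

When the ambient pH falls below the R group's pKa the protonated form HA dominates, and when the pH rises above the pKa the deprotonated form A⁻ dominates, which is exactly the charge change described above; for example, at pH = pKa − 2 roughly 99% of the group is protonated.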
Temperature As the temperature in the environment increases, molecules move faster. Hydrogen bonds and hydrophobic interactions are important stabilizing forces in proteins. If the temperature rises and molecules containing these interactions are moving too fast, the interactions become compromised or even break. At high temperatures, these interactions cannot form, and a functional protein is denatured. However, denaturation depends on two factors: the type of protein and the amount of heat applied. The amount of heat applied determines whether this change in the protein is permanent or whether the protein can return to its original form. References Protein metabolism
Biomass (energy)
In the context of energy production, biomass is matter from recently living (but now dead) organisms which is used for bioenergy production. Examples include wood, wood residues, energy crops, agricultural residues including straw, and organic waste from industry and households. Wood and wood residues are the largest biomass energy source today. Wood can be used as a fuel directly or processed into pellet fuel or other forms of fuels. Other plants can also be used as fuel, for instance maize, switchgrass, miscanthus and bamboo. The main waste feedstocks are wood waste, agricultural waste, municipal solid waste, and manufacturing waste. Upgrading raw biomass to higher grade fuels can be achieved by different methods, broadly classified as thermal, chemical, or biochemical. The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide. Those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will remove carbon dioxide from the air as they grow. However, the farming of biomass feedstocks can reduce biodiversity, degrade soils and take land out of food production. It may also consume water for irrigation and fertilisers. Terminology Biomass (in the context of energy generation) is matter from recently living (but now dead) organisms which is used for bioenergy production. There are variations in how such biomass for energy is defined, e.g. only from plants, or from plants and algae, or from plants and animals. The vast majority of biomass used for bioenergy does come from plants. Bioenergy is a type of renewable energy with potential to assist with climate change mitigation. Some people use the terms biomass and biofuel interchangeably, but it is now more common to consider biofuel to be a liquid or gaseous fuel used for transportation, as defined by government authorities in the US and EU. From that perspective, biofuel is a subset of biomass. The European Union's Joint Research Centre defines solid biofuel as raw or processed organic matter of biological origin used for energy, such as firewood, wood chips, and wood pellets. Types and uses Different types of biomass are used for different purposes: Primary biomass sources that are appropriate for heat or electricity generation but not for transport include wood, wood residues, wood pellets, agricultural residues, and organic waste. Biomass that is processed into transport fuels can come from corn, sugar cane, and soy. Biomass is categorized either as biomass harvested directly for energy (primary biomass), or as residues and waste (secondary biomass). Biomass harvested directly for energy The main biomass types harvested directly for energy are wood, some food crops and all perennial energy crops. One third of the global forest area of 4 billion hectares is used for wood production or other commercial purposes, and forests provide 85% of all biomass used for energy globally. In the EU, forests provide 60% of all biomass used for energy, with wood residues and waste being the largest source. Woody biomass used for energy often consists of trees and bushes harvested for traditional cooking and heating purposes, particularly in developing countries, with 25 EJ per year used globally for these purposes. This practice is highly polluting.
The World Health Organization (WHO) estimates that cooking-related pollution causes 3.8 million annual deaths. The United Nations Sustainable Development Goal 7 aims for the traditional use of biomass for cooking to be phased out by 2030. Short-rotation coppices and short-rotation forests are also harvested directly for energy, providing 4 EJ of energy, and are considered sustainable. The potential for these crops and perennial energy crops to provide at least 25 EJ annually by 2050 is estimated. Food crops harvested for energy include sugar-producing crops (such as sugarcane), starch-producing crops (such as maize), and oil-producing crops (such as rapeseed). Sugarcane is a perennial crop, while corn and rapeseed are annual crops. Sugar- and starch-producing crops are used to make bioethanol, and oil-producing crops are used to make biodiesel. The United States is the largest producer of bioethanol, while the European Union is the largest producer of biodiesel. The global production of bioethanol and biodiesel provides 2.2 and 1.5 EJ of energy per year, respectively. Biofuel made from food crops harvested for energy is also known as "first-generation" or "traditional" biofuel and has relatively low emission savings. The IPCC estimates that between 0.32 and 1.4 billion hectares of marginal land are suitable for bioenergy worldwide. Biomass in the form of residues and waste Residues and waste are by-products from biological material harvested mainly for non-energy purposes. The most important by-products are wood residues, agricultural residues and municipal/industrial waste: Wood residues are by-products from forestry operations or from the wood processing industry. Had the residues not been collected and used for bioenergy, they would have decayed (and therefore produced emissions) on the forest floor or in landfills, or been burnt (and produced emissions) at the side of the road in forests or outside wood processing facilities. The by-products from forestry operations are called logging residues or forest residues, and consist of tree tops, branches, stumps, damaged or dying or dead trees, irregular or bent stem sections, thinnings (small trees that are cleared away in order to help the bigger trees grow large), and trees removed to reduce wildfire risk. The extraction level of logging residues differ from region to region, but there is an increasing interest in using this feedstock, since the sustainable potential is large (15 EJ annually). 68% of the total forest biomass in the EU consists of wood stems, and 32% consists of stumps, branches and tops. The by-products from the wood processing industry are called wood processing residues and consist of cut offs, shavings, sawdust, bark, and black liquor. Wood processing residues have a total energy content of 5.5 EJ annually. Wood pellets are mainly made from wood processing residues, and have a total energy content of 0.7 EJ. Wood chips are made from a combination of feedstocks, and have a total energy content of 0.8 EJ. The energy content in agricultural residues used for energy is approximately 2 EJ. However, agricultural residues has a large untapped potential. The energy content in the global production of agricultural residues has been estimated to 78 EJ annually, with the largest share from straw (51 EJ). Others have estimated between 18 and 82 EJ. The use of agricultural residues and waste that is both sustainable and economically feasible is expected to increase to between 37 and 66 EJ in 2030. 
Municipal waste produced 1.4 EJ and industrial waste 1.1 EJ. Wood waste from cities and industry also produced 1.1 EJ. The sustainable potential for wood waste has been estimated to 2–10 EJ. IEA recommends a dramatic increase in waste utilization to 45 EJ annually in 2050. Biomass conversion Raw biomass can be upgraded into better and more practical fuel simply by compacting it (e.g. wood pellets), or by different conversions broadly classified as thermal, chemical, and biochemical. Biomass conversion reduces the transport costs as it is cheaper to transport high density commodities. Thermal conversion Thermal upgrading produces solid, liquid or gaseous fuels, with heat as the dominant conversion driver. The basic alternatives are torrefaction, pyrolysis, and gasification, these are separated principally by how far the chemical reactions involved are allowed to proceed. The advancement of the chemical reactions is mainly controlled by how much oxygen is available, and the conversion temperature. Torrefaction is a mild form of pyrolysis where organic materials are heated to 400–600 °F (200–300 °C) in a no–to–low oxygen environment. The heating process removes (via gasification) the parts of the biomass that has the lowest energy content, while the parts with the highest energy content remain. That is, approximately 30% of the biomass is converted to gas during the torrefaction process, while 70% remains, usually in the form of compacted pellets or briquettes. This solid product is water resistant, easy to grind, non-corrosive, and it contains approximately 85% of the original biomass energy. Basically the mass part has shrunk more than the energy part, and the consequence is that the calorific value of torrefied biomass increases significantly, to the extent that it can compete with coals used for electricity generation (steam/thermal coals). The energy density of the most common steam coals today is 22–26 GJ/t. There are other less common, more experimental or proprietary thermal processes that may offer benefits, such as hydrothermal upgrading (sometimes called "wet" torrefaction.) The hydrothermal upgrade path can be used for both low and high moisture content biomass, e.g. aqueous slurries. Pyrolysis entails heating organic materials to 800–900 °F (400–500 °C) in the near complete absence of oxygen. Biomass pyrolysis produces fuels such as bio-oil, charcoal, methane, and hydrogen. Hydrotreating is used to process bio-oil (produced by fast pyrolysis) with hydrogen under elevated temperatures and pressures in the presence of a catalyst to produce renewable diesel, renewable gasoline, and renewable jet fuel. Gasification entails heating organic materials to 1,400–1700 °F (800–900 °C) with injections of controlled amounts of oxygen and/or steam into the vessel to produce a carbon monoxide and hydrogen rich gas called synthesis gas or syngas. Syngas can be used as a fuel for diesel engines, for heating, and for generating electricity in gas turbines. It can also be treated to separate the hydrogen from the gas, and the hydrogen can be burned or used in fuel cells. The syngas can be further processed to produce liquid fuels using the Fischer-Tropsch synthesis process. Chemical conversion A range of chemical processes may be used to convert biomass into other forms, such as to produce a fuel that is more practical to store, transport and use, or to exploit some property of the process itself. 
Many of these processes are based in large part on similar coal-based processes, such as the Fischer-Tropsch synthesis. A chemical conversion process known as transesterification is used for converting vegetable oils, animal fats, and greases into fatty acid methyl esters (FAME), which are used to produce biodiesel. Biochemical conversion Biochemical processes have developed in nature to break down the molecules of which biomass is composed, and many of these can be harnessed. In most cases, microorganisms are used to perform the conversion. The processes are called anaerobic digestion, fermentation, and composting. Fermentation converts biomass into bioethanol, and anaerobic digestion converts biomass into renewable natural gas (biogas). Bioethanol is used as a vehicle fuel. Renewable natural gas—also called biogas or biomethane—is produced in anaerobic digesters at sewage treatment plants and at dairy and livestock operations. It also forms in and may be captured from solid waste landfills. Properly treated renewable natural gas has the same uses as fossil fuel natural gas. Climate impacts Short-term vs long-term climate benefits Regarding the issue of climate consequences for modern bioenergy, the IPCC states: "Life-cycle GHG emissions of modern bioenergy alternatives are usually lower than those for fossil fuels." Consequently, most of the IPCC's GHG mitigation pathways include substantial deployment of bioenergy technologies. Some research groups state that even if the European and North American forest carbon stock is increasing, it simply takes too long for harvested trees to grow back. Bioenergy from sources with high payback and parity times takes a long time to have an impact on climate change mitigation. They therefore suggest that the EU should adjust its sustainability criteria so that only renewable energy with carbon payback times of less than 10 years is defined as sustainable, for instance wind, solar, biomass from wood residues and tree thinnings that would otherwise be burnt or decompose relatively fast, and biomass from short rotation coppicing (SRC). The IPCC states: "While individual stands in a forest may be either sources or sinks, the forest carbon balance is determined by the sum of the net balance of all stands." The IPCC also states that the only universally applicable approach to carbon accounting is the one that accounts for both carbon emissions and carbon removals (absorption) for managed lands (e.g. forest landscapes). When the total is calculated, natural disturbances like fires and insect infestations are subtracted, and what remains is the human influence. IEA Bioenergy states that an exclusive focus on the short term makes it harder to achieve efficient carbon mitigation in the long term, and compares investments in new bioenergy technologies with investments in other renewable energy technologies that only provide emission reductions after 2030, for instance the scaling-up of battery manufacturing or the development of rail infrastructure. Forest carbon emission avoidance strategies give a short-term mitigation benefit, but the long-term benefits from sustainable forestry activities provide ongoing forest product and energy resources. Pathways with limited or no bioenergy lead to increased climate change or shift bioenergy's mitigation load to other sectors; in addition, mitigation costs increase.
Carbon accounting system boundaries Carbon positive scenarios are likely to be net emitters of CO2, carbon negative projects are net absorbers of CO2, while carbon neutral projects balance emissions and absorption equally. It is common to include alternative scenarios (also called "reference scenarios" or "counterfactuals") for comparison. The alternative scenarios range from scenarios with only modest changes compared to the existing project, all the way to radically different ones (i.e. forest protection or "no-bioenergy" counterfactuals.) Generally, the difference between scenarios is seen as the actual carbon mitigation potential of the scenarios. In addition to the choice of alternative scenario, other choices has to be made as well. The so-called "system boundaries" determine which carbon emissions/absorptions that will be included in the actual calculation, and which that will be excluded. System boundaries include temporal, spatial, efficiency-related and economic boundaries. For example, the actual carbon intensity of bioenergy varies with biomass production techniques and transportation lengths. Temporal system boundaries The temporal boundaries define when to start and end carbon counting. Sometimes "early" events are included in the calculation, for instance carbon absorption going on in the forest before the initial harvest. Sometimes "late" events are included as well, for instance emissions caused by end-of-life activities for the infrastructure involved, e.g. demolition of factories. Since the emission and absorption of carbon related to a project or scenario changes with time, the net carbon emission can either be presented as time-dependent (for instance a curve which moves along a time axis), or as a static value; this shows average emissions calculated over a defined time period. The time-dependent net emission curve will typically show high emissions at the beginning (if the counting starts when the biomass is harvested.) Alternatively, the starting point can be moved back to the planting event; in this case the curve can potentially move below zero (into carbon negative territory) if there is no carbon debt from land use change to pay back, and in addition more and more carbon is absorbed by the planted trees. The emission curve then spikes upward at harvest. The harvested carbon is then being distributed into other carbon pools, and the curve moves in tandem with the amount of carbon that is moved into these new pools (Y axis), and the time it takes for the carbon to move out of the pools and return to the forest via the atmosphere (X axis). As described above, the carbon payback time is the time it takes for the harvested carbon to be returned to the forest, and the carbon parity time is the time it takes for the carbon stored in two competing scenarios to reach the same level. The static carbon emission value is produced by calculating the average annual net emission for a specific time period. The specific time period can be the expected lifetime of the infrastructure involved (typical for life cycle assessments; LCA's), policy relevant time horizons inspired by the Paris agreement (for instance remaining time until 2030, 2050 or 2100), time spans based on different global warming potentials (GWP; typically 20 or 100 years), or other time spans. In the EU, a time span of 20 years is used when quantifying the net carbon effects of a land use change. Generally in legislation, the static number approach is preferred over the dynamic, time-dependent curve approach. 
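As an editorial illustration of the "static value" described above (the notation is ours, not the source's): if E_t and A_t denote the carbon emissions and absorptions counted within the chosen system boundaries in year t, the static value over a period of T years is simply the average annual net emission,

\bar{E}_{\mathrm{net}} \;=\; \frac{1}{T}\sum_{t=1}^{T}\bigl(E_t - A_t\bigr),

whereas the dynamic approach reports the year-by-year values E_t − A_t (or their running sum) as a curve over time.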
The number is expressed as a so-called "emission factor" (net emission per produced energy unit, for instance kg CO2e per GJ), or even simpler as an average greenhouse gas savings percentage for specific bioenergy pathways. The EU's published greenhouse gas savings percentages for specific bioenergy pathways used in the Renewable Energy Directive (RED) and other legal documents are based on life cycle assessments (LCA's). Spatial system boundaries The spatial boundaries define "geographical" borders for carbon emission/absorption calculations. The two most common spatial boundaries for CO2 absorption and emission in forests are 1.) along the edges of a particular forest stand and 2.) along the edges of a whole forest landscape, which include many forest stands of increasing age (the forest stands are harvested and replanted, one after the other, over as many years as there are stands.) A third option is the so-called increasing stand level carbon accounting method. The researcher has to decide whether to focus on the individual stand, an increasing number of stands, or the whole forest landscape. The IPCC recommends landscape-level carbon accounting. Further, the researcher has to decide whether emissions from direct/indirect land use change should be included in the calculation. Most researchers include emissions from direct land use change, for instance the emissions caused by cutting down a forest in order to start some agricultural project there instead. The inclusion of indirect land use change effects is more controversial, as they are difficult to quantify accurately. Other choices involve defining the likely spatial boundaries of forests in the future. Efficiency-related system boundaries The efficiency-related boundaries define a range of fuel substitution efficiencies for different biomass-combustion pathways. Different supply chains emit different amounts of carbon per supplied energy unit, and different combustion facilities convert the chemical energy stored in different fuels to heat or electrical energy with different efficiencies. The researcher has to know about this and choose a realistic efficiency range for the different biomass-combustion paths under consideration. The chosen efficiencies are used to calculate so-called "displacement factors" – single numbers that shows how efficient fossil carbon is substituted by biogenic carbon. If for instance 10 tonnes of carbon are combusted with an efficiency half that of a modern coal plant, only 5 tonnes of coal would actually be counted as displaced (displacement factor 0.5). Generally, fuel burned in inefficient (old or small) combustion facilities gets assigned lower displacement factors than fuel burned in efficient (new or large) facilities, since more fuel has to be burned (and therefore more CO2 released) in order to produce the same amount of energy. The displacement factor varies with the carbon intensity of both the biomass fuel and the displaced fossil fuel. If or when bioenergy can achieve negative emissions (e.g. from afforestation, energy grass plantations and/or bioenergy with carbon capture and storage (BECCS), or if fossil fuel energy sources with higher emissions in the supply chain start to come online (e.g. because of fracking, or increased use of shale gas), the displacement factor will start to rise. On the other hand, if or when new baseload energy sources with lower emissions than fossil fuels start to come online, the displacement factor will start to drop. 
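A hedged, minimal formalization of the displacement example above, under the simplifying assumption that the biomass fuel and the displaced fossil fuel carry the same carbon per unit of fuel energy, so that only conversion efficiency differs (the symbols are ours):

D \;=\; \frac{\eta_{\mathrm{bio}}}{\eta_{\mathrm{fossil}}}, \qquad C_{\mathrm{displaced}} \;=\; D \times C_{\mathrm{bio}}

With \eta_{\mathrm{bio}} = 0.5\,\eta_{\mathrm{fossil}} and C_{\mathrm{bio}} = 10 tonnes of combusted biogenic carbon, this gives D = 0.5 and 5 tonnes of displaced fossil carbon, reproducing the figures in the text; relaxing the equal-carbon-intensity assumption scales D by the ratio of the two fuels' carbon intensities, which is why the factor rises or falls as the fossil baseline or the bioenergy supply chain changes.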
Whether a displacement factor change is included in the calculation depends on whether it is expected to take place within the time period covered by the relevant scenario's temporal system boundaries. Economic system boundaries The economic boundaries define which market effects to include in the calculation, if any. Changed market conditions can lead to small or large changes in carbon emissions and absorptions from supply chains and forests, for instance changes in forest area as a response to changes in demand. Macroeconomic events or policy changes can have impacts on the forest carbon stock. As with indirect land use changes, however, economic changes can be difficult to quantify, so some researchers prefer to leave them out of the calculation. System boundary impacts The chosen system boundaries are very important for the calculated results. Shorter payback/parity times are calculated when fossil carbon intensity, forest growth rate and biomass conversion efficiency increase, or when the initial forest carbon stock and/or harvest level decreases. Shorter payback/parity times are also calculated when the researcher chooses landscape-level over stand-level carbon accounting (if carbon accounting starts at the harvest rather than at the planting event). Conversely, longer payback/parity times are calculated when carbon intensity, growth rate and conversion efficiency decrease, when the initial carbon stock and/or harvest level increases, or when the researcher chooses stand-level over landscape-level carbon accounting. Critics argue that unrealistic system boundary choices are made, or that narrow system boundaries lead to misleading conclusions. Others argue that the wide range of results shows that there is too much leeway available and that the calculations therefore are useless for policy development. The EU's Joint Research Centre agrees that different methodologies produce different results, but also argues that this is to be expected, since different researchers consciously or unconsciously choose different alternative scenarios/methodologies as a result of their ethical ideals regarding man's optimal relationship with nature. The ethical core of the sustainability debate should be made explicit by researchers, rather than hidden away. Comparisons of GHG emissions at the point of combustion GHG emissions per produced energy unit at the point of combustion depend on the moisture content of the fuel, chemical differences between fuels, and conversion efficiencies. For example, raw biomass (for instance wood chips) can have higher moisture content than some common coal types, especially if the coal has been dried. When this is the case, more of the wood's inherent energy must be spent solely on evaporating moisture, compared to the drier coal, which means that the amount of CO2 emitted per unit of produced heat will be higher. In addition, many biomass-only combustion facilities are relatively small and inefficient, compared to the typically much larger coal plants. This moisture problem can be mitigated by modern combustion facilities. Forest biomass on average produces 10–16% more CO2 than coal.
However, focusing on gross emissions misses the point, what counts is the net climate effect from emissions and absorption, taken together. IEA Bioenergy concludes that the additional CO2 from biomass "[...] is irrelevant if the biomass is derived from sustainably managed forests." Climate impacts expressed as varying with time The use of boreal stemwood harvested exclusively for bioenergy have a positive climate impact only in the long term, while the use of wood residues have a positive climate impact also in the short to medium term. Short carbon payback/parity times are produced when the most realistic no-bioenergy scenario is a traditional forestry scenario where "good" wood stems are harvested for lumber production, and residues are burned or left behind in the forest or in landfills. The collection of such residues provides material which "[...] would have released its carbon (via decay or burning) back to the atmosphere anyway (over time spans defined by the biome's decay rate) [...]." In other words, payback and parity times depend on the decay speed. The decay speed depends on a.) location (because decay speed is "[...] roughly proportional to temperature and rainfall [...]"), and b.) the thickness of the residues. Residues decay faster in warm and wet areas, and thin residues decay faster than thick residues. Thin residues in warm and wet temperate forests therefore have the fastest decay, while thick residues in cold and dry boreal forests have the slowest decay. If the residues instead are burned in the no-bioenergy scenario, e.g. outside the factories or at roadside in the forests, emissions are instant. In this case, parity times approach zero. Like other scientists, the JRC staff note the high variability in carbon accounting results, and attribute this to different methodologies. In the studies examined, the JRC found carbon parity times of 0 to 400 years for stemwood harvested exclusively for bioenergy, depending on different characteristics and assumptions for both the forest/bioenergy system and the alternative fossil system, with the emission intensity of the displaced fossil fuels seen as the most important factor, followed by conversion efficiency and biomass growth rate/rotation time. Other factors relevant for the carbon parity time are the initial carbon stock and the existing harvest level; both higher initial carbon stock and higher harvest level means longer parity times. Liquid biofuels have high parity times because about half of the energy content of the biomass is lost in the processing. Climate impacts expressed as static numbers EU's Joint Research Centre has examined a number of bioenergy emission estimates found in literature, and calculated greenhouse gas savings percentages for bioenergy pathways in heat production, transportation fuel production and electricity production, based on those studies. The calculations are based on the attributional LCA accounting principle. It includes all supply chain emissions, from raw material extraction, through energy and material production and manufacturing, to end-of-life treatment and final disposal. It also includes emissions related to the production of the fossil fuels used in the supply chain. It excludes emission/absorption effects that takes place outside its system boundaries, for instance market related, biogeophysical (e.g. albedo), and time-dependent effects. 
The authors conclude that "[m]ost bio-based commodities release less GHG than fossil products along their supply chain; but the magnitude of GHG emissions vary greatly with logistics, type of feedstocks, land and ecosystem management, resource efficiency, and technology." Because of the varied climate mitigation potential of different biofuel pathways, governments and organizations set up different certification schemes to ensure that biomass use is sustainable, for instance the RED (Renewable Energy Directive) in the EU and the ISO standard 13065 by the International Organization for Standardization. In the US, the RFS (Renewable Fuel Standard) limits the use of traditional biofuels and defines the minimum life-cycle GHG emission reductions that are acceptable. Biofuels are considered traditional if they achieve up to 20% GHG emission reduction compared to the petrochemical equivalent, advanced if they save at least 50%, and cellulosic if they save more than 60%. The EU's Renewable Energy Directive (RED) states that the typical greenhouse gas emission savings when replacing fossil fuels with wood pellets from forest residues for heat production vary between 69% and 77%, depending on transport distance: when the distance is between 0 and 2500 km, the emission saving is 77%; it drops to 75% when the distance is between 2500 and 10 000 km, and to 69% when the distance is above 10 000 km. When stemwood is used, emission savings vary between 70% and 77%, depending on transport distance. When wood industry residues are used, savings vary between 79% and 87%. Since the long payback and parity times calculated for some forestry projects are seen as a non-issue for energy crops (except in the cases mentioned above), researchers instead calculate static climate mitigation potentials for these crops, using LCA-based carbon accounting methods. A particular energy crop-based bioenergy project is considered carbon positive, carbon neutral or carbon negative based on the total amount of CO2 equivalent emissions and absorptions accumulated throughout its entire lifetime: if emissions during agriculture, processing, transport and combustion are higher than what is absorbed (and stored) by the plants, both above and below ground, during the project's lifetime, the project is carbon positive. Likewise, if total absorption is higher than total emissions, the project is carbon negative. In other words, carbon negativity is possible when net carbon accumulation more than compensates for net lifecycle greenhouse gas emissions. Typically, perennial crops sequester more carbon than annual crops because the root buildup is allowed to continue undisturbed over many years. Also, perennial crops avoid the yearly tillage procedures (plowing, digging) associated with growing annual crops. Tilling helps the soil microbe populations to decompose the available carbon, producing CO2. There is now (2018) consensus in the scientific community that "[...] the GHG [greenhouse gas] balance of perennial bioenergy crop cultivation will often be favourable [...]", also when considering the implicit direct and indirect land use changes. Albedo and evapotranspiration Environmental impacts The environmental impacts of biomass production need to be taken into account.
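The RED default figures and the carbon positive/neutral/negative definition above can be captured in a short sketch. This is an assumption-laden illustration: the function names are hypothetical, the distance bands follow the wording quoted above, and the boundary behaviour at exactly 2 500 km is not specified in the text.

```python
# Hedged sketch of the figures and definitions quoted above; not an official tool.

def red_default_saving_forest_residue_heat(distance_km):
    """Typical GHG saving for heat from forest-residue pellets, by transport distance."""
    if distance_km <= 2500:
        return 0.77
    if distance_km <= 10_000:
        return 0.75
    return 0.69

def project_carbon_balance(lifetime_emissions_tco2e, lifetime_absorption_tco2e):
    """Classify an energy-crop project over its whole lifetime, as defined above."""
    net = lifetime_emissions_tco2e - lifetime_absorption_tco2e
    if net > 0:
        return "carbon positive"
    if net < 0:
        return "carbon negative"
    return "carbon neutral"

print(red_default_saving_forest_residue_heat(8000))   # 0.75
print(project_carbon_balance(1200.0, 1500.0))         # carbon negative
```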
For instance in 2022, the IEA stated that "bioenergy is an important pillar of decarbonisation in the energy transition as a near zero-emission fuel", and that "more efforts are needed to accelerate modern bioenergy deployment to get on track with the Net Zero Scenario [...] while simultaneously ensuring that bioenergy production does not incur negative social and environmental consequences." Sustainable forestry and forest protection The IPCC states that there is disagreement about whether the global forest is shrinking or not, and quotes research indicating that tree cover increased by 7.1% between 1982 and 2016. The IPCC writes: "While above-ground biomass carbon stocks are estimated to be declining in the tropics, they are increasing globally due to increasing stocks in temperate and boreal forests [...]." Old trees absorb more CO2 than young trees, because of the larger leaf area of full-grown trees, and felling old trees means that this large potential for future carbon absorption is lost. There is also a loss of soil carbon due to the harvest operations. However, the old forest (as a whole) will eventually stop absorbing CO2, because CO2 emissions from dead trees cancel out the remaining living trees' CO2 absorption. The old forest (or forest stands) is also vulnerable to natural disturbances that produce CO2. The IPCC found that "[...] landscapes with older forests have accumulated more carbon but their sink strength is diminishing, while landscapes with younger forests contain less carbon but they are removing CO2 from the atmosphere at a much higher rate [...]." The IPCC states that the net climate effect from conversion of unmanaged to managed forest can be positive or negative, depending on circumstances. The carbon stock is reduced, but since managed forests grow faster than unmanaged forests, more carbon is absorbed. Positive climate effects are produced if the harvested biomass is used efficiently. There is a tradeoff between the benefits of a maximized forest carbon stock that no longer absorbs additional carbon, and the benefits of having a portion of that carbon stock "unlocked" and working as a renewable fossil fuel replacement, for instance in sectors which are difficult or expensive to decarbonize. The "competition" between locked-away and unlocked forest carbon might be won by the unlocked carbon: "In the long term, using sustainably produced forest biomass as a substitute for carbon-intensive products and fossil fuels provides greater permanent reductions in atmospheric CO2 than preservation does." IEA Bioenergy writes: "forests managed for producing sawn timber, bioenergy and other wood products can make a greater contribution to climate change mitigation than forests managed for conservation alone." Three reasons are given: the forest's ability to act as a carbon sink declines as it matures; wood products can replace other materials that emit more GHGs during production; and "[c]arbon in forests is vulnerable to loss through natural events such as insect infestations or wildfires". Data from the FAO show that most wood pellets are produced in regions dominated by sustainably managed forests, such as Europe and North America. Europe (including Russia) produced 54% of the world's wood pellets in 2019, and the forest carbon stock in this area increased from 158.7 to 172.4 Gt between 1990 and 2020.
In the EU, above-ground forest biomass increases by 1.3% per year on average; however, the increase is slowing down because the forests are maturing. The United Kingdom Emissions Trading System allows operators of CO2-generating installations to apply a zero emissions factor to the fraction used for non-energy purposes, while energy purposes (electricity generation, heating) require additional sustainability certification for the biomass used. Biodiversity Biomass production for bioenergy can have negative impacts on biodiversity. Oil palm and sugar cane are examples of crops that have been linked to reduced biodiversity. In addition, changes in biodiversity also impact primary production, which in turn affects decomposition and soil heterotrophic organisms. Win-win scenarios (good for climate, good for biodiversity) include: increased use of whole trees from coppice forests, increased use of thin forest residues from boreal forests with slow decay rates, and increased use of all kinds of residues from temperate forests with faster decay rates; multi-functional bioenergy landscapes, instead of expansion of monoculture plantations; and afforestation of former agricultural land with mixed or naturally regenerating forests. Win-lose scenarios (good for the climate, bad for biodiversity) include afforestation on ancient, biodiversity-rich grassland ecosystems which were never forests, and afforestation of former agricultural land with monoculture plantations. Lose-win scenarios (bad for the climate, good for biodiversity) include natural forest expansion on former agricultural land. Lose-lose scenarios include increased use of thick forest residues like stumps from some boreal forests with slow decay rates, and conversion of natural forests into forest plantations. Pollution Other problems are pollution of soil and water from fertiliser/pesticide use, and emission of ambient air pollutants, mainly from open-field burning of residues. The traditional use of wood in cook stoves and open fires produces pollutants, which can lead to severe health and environmental consequences. However, a shift to modern bioenergy contributes to improved livelihoods and can reduce land degradation and impacts on ecosystem services. According to the IPCC, there is strong evidence that modern bioenergy has "large positive impacts" on air quality. Traditional bioenergy is inefficient, and the phasing out of this energy source has both large health benefits and large economic benefits. When combusted in industrial facilities, most of the pollutants originating from woody biomass are reduced by 97-99%, compared to open burning. Combustion of woody biomass also produces lower amounts of particulate matter than coal for the same amount of electricity generated. See also Bioenergetics Bioenergy Action Plan Bioenergy with carbon capture and storage Biomass heating system Biomass to liquid Bioproducts Biorefinery Biochar Cogeneration Carbon footprint Energy forestry Pellet fuel Solid fuel Renewable energy transition World Bioenergy Association External links Biomass explained (U.S. Energy Information Administration) Biomass Energy (National Geographic)
0.763513
0.996263
0.76066
Introduction to electromagnetism
Electromagnetism is one of the fundamental forces of nature. Early on, electricity and magnetism were studied separately and regarded as separate phenomena. Hans Christian Ørsted discovered that the two were related – electric currents give rise to magnetism. Michael Faraday discovered the converse, that magnetism could induce electric currents, and James Clerk Maxwell put the whole thing together in a unified theory of electromagnetism. Maxwell's equations further indicated that electromagnetic waves existed, and the experiments of Heinrich Hertz confirmed this, making radio possible. Maxwell also postulated, correctly, that light was a form of electromagnetic wave, thus making all of optics a branch of electromagnetism. Radio waves differ from light only in that the wavelength of the former is much longer than the latter. Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics. In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results. Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale and a simpler classical treatment can be applied. Further simplifications of treatment are possible in limited situations. Electrostatics deals only with stationary electric charges so magnetic fields do not arise and are not considered. Permanent magnets can be described without reference to electricity or electromagnetism. Circuit theory deals with electrical networks where the fields are largely confined around current-carrying conductors. In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode. Electric charge Electromagnetism is one of the fundamental forces of nature alongside gravity, the strong force and the weak force. Whereas gravity acts on all things that have mass, electromagnetism acts on all things that have electric charge. Furthermore, just as there is conservation of mass, according to which mass cannot be created or destroyed, there is also conservation of charge, which means that the charge in a closed system (where no charges are leaving or entering) must remain constant. The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula $F = k_e \frac{q_1 q_2}{r^2}$, where F is the force, $k_e$ is the Coulomb constant, $q_1$ and $q_2$ are the magnitudes of the two charges, and $r^2$ is the square of the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another and that the stronger the charges of the particles, the stronger the force they exert on one another.
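A quick numerical check of Coulomb's law as reconstructed above; the charge and distance values are arbitrary examples, not taken from the text.

```python
# Worked example of Coulomb's law (arbitrary example values).
K_E = 8.9875e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K_E * abs(q1 * q2) / r**2

# Two 1 microcoulomb charges 10 cm apart:
print(coulomb_force(1e-6, 1e-6, 0.10))   # ~0.90 N
# Doubling the distance cuts the force by a factor of four (inverse square law):
print(coulomb_force(1e-6, 1e-6, 0.20))   # ~0.22 N
```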
The law is also an inverse square law, which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four. Electric and magnetic fields In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field, and the direction of the field determines which direction the force will be in. By convention, the direction of the electric field is the same as the direction of the force on positive charges and opposite to the direction of the force on negative charges. Because positive charges are repelled by other positive charges and are attracted to negative charges, this means the electric fields point away from positive charges and towards negative charges. These properties of the electric field are encapsulated in the equation for the electric force on a charge written in terms of the electric field: $F = qE$, where F is the force on a charge q in an electric field E. As well as producing an electric field, charged particles in motion will produce a magnetic field, which will be felt by other charges that are in motion (as well as by permanent magnets). The direction of the force on a moving charge from a magnetic field is perpendicular to both the direction of motion and the direction of the magnetic field lines and can be found using the right-hand rule. The strength of the force is given by the equation $F = qvB\sin\theta$, where F is the force on a charge q with speed v in a magnetic field B which points in a direction at angle θ from the direction of motion of the charge. The combination of the electric and magnetic forces on a charged particle is called the Lorentz force. Classical electromagnetism is fully described by the Lorentz force alongside a set of equations called Maxwell's equations. The first of these equations is known as Gauss's law. It describes the electric field produced by charged particles and by charge distributions. According to Gauss's law, the flux (or flow) of electric field through any closed surface is proportional to the amount of charge that is enclosed by that surface. This means that the greater the charge, the greater the electric field that is produced. It also has other important implications. For example, this law means that if there is no charge enclosed by the surface, then either there is no electric field at all or, if there is a charge near to but outside of the closed surface, the flow of electric field into the surface must exactly cancel with the flow out of the surface. The second of Maxwell's equations is known as Gauss's law for magnetism and, similarly to the first Gauss's law, it describes flux, but instead of electric flux, it describes magnetic flux. According to Gauss's law for magnetism, the flow of magnetic field through a closed surface is always zero. This means that if there is a magnetic field, the flow into the closed surface will always cancel out with the flow out of the closed surface.
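A small numerical illustration of the two force laws reconstructed above, treated as scalar magnitudes (directions follow the right-hand rule); the field strengths and speed are arbitrary examples.

```python
import math

# Scalar magnitudes of the electric and magnetic parts of the Lorentz force.

def electric_force(q, E):
    """F = qE for a charge q in an electric field of strength E."""
    return q * E

def magnetic_force(q, v, B, theta_deg):
    """F = q*v*B*sin(theta) for a charge moving at angle theta to the field."""
    return q * v * B * math.sin(math.radians(theta_deg))

q = -1.602e-19  # electron charge, C
print(abs(magnetic_force(q, 2e6, 0.01, 90)))  # 2e6 m/s across a 0.01 T field -> ~3.2e-15 N
print(abs(electric_force(q, 1000.0)))         # in a 1 kV/m field               -> ~1.6e-16 N
```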
This law has also been called "no magnetic monopoles" because it means that any magnetic flux flowing out of a closed surface must flow back into it, meaning that positive and negative magnetic poles must come together as a magnetic dipole and can never be separated into magnetic monopoles. This is in contrast to electric charges which can exist as separate positive and negative charges. The third of Maxwell's equations is called the Ampère–Maxwell law. It states that a magnetic field can be generated by an electric current. The direction of the magnetic field is given by Ampère's right-hand grip rule. If the wire is straight, then the magnetic field is curled around it like the gripped fingers in the right-hand rule. If the wire is wrapped into coils, then the magnetic field inside the coils points in a straight line like the outstretched thumb in the right-hand grip rule. When electric currents are used to produce a magnet in this way, it is called an electromagnet. Electromagnets often use a wire curled up into solenoid around an iron core which strengthens the magnetic field produced because the iron core becomes magnetised. Maxwell's extension to the law states that a time-varying electric field can also generate a magnetic field. Similarly, Faraday's law of induction states that a magnetic field can produce an electric current. For example, a magnet pushed in and out of a coil of wires can produce an electric current in the coils which is proportional to the strength of the magnet as well as the number of coils and the speed at which the magnet is inserted and extracted from the coils. This principle is essential for transformers which are used to transform currents from high voltage to low voltage, and vice versa. They are needed to convert high voltage mains electricity into low voltage electricity which can be safely used in homes. Maxwell's formulation of the law is given in the Maxwell–Faraday equation—the fourth and final of Maxwell's equations—which states that a time-varying magnetic field produces an electric field. Together, Maxwell's equations provide a single uniform theory of the electric and magnetic fields and Maxwell's work in creating this theory has been called "the second great unification in physics" after the first great unification of Newton's law of universal gravitation. The solution to Maxwell's equations in free space (where there are no charges or currents) produces wave equations corresponding to electromagnetic waves (with both electric and magnetic components) travelling at the speed of light. The observation that these wave solutions had a wave speed exactly equal to the speed of light led Maxwell to hypothesise that light is a form of electromagnetic radiation and to posit that other electromagnetic radiation could exist with different wavelengths. The existence of electromagnetic radiation was proved by Heinrich Hertz in a series of experiments ranging from 1886 to 1889 in which he discovered the existence of radio waves. The full electromagnetic spectrum (in order of increasing frequency) consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet light, X-rays and gamma rays. A further unification of electromagnetism came with Einstein's special theory of relativity. According to special relativity, observers moving at different speeds relative to one another occupy different observational frames of reference. 
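The electromagnet and induction examples above can be made concrete with standard textbook formulas that are consistent with the description (field inside a long air-core solenoid, and Faraday's law for a coil); the numbers below are arbitrary, and an iron core would multiply the solenoid field by the core's relative permeability.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_field(turns_per_metre, current_amps):
    """B = mu0 * n * I inside a long air-core solenoid."""
    return MU_0 * turns_per_metre * current_amps

def induced_emf(n_turns, delta_flux_wb, delta_t_s):
    """Faraday's law for a coil: emf = -N * dPhi/dt."""
    return -n_turns * delta_flux_wb / delta_t_s

print(solenoid_field(1000, 2.0))     # ~2.5e-3 T for 1000 turns/m carrying 2 A
print(induced_emf(200, 0.01, 0.1))   # -20 V: more turns or a faster-moving magnet -> larger EMF
```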
If one observer is in motion relative to another observer then they experience length contraction, where unmoving objects appear closer together to the observer in motion than to the observer at rest. Therefore, if an electron is moving at the same speed as the current in a neutral wire, then it experiences the flowing electrons in the wire as standing still relative to it and the positive charges as contracted together. In the lab frame, the electron is moving and so feels a magnetic force from the current in the wire, but because the wire is neutral it feels no electric force. But in the electron's rest frame, the positive charges seem closer together compared to the flowing electrons and so the wire seems positively charged. Therefore, in the electron's rest frame it feels no magnetic force (because it is not moving in its own frame) but it does feel an electric force due to the positively charged wire. This result from relativity proves that magnetic fields are just electric fields in a different reference frame (and vice versa) and so the two are different manifestations of the same underlying electromagnetic field. Conductors, insulators and circuits Conductors A conductor is a material that allows electrons to flow easily. The most effective conductors are usually metals because they can be described fairly accurately by the free electron model, in which electrons delocalize from the atomic nuclei, leaving positive ions surrounded by a cloud of free electrons. Examples of good conductors include copper, aluminum, and silver. Wires in electronics are often made of copper. The main properties of conductors are: The electric field is zero inside a perfect conductor. Because charges are free to move in a conductor, when they are disturbed by an external electric field they rearrange themselves such that the field that their configuration produces exactly cancels the external electric field inside the conductor. The electric potential is the same everywhere inside the conductor and is constant across the surface of the conductor. This follows from the first statement because the field is zero everywhere inside the conductor and therefore the potential is constant within the conductor too. The electric field is perpendicular to the surface of a conductor. If this were not the case, the field would have a nonzero component on the surface of the conductor, which would cause the charges in the conductor to move around until that component of the field is zero. The net electric flux through a surface is proportional to the charge enclosed by the surface. This is a restatement of Gauss' law. In some materials, the electrons are bound to the atomic nuclei and so are not free to move around, but the energy required to set them free is low. In these materials, called semiconductors, the conductivity is low at low temperatures, but as the temperature is increased the electrons gain more thermal energy and the conductivity increases. Silicon is an example of a semiconductor that can be used to create solar cells, which become more conductive the more energy they receive from photons from the sun. Superconductors are materials that exhibit little to no resistance to the flow of electrons when cooled below a certain critical temperature. Superconductivity can only be explained by the quantum mechanical Pauli exclusion principle, which states that no two fermions (an electron is a type of fermion) can occupy exactly the same quantum state.
In superconductors, below a certain temperature the electrons form boson bound pairs which do not follow this principle, and this means that all the electrons can fall to the same energy level and move together uniformly in a current. Insulators Insulators are materials which are highly resistive to the flow of electrons and so are often used to cover conducting wires for safety. In insulators, electrons are tightly bound to atomic nuclei and the energy to free them is very high, so they are not free to move and are resistive to induced movement by an external electric field. However, some insulators, called dielectrics, can be polarised under the influence of an external electric field so that the charges are minutely displaced, forming dipoles that create a positive and negative side. Dielectrics are used in capacitors to allow them to store more electric potential energy in the electric field between the capacitor plates. Capacitors A capacitor is an electronic component that stores electrical potential energy in an electric field between two oppositely charged conducting plates. If one of the conducting plates has a charge density of +Q/A and the other has a charge density of −Q/A, where A is the area of the plates, then there will be an electric field between them. The potential difference V between two parallel plates can be derived mathematically as $V = \frac{Qd}{\varepsilon_0 A}$, where d is the plate separation and $\varepsilon_0$ is the permittivity of free space. The ability of the capacitor to store electrical potential energy is measured by the capacitance, which is defined as $C = \frac{Q}{V}$, and for a parallel plate capacitor this is $C = \frac{\varepsilon_0 A}{d}$. If a dielectric is placed between the plates then the permittivity of free space is multiplied by the relative permittivity of the dielectric and the capacitance increases. The maximum energy that can be stored by a capacitor is proportional to the capacitance and the square of the potential difference between the plates: $E = \frac{1}{2}CV^2$. Inductors An inductor is an electronic component that stores energy in a magnetic field inside a coil of wire. A current-carrying coil of wire induces a magnetic field according to Ampère's circuital law. The greater the current I, the greater the energy stored in the magnetic field and the lower the inductance, which is defined as $L = \frac{\Phi_B}{I}$, where $\Phi_B$ is the magnetic flux produced by the coil of wire. The inductance is a measure of the circuit's resistance to a change in current, and so inductors with high inductances can also be used to oppose alternating current. Other circuit components Circuit laws Circuit theory deals with electrical networks where the fields are largely confined around current-carrying conductors. In such circuits, simple circuit laws can be used instead of deriving all the behaviour of the circuits directly from electromagnetic laws. Ohm's law states the relationship between the current I and the voltage V of a circuit by introducing the quantity known as resistance R. Ohm's law: $V = IR$. Power is defined as $P = IV$, so Ohm's law can be used to express the power of the circuit in terms of other quantities: $P = I^2R = \frac{V^2}{R}$. Kirchhoff's junction rule states that the current going into a junction (or node) must equal the current that leaves the node. This comes from charge conservation, as current is defined as the flow of charge over time. If a current splits as it exits a junction, the sum of the resultant split currents is equal to the incoming current. Kirchhoff's loop rule states that the sum of the voltage in a closed loop around a circuit equals zero.
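A small numerical illustration of the parallel-plate and inductance relations reconstructed above; the plate area, separation, voltage and flux values are arbitrary examples.

```python
# Numerical illustration of C = eps0*A/d, E = 1/2*C*V^2 and L = Phi/I.

EPS_0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(area_m2, separation_m, relative_permittivity=1.0):
    return relative_permittivity * EPS_0 * area_m2 / separation_m

def stored_energy(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v**2

def inductance(flux_wb, current_a):
    return flux_wb / current_a

c_air = parallel_plate_capacitance(0.01, 1e-4)                               # ~0.89 nF
c_diel = parallel_plate_capacitance(0.01, 1e-4, relative_permittivity=4.0)   # dielectric raises C fourfold
print(c_air, c_diel)
print(stored_energy(c_air, 5.0))   # energy stored at 5 V, ~1.1e-8 J
print(inductance(0.002, 0.5))      # 4 mH coil (2 mWb of flux at 0.5 A)
```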
This comes from the fact that the electric field is conservative, which means that no matter the path taken, the potential at a point does not change when you return to it. These rules also tell us how to add up quantities such as the current and voltage in series and parallel circuits. For series circuits, the current remains the same for each component and the voltages and resistances add up: $I_{total} = I_1 = I_2 = \dots$, $V_{total} = V_1 + V_2 + \dots$, $R_{total} = R_1 + R_2 + \dots$. For parallel circuits, the voltage remains the same for each component and the currents and resistances are related as shown: $V_{total} = V_1 = V_2 = \dots$, $I_{total} = I_1 + I_2 + \dots$, $\frac{1}{R_{total}} = \frac{1}{R_1} + \frac{1}{R_2} + \dots$. See also List of textbooks on electromagnetism
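A minimal sketch of the series and parallel resistance rules just reconstructed; the resistor values are arbitrary.

```python
# Equivalent resistance for series and parallel combinations.

def series_resistance(resistances):
    return sum(resistances)

def parallel_resistance(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

print(series_resistance([100, 220, 330]))    # 650 ohms
print(parallel_resistance([100, 100]))       # 50 ohms
print(parallel_resistance([100, 220, 330]))  # ~56.9 ohms
```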
0.77791
0.977782
0.760627
Biophilic design
Biophilic design is a concept used within the building industry to increase occupant connectivity to the natural environment through the use of direct nature, indirect nature, and space and place conditions. Used at both the building and city scale, it is argued to have health, environmental, and economic benefits for building occupants and urban environments, with few drawbacks. Although its name was coined in recent history, indicators of biophilic design have been seen in architecture from as far back as the Hanging Gardens of Babylon. While the design features that characterize biophilic design were all traceable to preceding sustainable design guidelines, the new term sparked wider interest and lent academic credibility. Biophilia hypothesis The word "biophilia" was first introduced by the psychoanalyst Erich Fromm, who stated that biophilia is the "passionate love of life and of all that is alive...whether in a person, a plant, an idea, or a social group" in his book The Anatomy of Human Destructiveness in 1973. Fromm's approach was that of a psychoanalyst (a person who studies the unconscious mind), and he presented biophilia broadly, calling it a biologically normal instinct. The term has since been used by many scientists and philosophers and has been adapted to several different areas of study. Some notable mentions of biophilia include Edward O. Wilson's book Biophilia (1984), where he took a biologist's approach, coined the "biophilia hypothesis" and popularized the notion. Wilson defined biophilia as "the innate tendency to focus on life and lifelike processes", claiming a link with nature is not only physiological (as Fromm suggested) but has a genetic basis. The biophilia hypothesis is the idea that humans have an inherited need to connect to nature and other biotic forms due to our evolutionary dependence on it for survival and personal fulfillment. This idea is relevant in daily life – humans travel and spend money to sightsee in national parks and nature preserves, relax on beaches, hike mountains, and explore jungles. Further, many sports revolve around nature, such as skiing, mountain biking, and surfing. From a home perspective, people are more likely to spend more on houses that have views of nature; buyers are willing to spend 7% more on homes with excellent landscaping, 58% more on properties that look out on water, and 127% more on those that are waterfront. Humans also value companionship with animals. In America 60.2 million people own dogs and 47.1 million own cats. Biophobia While biophilia refers to the inherent need to experience and love nature, biophobia is humans' inherited fear of nature and animals. In modern life, humans feel an urge to separate themselves from nature and move towards technology, a cultural drive in which people tend to associate with human artifacts, interests, and managed activities. Some anxieties about the natural environment are inherited from threats encountered during human evolution: these include fear of snakes, spiders, and blood. In relation to buildings, biophobia can be induced through the use of bright colors; heights, enclosed spaces, darkness, and large open spaces are also major contributors to occupant discomfort.
Dimensions Considered one of the pioneers of biophilic design, Stephen Kellert created a framework in which nature in the built environment is used in a way that satisfies human needs – his principles are meant to celebrate and show respect for nature, and provide an enriching urban environment that is multisensory. The dimensions and attributes that define Kellert's biophilic framework are below. Direct experience of nature Direct experience refers to tangible contact with natural features: Light: Allows orientation to the time of day and season, and contributes to wayfinding and comfort; light can also create natural patterns and forms, movement and shadows. In design, this can be applied through clerestories, reflective materials, skylights, glass, and atriums. This promotes well-being and interest among occupants. Air: Ventilation, temperature, and humidity are felt through air. Such conditions can be provided through the use of windows and other passive strategies, but most importantly the variation in these elements can promote occupant comfort and productivity. Water: Water is multisensory and can be used in buildings to provide movement, sounds, touch, and sight. In design it can be incorporated through water bodies, fountains, wetlands, and aquariums; people have a strong connection to water and, when used, it can decrease stress and increase health, performance, and overall satisfaction. Plants: Bringing vegetation to the exterior and interior spaces of the building provides a direct relationship to nature. This should be abundant (i.e., make use of green walls or many potted plants) and some vegetation should flower; plants have been proven to increase physical health, performance, and productivity and reduce stress. Animals: While hard to achieve, this can be done through aquariums, gardens, animal feeders, and green roofs. This interaction with animals promotes interest, mental stimulation, and pleasure. Weather: Weather can be observed directly through windows and transitional spaces, but it can also be simulated through the manipulation of air within the space; awareness of weather signified human fitness and survival in ancient times and now promotes awareness and mental stimulation. Natural landscapes: This is done by creating self-sustaining ecosystems within the built environment. Given human evolution and history, people tend to enjoy savannah-like landscapes as they depict spaciousness and an abundance of natural life. Contact with these types of environments can be provided through vistas and/or direct interactions such as gardens. Such landscapes are known to increase occupant satisfaction. Fire: This natural element is hard to incorporate; however, when implemented correctly in the building, it provides color, warmth, and movement, all of which are appealing and pleasing to occupants. Indirect experience of nature Indirect experience refers to contact with images and/or representations of nature: Images of Nature: This has been proven to be emotionally and intellectually satisfying to occupants; images of nature can be implemented through paintings, photos, sculptures, murals, videos, etcetera. Natural Materials: People prefer natural materials as they can be mentally stimulating. Natural materials are susceptible to the patina of time; this change invokes responses from people. These materials can be incorporated into buildings through the use of wood and stone. Interior design can use natural fabrics and furnishings.
Leather has often been included as a recommended biophilic material; however, with the awareness of animal agriculture (leather being a co-product of the meat industry) as a major contributor to climate change, faux or plant-based leathers created from mushroom, pineapple skin, or cactus are now seen as viable alternatives. It is also argued that destroying nature and animals in the pursuit of feeling, and being, closer to them is counter-productive and in conflict with the philosophy of biophilia. Natural Colors: Natural colors, or "earth tones", are those commonly found in nature and are often subdued tones of brown, green, and blue. When using colors in buildings, they should represent these natural tones. Brighter colors should be used only sparingly – one study found that red flowers on plants were fatiguing and distracting for occupants. Simulations of Natural Light and Air: In areas where natural forms of ventilation and light cannot be achieved, creative use of interior lighting and mechanical ventilation can be used to mimic these natural features. Designers can do this through variations in lighting using different lighting types, reflective media, and natural geometries that the fixture can shine through; natural airflow can be imitated through mild changes in temperature, humidity, and air velocity. Naturalistic Shapes: Natural shapes and forms can be achieved in architectural design through columns and nature-based patterns on facades - including these elements can change a static space into an intriguing, appealing and complex area. Evoking Nature: This uses characteristics found in nature to influence the structural design of the project. These may be things that do not occur in nature themselves, but rather elements that represent natural landscapes, such as mimicking the different plant heights found in ecosystems, and/or mimicking particular animal, water, or plant features. Information Richness: This can be achieved by providing complex, yet not noisy, environments that invoke occupant curiosity and thought. Many ecosystems are complex and filled with different abiotic and biotic elements – the goal of this attribute is to include such elements in the environment of the building. Change and the Patina of Time: People are intrigued by nature and how it changes, adapts, and ages over time, much like ourselves. In buildings, this can be accomplished by using organic materials that are susceptible to weathering and color change – this allows us to observe slight changes in our built environment over time. Natural Geometries: The design of facades or structural components can include the use of repetitive, varied patterns that are seen in nature (fractals). These geometries can also have hierarchically organized scales and winding flow rather than be straight with harsh angles. For instance, commonly used natural geometries are the honeycomb pattern and ripples found in water. Biomimicry: This is a design strategy that imitates functions found in nature as solutions for human and technical problems. Using these natural functions in construction can entice human creativity and consideration of nature. Experience of space and place The experience of space and place uses spatial relationships to enhance well-being: Prospect and Refuge: Refuge refers to the building's ability to provide comfortable and nurturing interiors (alcoves, dimmer lighting), while prospect emphasizes horizons, movement, and sources of danger.
Examples of design elements include balconies, alcoves, lighting changes, and areas spaciousness (savannah environment). Organized Complexity: This principle is meant to simulate the need for controlled variability; this is done in design through repetition, change, and detail of the building's architecture. Integration of Parts: When different parts comprise a whole, it provides satisfaction for occupants: design elements include interior spaces using clear boundaries and or the integration of a central focal point. Transitional Spaces: This element aims to connect interior spaces with the outside or create comfort by providing access from one space to another environment through the use of porches, decks, atriums, doors, bridges, fenestrations, and foyers. Mobility: The ability for people to comfortably move between spaces, even when complex; it provides the feeling of security for occupants and can be done through making clear points of entry and egress. Cultural and Ecological Attachment to Place: Creating a cultural sense of place in the built environment creates human connection and identity. This is done by incorporating the area's geography and history into the design. Ecological identity is done through the creation of ecosystems that promote the use of native flora and fauna. Each of these experiences are meant to be considered individually when using biophilia in projects, as there is no one right answer for one building type. Each building's architect(s) and project owner(s) must collaborate to include the biophilic principles they believe fit within their scope and most effectively reach their occupants. City-scale Timothy Beatley believes the key objective of biophilic cities is to create an environment where the residents want to actively participate in, preserve, and connect with the natural landscape that surrounds them. He established ways to achieve this through a framework of infrastructure, governance, knowledge, and behavior; these dimensions can also be indicators of existing biophilic attributes that already exist in current cities. Biophilic Conditions and Infrastructure: The idea that a certain number of people at any given time should be near a green space or park. This can be done through the creation of integrated ecological networks and walking trails throughout the city, the designation of certain portions of land area for vegetation and forests, green and biophilic building design features, and the use of flora and fauna throughout the city. Biophilic Activities: This refers to the increased amount of time spent outside and visiting parks, longer outdoor periods at schools, improved foot traffic across the city, improved participation in community gardens and conservatory clubs, larger participation in local volunteer efforts. Biophilic Attitudes and Knowledge: In areas with urban biophilic design elements, there will be an improved number of residents who care about nature and can identify local native species; resident curiosity of their local ecosystems also increases. Biophilic Institutions and Governance: Local government bodies allocate part of the budget to nature and biophilic activities. Indicators of this include increased regulation that requires more green and biophilic design principles, grant programs that promote the use of nature and biophilia, the inclusion of natural history museums and educational programs, and increased number of nature non-governmental organizations and community groups. 
Based on Kellert's dimensions, biophilic product design dimensions have also been presented. Benefits Biophilic design is argued to have a wealth of benefits for building occupants and urban environments through improving connections to nature. For cities, many believe the biggest strength of the concept is its ability to make the city more resilient to any environmental stressor it may face. Health benefits Catherine Ryan et al. found that elements such as nature sounds improved mental health 37% faster than traditional urban noise after stressor exposure; the same study found that when surgery patients were exposed to aromatherapy, 45% used less morphine and 56% used fewer painkillers overall. Another study by Kaitlyn Gillis and Birgitta Gatersleben found that the inclusion of plants in interior environments reduces stress and increases pain tolerance; the use of water elements and incorporating views of nature are also mentally restorative for occupants. When researching the effects of biophilia on hospital patients, Peter Newman and Jana Soderlund found that by increasing vista quality in hospital rooms, depression and pain in patients are reduced, which in turn shortened hospital stays from 3.67 days to 2.6 days. In biophilic cities, Andrew Dannenberg et al. indicated that there are higher levels of social connectivity and better capability to handle life crises; this has resulted in lower levels of crime, violence and aggression. The same study found that implementing outdoor facilities such as impromptu gymnasiums like the "Green Gym" in the United Kingdom allows people to help clear overgrown vegetation, build walking paths, plant foliage, and exercise more readily (walking, running, climbing, etc.); this has been shown to build social capital, increase physical activity, and improve mental health and quality of life. Further, Dannenberg et al. also found that children growing up in green neighborhoods are seen to have lower levels of asthma; decreased mortality rates and health disparities between the wealthy and poor were also observed in greener neighborhoods. Mental health benefits Fractal patterns, which are highly prevalent in nature, are biophilic patterns that possess self-similar components repeating at varying size scales. The perceptual experience of human-made environments can be affected by including these natural patterns. Previous work has demonstrated consistent trends in preference for, and complexity estimates of, fractal patterns, but limited information has been gathered on their impact on other visual judgments. One series of studies examined the aesthetic and perceptual experience of fractal 'global-forest' designs already installed in human-made spaces and demonstrated how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant wellbeing. These designs are composite fractal patterns consisting of individual fractal 'tree-seeds' which combine to create a 'global fractal forest'; the local 'tree-seed' patterns, the global configuration of tree-seed locations, and the overall resulting 'global-forest' patterns all have fractal qualities. The designs span multiple media yet are all intended to lower occupant stress without detracting from the function and overall design of the space.
The studies first established divergent relationships between various visual attributes: pattern complexity, preference, and engagement ratings increased with fractal complexity, whereas ratings of refreshment and relaxation stayed the same or decreased with complexity. They then determined that the local constituent fractal ('tree-seed') patterns contribute to the perception of the overall fractal design, and addressed how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. Overall, the studies indicate that fractal preference is driven by a balance between increased arousal (a desire for engagement and complexity) and decreased tension (a desire for relaxation or refreshment). Installations of composite mid-to-high complexity 'global-forest' patterns consisting of 'tree-seed' components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant wellbeing. Environmental benefits Some argue that by adding physical natural elements, such as plants, trees, rain gardens, and green roofs, to the built environment, buildings and cities can manage stormwater runoff better, as there are fewer impervious surfaces and better infiltration. To maintain these natural systems in a cost-effective way, excess greywater can be reused to water the plants and greenery; vegetative walls and roofs also decrease water pollution because the plants act as biofilters. Adding greenery also reduces carbon emissions and the heat island effect, and increases biodiversity. Carbon is reduced through carbon sequestration in the plants' roots during photosynthesis. Green and high-albedo rooftops and facades, and shading of streets and structures using vegetation, can reduce the amount of heat absorption normally found in asphalt or dark surfaces – this can reduce heating and cooling needs by 25% and reduce temperature fluctuations by 50%. Further, adding green facades can increase the biodiversity of an area if native species are planted - the Khoo Teck Puat Hospital in Singapore has seen a resurgence of 103 species of butterflies onsite, thanks to its use of vegetation throughout the exterior of the building. Economic benefits Biophilia may have slightly higher costs due to the addition of natural elements that require maintenance, higher-priced organic items, etc.; however, the perceived health and environmental benefits are believed to negate this. Peter Newman found that by adding biophilic design and landscapes, cities like New York City can see savings nearing $470 million due to increased worker productivity and $1.7 billion from reduced crime expenses. The same work found that storefronts on heavily vegetated streets increased foot traffic and attracted consumers who were likely to spend 25% more, and that increasing daylighting through skylights in a store increased sales by 40% +/- 7%. Properties with biophilic design also benefit from higher selling prices, with many selling at 16% more than conventional buildings. Sustainability and resilience On the urban scale, Timothy Beatley believes that biophilic design will allow cities to better adapt to stresses that occur from changes in climate and thus in local environments. To show this, he created a biophilic cities framework, where pathways can be taken to increase the resilience and sustainability of cities.
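To make the notion of self-similarity at varying scales concrete, here is a minimal, hypothetical sketch of a branching 'tree-seed'-style pattern in which recursion depth stands in for fractal complexity; it is an illustration only, not the stimuli used in the studies described above.

```python
import math

# Minimal branching-fractal sketch: recursion depth acts as a stand-in for
# "fractal complexity". Illustration only, not the studies' actual patterns.

def tree_seed(x, y, angle_deg, length, depth, segments):
    """Recursively collect the line segments of a simple self-similar tree."""
    if depth == 0:
        return
    x2 = x + length * math.cos(math.radians(angle_deg))
    y2 = y + length * math.sin(math.radians(angle_deg))
    segments.append(((x, y), (x2, y2)))
    # Each branch spawns two smaller copies of itself (self-similarity):
    tree_seed(x2, y2, angle_deg - 25, length * 0.7, depth - 1, segments)
    tree_seed(x2, y2, angle_deg + 25, length * 0.7, depth - 1, segments)

low, high = [], []
tree_seed(0, 0, 90, 1.0, depth=3, segments=low)   # low-complexity pattern
tree_seed(0, 0, 90, 1.0, depth=8, segments=high)  # mid-to-high complexity pattern
print(len(low), len(high))  # 7 vs 255 segments: detail grows rapidly with complexity
```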
This includes three sections: Biophilic Urbanism - the physical biophilic and green measures that can be taken to increase the resilience of the city, Adaptive Capacity - how the community's behaviors will adapt as a result of these physical changes, and Resilient Outcomes - what can happen if both of these steps are achieved. Under the Biophilic Urbanism section, one of the ways a city can increase resilience is by pursuing the biophysical pathway – by safeguarding and promoting the inclusion of natural systems, the natural protective barrier of the city is increased. For example, New Orleans is a city that has built over its natural wet plains and has exposed themselves to flooding. It is estimated that if they kept the bayous intact, the city could save $23 billion yearly in storm protection. In the Adaptive Capacity section, Beatley states that the commitment to place and home pathway creates stimulating and interesting nature environments for residents – this will create stronger bonds to home, which will increase the likelihood that citizens will take care of where they live. He goes further in saying that in times of shock or stress, these people are more likely to rebuild and or support the community instead of fleeing. This may also increase governmental action to protect the city from future disasters. By achieving Biophilic Urbanism and Adaptive Capacity, Beatley believes that one of the biggest resilient outcomes of this framework will be increased adaptability of the residents. Because the steps leading to resilience encourage people to be outside walking and participating in activities, the citizens become healthier and more physically fit; it has been found that those who take walks in nature experience decreased depression, anger, and increased vigor, versus those who walk in interior environments. Use in building standards Given the increased information supporting the benefits of biophilic design, organizations are beginning to incorporate the concept into their standards and rating systems to encourage building professionals to use biophilia in their projects. As of now, the most prominent supporters of biophilic design are the WELL Building Standard and the Living Building Challenge. WELL Building Standard The International WELL Building Institute uses biophilic design in their WELL Standard as a qualitative and quantitive metric. The qualitative metric must incorporate nature (environmental elements, natural lighting, and spatial qualities), natural patterns, and nature interaction within and outside the building; these efforts must be documented through professional narrative to be considered for certification. For the quantitative portion, projects must have outdoor biophilia (25% of the project must have accessible landscaped grounds and or rooftop gardens and 70% of that 25% must have plantings), indoor biophilia (plant beds and pots must cover 1% of the floor area and plant walls must cover 2% of the floor area), and water features (projects over 100,000 sqft must have a water feature that is either 1.8 m in height or 4 m2 in floor area). Verification is enforced through assurance letters by the architects and owners, and by on-site spot checks. Generally, both metric types can be applied to every building type the WELL Standard addresses, with two exceptions: core and shell construction does not need to include quantitative interior biophilia and existing interiors do not need to include qualitative nature interaction. 
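The quantitative WELL thresholds listed above lend themselves to a simple check. The sketch below is hypothetical: the field names and the checker itself are illustrative assumptions, not IWBI's actual verification tooling, and the standard's full requirements are more detailed than this.

```python
# Hedged sketch of the quantitative biophilia thresholds summarised above.

def well_quantitative_biophilia(project):
    issues = []
    if project["accessible_landscape_fraction"] < 0.25:
        issues.append("less than 25% accessible landscaped grounds / rooftop gardens")
    elif project["planted_fraction_of_landscape"] < 0.70:
        issues.append("less than 70% of that area has plantings")
    if project["plant_bed_fraction_of_floor"] < 0.01:
        issues.append("plant beds/pots cover less than 1% of floor area")
    if project["plant_wall_fraction_of_floor"] < 0.02:
        issues.append("plant walls cover less than 2% of floor area")
    if project["floor_area_sqft"] > 100_000 and not project["has_water_feature"]:
        issues.append("water feature required for projects over 100,000 sqft")
    return issues or ["meets the quantitative biophilia thresholds"]

print(well_quantitative_biophilia({
    "accessible_landscape_fraction": 0.30,
    "planted_fraction_of_landscape": 0.75,
    "plant_bed_fraction_of_floor": 0.012,
    "plant_wall_fraction_of_floor": 0.02,
    "floor_area_sqft": 120_000,
    "has_water_feature": True,
}))
```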
Living Building Challenge The International Living Future Institute is the creator of the living building challenge – a rigorous building standard that aims to maximize building performance. This standard classifies the use of a biophilic environment as an imperative element in their health and happiness section. The living building challenge requires that a framework be created that shows the following: how the project will incorporate nature through environmental features, light, and space, natural shapes and forms, natural patterns, and place-based relationships. The challenge also requires that the occupants be able to connect to nature directly through interaction within the interior and exterior of the building. These are then verified through a preliminary audit procedure. Criticisms Biophilic, or sustainable design more generally, is slowly being accepted by major developers and green building certification companies. However, the benefits of nature were not given scientific credence. Hence, until recently, there was little funding for research that explored the long-term challenges, negatives, and even benefits of biophilic elements in buildings and cities. Some have noted that biophilic design has focused primarily on benefits to humans and few projects that claim to be biophilic contribute positively to nature or biodiversity, an omission that could be easily corrected. Other concerns are the initial and maintenance costs of projects that implement expensive biophilic design principles. Building-scale examples of application Church of Mary Magdalene The Church of Mary Magdalene is in Jerusalem and was consecrated in 1888. This church's architecture is biophilic in that it contains natural geometries, organized complexity, information richness, and organic forms (onion-shaped domes) and materials. On the exterior, complexity and order are shown through the repetitive use of domes, their scale, and placement. Inside, the church experiences symmetry and a savannah-like environment through its vaulting and domes – the columns also have leaf-like fronds, which represents images of nature. Prospect is explored through raised ceilings that have balconies and increased lighting; refuge is experienced in lower areas, where there are reduced lighting and alcoves and throughout, where small windows are encased by thick walls. Fallingwater Fallingwater, one of Frank Lloyd Wright's most famous buildings, exemplifies many biophilic features. The home has human-nature connectivity through the integrative use of the waterfall and stream in its architecture - the sound from these water features can be heard throughout the inside of the home. This allows visitors to feel like they are "participating" in nature rather than "spectating" it like they would be if the waterfall were downstream. In addition, the structure is built around existing foliage and encompasses the local geology by incorporating a large rock in the center of the living room. There are also many glass walls to connect the occupants to the surrounding woods and nature that is outdoors. To better the flow of the space, Wright included many transitional spaces in the home (porches and decks); he also enhanced the direct and indirect experiences of nature by using multiple fireplaces and a wealth of organic shapes, colors, and materials. His use of Kellert's biophilic design principles are prominent throughout the structure, even though this home was constructed before these ideas were developed. 
Khoo Teck Puat Hospital Referred to as a "garden hospital", KTP has an abundance of native plants and water features that surround its exterior. This inclusion of vegetation has increased the biodiversity of the local ecosystem, bringing butterflies and bird species; the rooftop of the hospital is also used by local residents to grow produce. Unlike many other hospitals, 15% of visitors come to Khoo Teck Puat for recreational reasons such as gardening or relaxing. The design behind this hospital was to increase the productivity of its doctors, wellbeing of its visitors, and increase the healing speed and pain resilience of its patients. To do this, the designers incorporated greenery from the hospital's courtyard to its upper floors, where patients have balconies that are covered in scented foliage. The hospital is centered on the Yishun pond, and like Frank Lloyd Wright's Fallingwater, the architects made this natural feature part of the hospital by having water stream through its courtyard, creating the illusion that the water was "drawn" from the pond. The hospital also utilizes natural ventilation as much as possible in common areas and corridors by orienting them in the direction the north and southeast prevailing winds; this has reduced energy consumption by 60% and increased airflow by 20-30%. This creates thermally adequate environments for patients and medical staff alike. Using Kellert strategies above, it is apparent that most of the strategies used for Khoo Teck Puat are direct nature experiences. The hospital also uses transitional spaces to make occupants more connected to the outdoors and has organized complexity throughout its overall architectural design. KTP has created a sense of place for occupants and neighbors, as it acts as a communal place for both those who work there and live nearby. Sandy Hook Elementary School After the disaster that struck Sandy Hook Elementary in 2012, a new school was built to help heal the community and provide a new sense of security for those occupying the space. Major biophilic design parameters that Svigals + Partners included in this project are animal feeders, wetlands, courtyards, natural shapes and patterns, natural materials, transitional spaces, images of nature, natural colors, and use of natural light. The school has incorporated a victory garden that is meant to act as a way of healing for children after the tragedy. The architects wanted the children to feel as if they are learning in the trees so they set the school back at the edge of the woods and surrounded the space with large windows; there are also metaphoric metal trees in the lobby that have reflective metal leaves that refract light onto colored glass. Using Kellert's biophilic framework, it is prevalent that the school utilizes many different nature experiences. The use of wood planks and stone on the outside of the building help enforce indirect experiences of nature because these are natural materials. Further, the interior environment of the school experiences information richness through the architects' use of light reflection and color. Naturalistic shapes are brought into the interior environment through the metal trees and leaves. For experiences of space and place, Svigals + Partners bring nature into the classroom and school through the placement of windows that act as transitional spaces. The school also has a variety of breezeways, bridges, and pathways for students as they move from one space to another. 
Direct experiences of nature are enjoyed through water features, large rain gardens, and courtyards found on the property. The animal feeders also act as a way to bring fauna into the area. City-scale examples of application Singapore, Singapore Nicknamed a "city in a garden", Singapore has dedicated many resources to creating a system of nature preserves, parks and connectors (e.g. the Southern Ridges), and tree-lined streets that promote the return of wildlife and reduce the heat island effect often seen in dense city centers; local authorities agree with Kellert and Beatley that daily doses of nature enhance the wellbeing of the city's citizens. To manage stormwater, the Singaporean government implemented the Bishan-Ang Mo Kio Park Project, in which old concrete water drains were excavated for the reconstruction of the Kallang River; this allowed residents in the area to enjoy the psychological and physical health benefits of having a green space with water. The reimagining of the park has increased the biodiversity of the local ecosystem, with dragonflies, butterflies, hornbills, and smooth-coated otters returning to the Singaporean region; the river also acts as a natural stormwater management system by increasing infiltration and the movement of excess water. To increase the immediate presence of nature in the city, Singapore provides subsidies (up to half the installation cost) for those who include vegetative walls, green roofs, sky parks, and similar features in their building designs. The city-state also has an impressive number of biophilic buildings and structures. For example, its Gardens by the Bay project has an installation called the "Supertree Grove". This urban nature installation has over 160,000 plants from 200 different species installed in the 16 supertrees; many of these urban trees have sky walkways, observatories, and/or solar panels. Lastly, Singapore has made efforts to increase community engagement through the creation of over 1,000 community gardens for resident use. Oslo, Norway Oslo is sandwiched between the Oslo Fjord and wooded areas, and woods serve as an important feature of the municipality. More than two-thirds of the city is protected forest; in recent surveys over 81% of Oslo residents said they had visited these forests at least once in the last year. These forests are protected, as Oslo adheres to ISO 14001 for its forest management; the trees are managed under "living forest" standards, which means that limited harvesting is acceptable. In addition to its extensive forest system, the city compounds its exposure to nature by bringing the natural environment into the urban setting. Being an already compact city (after all, two-thirds of it is forest), Oslo allocates around 20% of its urban land to green spaces; the local government is in the process of creating a network of paths to connect these green areas so that citizens can walk and ride their bikes undisturbed. In addition to expanding park accessibility, the city has also restored its river, the Akerselva, which runs through Oslo's center. Because the water feature is near dense housing, the city made the river more appealing and accessible to residents by adding waterfalls and nature trails; altogether the city has 365 kilometers of nature trails. To connect the city with its fjord, Oslo's government has started the process of putting its roadways underground in tunnels. 
This, combined with the construction of aesthetically creative architecture (Barcode Project) on the waterfront and promenade foot trails, is transforming this area into a place where residents can experience enjoyment from the unobstructed views of the fjord. Lastly, Oslo has a Noise Action Plan to help alleviate urban noise levels – some of these areas (mostly recreational) have noise levels as low as 50 dB. Sydney, Australia One Central Park in Sydney is a residential development known for its innovative biophilic design. Completed in 2014, the project was designed by Ateliers Jean Nouvel and features two towers with a distinctive vertical garden set on a common retail podium. Hydroponic walls with different native and exotic plants serve as natural sun shade that varies with the seasons, protecting the apartments from direct sun in the summer and letting in maximum sunlight in the winter. See also Biomimetic architecture Building-integrated agriculture Ecological design Folkewall Green architecture Green building and wood Green building Green roof Greening Log house Natural building Roof garden Sustainable city Thorncrown Chapel References Biophilia hypothesis Sustainable architecture Urban design
Biometrics
Biometrics are body measurements and calculations related to human characteristics and features. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological characteristics which are related to the shape of the body. Examples include, but are not limited to fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina, odor/scent, voice, shape of ears and gait. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to mouse movement, typing rhythm, gait, signature, voice, and behavioral profiling. Some researchers have coined the term behaviometrics (behavioral biometrics) to describe the latter class of biometrics. More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns. Biometric functionality Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven such factors to be used when assessing the suitability of any trait for use in biometric authentication. Biometric authentication is based upon biometric recognition which is an advanced method of recognising biological and behavioural characteristics of an Individual. Universality means that every person using a system should possess the trait. Uniqueness means the trait should be sufficiently different for individuals in the relevant population such that they can be distinguished from one another. Permanence relates to the manner in which a trait varies over time. More specifically, a trait with good permanence will be reasonably invariant over time with respect to the specific matching algorithm. Measurability (collectability) relates to the ease of acquisition or measurement of the trait. In addition, acquired data should be in a form that permits subsequent processing and extraction of the relevant feature sets. Performance relates to the accuracy, speed, and robustness of technology used (see performance section for more details). Acceptability relates to how well individuals in the relevant population accept the technology such that they are willing to have their biometric trait captured and assessed. Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute. Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application. The block diagram illustrates the two basic modes of a biometric system. 
First, in verification (or authentication) mode the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be. Three steps are involved in the verification of a person. In the first step, reference models for all the users are generated and stored in the model database. In the second step, some samples are matched with reference models to generate the genuine and impostor scores and calculate the threshold. The third step is the testing step. This process may use a smart card, username, or ID number (e.g. PIN) to indicate which template should be used for comparison. Positive recognition is a common use of the verification mode, "where the aim is to prevent multiple people from using the same identity". Second, in identification mode the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for positive recognition (so that the user does not have to provide any information about the template to be used) or for negative recognition of the person "where the system establishes whether the person is who she (implicitly or explicitly) denies to be". The latter function can only be achieved through biometrics since other methods of personal recognition, such as passwords, PINs, or keys, are ineffective. The first time an individual uses a biometric system is called enrollment. During enrollment, biometric information from an individual is captured and stored. In subsequent uses, biometric information is detected and compared with the information stored at the time of enrollment. Note that it is crucial that storage and retrieval of such systems themselves be secure if the biometric system is to be robust. The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the times it is an image acquisition system, but it can change according to the characteristics desired. The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, to enhance the input (e.g. removing background noise), to use some kind of normalization, etc. In the third block, necessary features are extracted. This step is an important step as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement that are not used in the comparison algorithm are discarded in the template to reduce the file size and to protect the identity of the enrollee. However, depending on the scope of the biometric system, original biometric image sources may be retained, such as the PIV-cards used in the Federal Information Processing Standard Personal Identity Verification (PIV) of Federal Employees and Contractors (FIPS 201). During the enrollment phase, the template is simply stored somewhere (on a card or within a database or both). During the matching phase, the obtained template is passed to a matcher that compares it with other existing templates, estimating the distance between them using any algorithm (e.g. 
Hamming distance). The matching program analyzes the template against the input. This is then output for a specified use or purpose (e.g. entrance to a restricted area), though there is a fear that the use of biometric data may face mission creep. Selection of biometrics in any practical application depends upon the characteristic measurements and user requirements. In selecting a particular biometric, factors to consider include performance, social acceptability, ease of circumvention and/or spoofing, robustness, population coverage, size of equipment needed, and identity theft deterrence. The selection of a biometric is based on user requirements and considers sensor and device availability, computational time and reliability, cost, sensor size, and power consumption. Multimodal biometric system Multimodal biometric systems use multiple sensors or biometrics to overcome the limitations of unimodal biometric systems. For instance, iris recognition systems can be compromised by aging irises, and electronic fingerprint recognition can be worsened by worn-out or cut fingerprints. While unimodal biometric systems are limited by the integrity of their identifier, it is unlikely that several unimodal systems will suffer from identical limitations. Multimodal biometric systems can obtain sets of information from the same marker (i.e., multiple images of an iris, or scans of the same finger) or information from different biometrics (requiring fingerprint scans and, using voice recognition, a spoken passcode). Multimodal biometric systems can fuse these unimodal systems sequentially, simultaneously, a combination thereof, or in series, which refer to sequential, parallel, hierarchical, and serial integration modes, respectively. Fusion of the biometric information can occur at different stages of a recognition system. In case of feature-level fusion, the data itself or the features extracted from multiple biometrics are fused. Matching-score-level fusion consolidates the scores generated by multiple classifiers pertaining to different modalities. Finally, in case of decision-level fusion the final results of multiple classifiers are combined via techniques such as majority voting. Feature-level fusion is believed to be more effective than the other levels of fusion because the feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier; therefore, fusion at the feature level is expected to provide better recognition results. Furthermore, evolving biometric market trends underscore the importance of technological integration, showcasing a shift towards combining multiple biometric modalities for enhanced security and identity verification, aligning with the advancements in multimodal biometric systems. Spoof attacks consist of submitting fake biometric traits to biometric systems, and are a major threat that can curtail their security. Multimodal biometric systems are commonly believed to be intrinsically more robust to spoof attacks, but recent studies have shown that they can be evaded by spoofing even a single biometric trait. One such proposed system is a multimodal biometric cryptosystem involving the face, fingerprint, and palm vein, described by Prasanalakshmi. This cryptosystem combines biometrics with cryptography, where the palm vein acts as a cryptographic key, offering a high level of security since palm veins are unique and difficult to forge. 
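Returning briefly to the generic matching stage described above, the following minimal Python sketch models templates as fixed-length binary feature vectors and uses the Hamming distance as the dissimilarity measure: verification is a one-to-one comparison against a claimed identity's template, while identification is a one-to-many search over a gallery. The template length, the 0.32 decision threshold, and all names are assumptions made for the example, not parameters of any real system.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of bits that differ between two binary templates."""
    return np.count_nonzero(a != b) / a.size

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.32) -> bool:
    """One-to-one comparison: accept the claimed identity if the distance
    falls below the decision threshold."""
    return hamming_distance(probe, enrolled) <= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.32):
    """One-to-many comparison: return the best-matching enrolled identity,
    or None if no gallery template is close enough."""
    best_id, best_dist = None, 1.0
    for identity, template in gallery.items():
        d = hamming_distance(probe, template)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

# Usage sketch with random bits standing in for extracted features.
rng = np.random.default_rng(0)
alice = rng.integers(0, 2, 2048)
probe = alice.copy()
probe[rng.choice(2048, 100, replace=False)] ^= 1      # simulate sensor noise
gallery = {"alice": alice, "bob": rng.integers(0, 2, 2048)}
print(verify(probe, alice))         # True: distance well below threshold
print(identify(probe, gallery))     # "alice"
```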
In the Prasanalakshmi cryptosystem, the fingerprint component involves minutiae extraction (terminations and bifurcations) and matching techniques; steps include image enhancement, binarization, ROI extraction, and minutiae thinning. The face component uses class-based scatter matrices to calculate features for recognition, and the palm vein acts as an unbreakable cryptographic key, ensuring that only the correct user can access the system. The cancelable biometrics concept allows biometric traits to be altered slightly to ensure privacy and avoid theft; if compromised, new variations of the biometric data can be issued. For encryption, the fingerprint template is encrypted using the palm vein key via XOR operations, and this encrypted fingerprint is hidden within the face image using steganographic techniques. During enrollment and verification, the biometric data (fingerprint, palm vein, face) are captured, encrypted, and embedded into a face image; the system then extracts the biometric data and compares it with stored values for verification. The system was tested with fingerprint databases, achieving 75% verification accuracy at an equal error rate of 25%, with a processing time of approximately 50 seconds for enrollment and 22 seconds for verification. The scheme offers high security due to palm vein encryption and is effective against biometric spoofing, and the multimodal approach ensures reliability if one biometric fails. It also has potential for integration with smart cards or on-card systems, enhancing security in personal identification systems. Performance The discriminating powers of all biometric technologies depend on the amount of entropy they are able to encode and use in matching. The following are used as performance metrics for biometric systems: False match rate (FMR, also called FAR = False Accept Rate): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted. On a similarity scale, if the person is in reality an impostor but the matching score is higher than the threshold, then they are treated as genuine. This increases the FMR, which thus also depends upon the threshold value. False non-match rate (FNMR, also called FRR = False Reject Rate): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected. Receiver operating characteristic or relative operating characteristic (ROC): The ROC plot is a visual characterization of the trade-off between the FMR and the FNMR. In general, the matching algorithm makes a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Conversely, a higher threshold will reduce the FMR but increase the FNMR. A common variation is the detection error trade-off (DET), which is obtained using normal deviation scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors). Equal error rate or crossover error rate (EER or CER): the rate at which both acceptance and rejection errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate. 
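The threshold trade-off just described can be made concrete with a short sketch: given genuine and impostor similarity scores, it sweeps the decision threshold, computes the FMR and FNMR at each setting, and reads off the EER where the two error rates cross. The Gaussian score distributions below are synthetic stand-ins for illustration and do not describe any particular device.

```python
import numpy as np

def error_rates(genuine, impostor, thresholds):
    """For each threshold on a similarity score, compute the false match rate
    (impostors accepted) and false non-match rate (genuine users rejected)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    return fmr, fnmr

rng = np.random.default_rng(1)
genuine = rng.normal(0.80, 0.08, 5000)    # matching attempts by the right person
impostor = rng.normal(0.45, 0.10, 5000)   # attempts by other people
thresholds = np.linspace(0.0, 1.0, 501)

fmr, fnmr = error_rates(genuine, impostor, thresholds)
eer_index = np.argmin(np.abs(fmr - fnmr))   # point where the two error rates cross
print(f"EER ~ {fmr[eer_index]:.3%} at threshold {thresholds[eer_index]:.2f}")
```

Raising the threshold moves errors from the FMR column to the FNMR column and vice versa, which is exactly the trade-off the ROC and DET curves visualize.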
Failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low-quality inputs. Failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly. Template capacity: the maximum number of sets of data that can be stored in the system. History An early cataloguing of fingerprints dates back to 1885, when Juan Vucetich started a collection of fingerprints of criminals in Argentina. Josh Ellenbogen and Nitzan Lebovic argued that biometrics originated in the identification systems of criminal activity developed by Alphonse Bertillon (1853–1914) and by Francis Galton's theory of fingerprints and physiognomy. According to Lebovic, Galton's work "led to the application of mathematical models to fingerprints, phrenology, and facial characteristics", as part of "absolute identification" and "a key to both inclusion and exclusion" of populations. Accordingly, "the biometric system is the absolute political weapon of our era" and a form of "soft control". The theoretician David Lyon showed that during the past two decades biometric systems have penetrated the civilian market, and blurred the lines between governmental forms of control and private corporate control. Kelly A. Gates identified 9/11 as the turning point for the cultural language of our present: "in the language of cultural studies, the aftermath of 9/11 was a moment of articulation, where objects or events that have no necessary connection come together and a new discourse formation is established: automated facial recognition as a homeland security technology." Adaptive biometric systems Adaptive biometric systems aim to auto-update the templates or model to the intra-class variation of the operational data. The two-fold advantages of these systems are solving the problem of limited training data and tracking the temporal variations of the input data through adaptation. Recently, adaptive biometrics have received significant attention from the research community. This research direction is expected to gain momentum because of its key promulgated advantages. First, with an adaptive biometric system, one no longer needs to collect a large number of biometric samples during the enrollment process. Second, it is no longer necessary to enroll again or retrain the system from scratch in order to cope with a changing environment. This convenience can significantly reduce the cost of maintaining a biometric system. Despite these advantages, there are several open issues involved with these systems: for example, a misclassification error (false acceptance) by the biometric system can cause adaptation using an impostor sample. However, continuous research efforts are directed at resolving the open issues associated with the field of adaptive biometrics. More information about adaptive biometric systems can be found in the critical review by Rattani et al. Recent advances in emerging biometrics In recent times, biometrics based on brain (electroencephalogram) and heart (electrocardiogram) signals have emerged. Another emerging example is finger vein recognition, which uses pattern-recognition techniques based on images of human vascular patterns. The advantage of such newer technologies is that they are more fraud resistant than conventional biometrics like fingerprints. However, such technology is generally more cumbersome and still has issues such as lower accuracy and poor reproducibility over time. 
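As a rough illustration of the adaptation idea above, the sketch below updates a stored template with an exponential moving average, but only on high-confidence matches, which is one common way to limit the risk of adapting to an impostor sample. The cosine similarity measure, the thresholds, and the learning rate are illustrative assumptions rather than a prescribed design.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def adapt_template(stored, probe, accept_thr=0.80, update_thr=0.90, rate=0.10):
    """Return (accepted, updated_template). The template is only updated on
    high-confidence matches to reduce the risk of adapting to an impostor."""
    score = cosine_similarity(stored, probe)
    accepted = score >= accept_thr
    if score >= update_thr:
        stored = (1.0 - rate) * stored + rate * probe   # exponential moving average
    return accepted, stored

# Usage: a slowly drifting user feature vector keeps matching because the
# stored template follows the drift instead of being frozen at enrollment.
rng = np.random.default_rng(2)
template = rng.normal(size=128)
user = template.copy()
for day in range(50):
    user = user + rng.normal(scale=0.02, size=128)      # gradual ageing/drift
    accepted, template = adapt_template(template, user)
```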
On the portability side of biometric products, more and more vendors are embracing significantly miniaturized biometric authentication systems (BAS) thereby driving elaborate cost savings, especially for large-scale deployments. Operator signatures An operator signature is a biometric mode where the manner in which a person using a device or complex system is recorded as a verification template. One potential use for this type of biometric signature is to distinguish among remote users of telerobotic surgery systems that utilize public networks for communication. Proposed requirement for certain public networks John Michael (Mike) McConnell, a former vice admiral in the United States Navy, a former director of U.S. National Intelligence, and senior vice president of Booz Allen Hamilton, promoted the development of a future capability to require biometric authentication to access certain public networks in his keynote speech at the 2009 Biometric Consortium Conference. A basic premise in the above proposal is that the person that has uniquely authenticated themselves using biometrics with the computer is in fact also the agent performing potentially malicious actions from that computer. However, if control of the computer has been subverted, for example in which the computer is part of a botnet controlled by a hacker, then knowledge of the identity of the user at the terminal does not materially improve network security or aid law enforcement activities. Animal biometrics Rather than tags or tattoos, biometric techniques may be used to identify individual animals: zebra stripes, blood vessel patterns in rodent ears, muzzle prints, bat wing patterns, primate facial recognition and koala spots have all been tried. Issues and concerns Human dignity Biometrics have been considered also instrumental to the development of state authority (to put it in Foucauldian terms, of discipline and biopower). By turning the human subject into a collection of biometric parameters, biometrics would dehumanize the person, infringe bodily integrity, and, ultimately, offend human dignity. In a well-known case, Italian philosopher Giorgio Agamben refused to enter the United States in protest at the United States Visitor and Immigrant Status Indicator (US-VISIT) program's requirement for visitors to be fingerprinted and photographed. Agamben argued that gathering of biometric data is a form of bio-political tattooing, akin to the tattooing of Jews during the Holocaust. According to Agamben, biometrics turn the human persona into a bare body. Agamben refers to the two words used by Ancient Greeks for indicating "life", zoe, which is the life common to animals and humans, just life; and bios, which is life in the human context, with meanings and purposes. Agamben envisages the reduction to bare bodies for the whole humanity. For him, a new bio-political relationship between citizens and the state is turning citizens into pure biological life (zoe) depriving them from their humanity (bios); and biometrics would herald this new world. In Dark Matters: On the Surveillance of Blackness, surveillance scholar Simone Browne formulates a similar critique as Agamben, citing a recent study relating to biometrics R&D that found that the gender classification system being researched "is inclined to classify Africans as males and Mongoloids as females." 
Consequently, Browne argues that the conception of an objective biometric technology is difficult if such systems are subjectively designed, and are vulnerable to cause errors as described in the study above. The stark expansion of biometric technologies in both the public and private sector magnifies this concern. The increasing commodification of biometrics by the private sector adds to this danger of loss of human value. Indeed, corporations value the biometric characteristics more than the individuals value them. Browne goes on to suggest that modern society should incorporate a "biometric consciousness" that "entails informed public debate around these technologies and their application, and accountability by the state and the private sector, where the ownership of and access to one's own body data and other intellectual property that is generated from one's body data must be understood as a right." Other scholars have emphasized, however, that the globalized world is confronted with a huge mass of people with weak or absent civil identities. Most developing countries have weak and unreliable documents and the poorer people in these countries do not have even those unreliable documents. Without certified personal identities, there is no certainty of right, no civil liberty. One can claim his rights, including the right to refuse to be identified, only if he is an identifiable subject, if he has a public identity. In such a sense, biometrics could play a pivotal role in supporting and promoting respect for human dignity and fundamental rights. Privacy and discrimination It is possible that data obtained during biometric enrollment may be used in ways for which the enrolled individual has not consented. For example, most biometric features could disclose physiological and/or pathological medical conditions (e.g., some fingerprint patterns are related to chromosomal diseases, iris patterns could reveal sex, hand vein patterns could reveal vascular diseases, most behavioral biometrics could reveal neurological diseases, etc.). Moreover, second generation biometrics, notably behavioral and electro-physiologic biometrics (e.g., based on electrocardiography, electroencephalography, electromyography), could be also used for emotion detection. There are three categories of privacy concerns: Unintended functional scope: The authentication goes further than authentication, such as finding a tumor. Unintended application scope: The authentication process correctly identifies the subject when the subject did not wish to be identified. Covert identification: The subject is identified without seeking identification or authentication, i.e. a subject's face is identified in a crowd. Danger to owners of secured items When thieves cannot get access to secure properties, there is a chance that the thieves will stalk and assault the property owner to gain access. If the item is secured with a biometric device, the damage to the owner could be irreversible, and potentially cost more than the secured property. For example, in 2005, Malaysian car thieves cut off a man's finger when attempting to steal his Mercedes-Benz S-Class. Attacks at presentation In the context of biometric systems, presentation attacks may also be called "spoofing attacks". As per the recent ISO/IEC 30107 standard, presentation attacks are defined as "presentation to the biometric capture subsystem with the goal of interfering with the operation of the biometric system". These attacks can be either impersonation or obfuscation attacks. 
Impersonation attacks try to gain access by pretending to be someone else. Obfuscation attacks may, for example, try to evade face detection and face recognition systems. Several methods have been proposed to counteract presentation attacks. Surveillance humanitarianism in times of crisis Biometrics are employed by many aid programs in times of crisis in order to prevent fraud and ensure that resources are properly available to those in need. Humanitarian efforts are motivated by promoting the welfare of individuals in need; however, the use of biometrics as a form of surveillance humanitarianism can create conflict due to the varying interests of the groups involved in a particular situation. Disputes over the use of biometrics between aid programs and party officials stall the distribution of resources to the people who need help the most. In July 2019, the United Nations World Food Program and Houthi rebels were involved in a large dispute over the use of biometrics to ensure that resources are provided to the hundreds of thousands of civilians in Yemen whose lives are threatened. The refusal to cooperate with the interests of the United Nations World Food Program resulted in the suspension of food aid to the Yemeni population. The use of biometrics may provide aid programs with valuable information; however, its potential solutions may not be best suited to chaotic times of crisis. For conflicts that are caused by deep-rooted political problems, the implementation of biometrics may not provide a long-term solution. Cancelable biometrics One advantage of passwords over biometrics is that they can be re-issued. If a token or a password is lost or stolen, it can be cancelled and replaced by a newer version. This is not naturally available in biometrics. If someone's face is compromised from a database, they cannot cancel or reissue it. If an electronic biometric identifier is stolen, it is nearly impossible to change the underlying biometric feature. This renders the person's biometric feature questionable for future use in authentication, as was the case with the hacking of security-clearance-related background information from the Office of Personnel Management (OPM) in the United States. Cancelable biometrics is a way to incorporate protection and replacement features into biometrics to create a more secure system. It was first proposed by Ratha et al. "Cancelable biometrics refers to the intentional and systematically repeatable distortion of biometric features in order to protect sensitive user-specific data. If a cancelable feature is compromised, the distortion characteristics are changed, and the same biometrics is mapped to a new template, which is used subsequently. Cancelable biometrics is one of the major categories for biometric template protection purpose besides biometric cryptosystem." In biometric cryptosystems, "the error-correcting coding techniques are employed to handle intraclass variations." This ensures a high level of security but has limitations, such as a specific input format that tolerates only small intraclass variations. Several methods for generating new exclusive biometrics have been proposed. The first fingerprint-based cancelable biometric system was designed and developed by Tulyakov et al. Essentially, cancelable biometrics perform a distortion of the biometric image or features before matching. The variability in the distortion parameters provides the cancelable nature of the scheme. 
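The sketch below illustrates the general principle only, not Ratha's or Tulyakov's specific schemes: a repeatable, key-dependent distortion (here a permutation plus an XOR mask derived from a user key) is applied to the binary template before storage and matching, so a compromised template can be revoked simply by issuing a new key and re-enrolling. All names and parameters are assumptions made for the example.

```python
import hashlib
import numpy as np

def transform_template(template_bits: np.ndarray, user_key: bytes) -> np.ndarray:
    """Distort a binary template with a permutation and mask derived from the key.
    The same key always reproduces the same distortion, so matching can be done
    entirely in the transformed domain."""
    seed = int.from_bytes(hashlib.sha256(user_key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    perm = rng.permutation(template_bits.size)
    mask = rng.integers(0, 2, template_bits.size)
    return template_bits[perm] ^ mask

# Re-enrolling with a new key yields an unlinkable replacement template from
# the same underlying biometric.
rng = np.random.default_rng(3)
finger_code = rng.integers(0, 2, 1024)
stored_v1 = transform_template(finger_code, b"key-issued-2023")
stored_v2 = transform_template(finger_code, b"key-issued-2024")   # revoked and re-issued
probe = transform_template(finger_code, b"key-issued-2024")
print(np.array_equal(probe, stored_v2))   # True: matches the current template
print(np.array_equal(probe, stored_v1))   # False: the old template is useless
```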
Some of the proposed techniques operate using their own recognition engines, such as Teoh et al. and Savvides et al., whereas other methods, such as Dabbah et al., take advantage of the advancement of well-established biometric research for their recognition front-end to conduct recognition. Although this increases the restrictions on the protection system, it makes the cancelable templates more accessible to available biometric technologies. Proposed soft biometrics Soft biometrics are non-strict biometric recognition practices proposed as a safeguard against identity cheats and thieves. Soft biometric traits are physical, behavioral, or adhered human characteristics that have been derived from the way human beings normally distinguish their peers (e.g. height, gender, hair color). They are used to complement the identity information provided by the primary biometric identifiers. Although soft biometric characteristics lack the distinctiveness and permanence to recognize an individual uniquely and reliably, and can be easily faked, they provide some evidence about the user's identity that could be beneficial. In other words, despite the fact that they are unable to individualize a subject, they are effective in distinguishing between people. Combinations of personal attributes like gender, race, eye color, height, and other visible identification marks can be used to improve the performance of traditional biometric systems. Most soft biometrics can be easily collected and are actually collected during enrollment. Two main ethical issues are raised by soft biometrics. First, some soft biometric traits are strongly culturally based; e.g., skin color used to determine ethnicity risks supporting racist approaches; biometric sex recognition at best recognizes gender from tertiary sexual characters and is unable to determine genetic or chromosomal sex; and soft biometrics for age recognition are often deeply influenced by ageist stereotypes. Second, soft biometrics have strong potential for categorizing and profiling people, thus risking the support of processes of stigmatization and exclusion. Data protection of biometric data in international law Many countries, including the United States, are planning to share biometric data with other nations. In testimony before the US House Appropriations Committee, Subcommittee on Homeland Security, on "biometric identification" in 2009, Kathleen Kraninger and Robert A Mocny commented on international cooperation and collaboration with respect to biometric data, as follows: According to an article written in 2009 by S. Magnuson in the National Defense Magazine entitled "Defense Department Under Pressure to Share Biometric Data", the United States has bilateral agreements with other nations aimed at sharing biometric data. To quote that article: Likelihood of full governmental disclosure Certain members of the civilian community are worried about how biometric data is used, but full disclosure may not be forthcoming. In particular, the Unclassified Report of the United States' Defense Science Board Task Force on Defense Biometrics states that it is wise to protect, and sometimes even to disguise, the true and total extent of national capabilities in areas related directly to the conduct of security-related activities. This also potentially applies to biometrics. It goes on to say that this is a classic feature of intelligence and military operations. In short, the goal is to preserve the security of 'sources and methods'. 
Countries applying biometrics Countries using biometrics include Australia, Brazil, Bulgaria, Canada, Cyprus, Greece, China, Gambia, Germany, India, Iraq, Ireland, Israel, Italy, Malaysia, Netherlands, New Zealand, Nigeria, Norway, Pakistan, Poland, South Africa, Saudi Arabia, Tanzania, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States and Venezuela. Among low to middle income countries, roughly 1.2 billion people have already received identification through a biometric identification program. There are also numerous countries applying biometrics for voter registration and similar electoral purposes. According to the International IDEA's ICTs in Elections Database, some of the countries using (2017) Biometric Voter Registration (BVR) are Armenia, Angola, Bangladesh, Bhutan, Bolivia, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Colombia, Comoros, Congo (Democratic Republic of), Costa Rica, Ivory Coast, Dominican Republic, Fiji, Gambia, Ghana, Guatemala, India, Iraq, Kenya, Lesotho, Liberia, Malawi, Mali, Mauritania, Mexico, Morocco, Mozambique, Namibia, Nepal, Nicaragua, Nigeria, Panama, Peru, Philippines, Senegal, Sierra Leone, Solomon Islands, Somaliland, Swaziland, Tanzania, Uganda, Uruguay, Venezuela, Yemen, Zambia, and Zimbabwe. India's national ID program India's national ID program, called Aadhaar, is the largest biometric database in the world. It is a biometrics-based digital identity assigned for a person's lifetime, verifiable online instantly in the public domain, at any time, from anywhere, in a paperless way. It is designed to enable government agencies to deliver retail public services securely based on biometric data (fingerprint, iris scan and face photo), along with demographic data (name, age, gender, address, parent/spouse name, mobile phone number) of a person. The data is transmitted in encrypted form over the internet for authentication, aiming to free it from the limitations of physical presence of a person at a given place. About 550 million residents have been enrolled and assigned 480 million Aadhaar national identification numbers as of 7 November 2013. It aims to cover the entire population of 1.2 billion in a few years. However, it is being challenged by critics over privacy concerns and the possible transformation of the state into a surveillance state, or into a banana republic. The project was also met with mistrust regarding the safety of the social protection infrastructures. To address fears among the people, India's Supreme Court put a new ruling into action stating that, from then on, privacy was to be regarded as a fundamental right; this ruling was handed down on 24 August 2017. Malaysia's MyKad national ID program The current identity card, known as MyKad, was introduced by the National Registration Department of Malaysia on 5 September 2001, with Malaysia becoming the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on a built-in computer chip embedded in a piece of plastic. Besides the main purpose of the card as a validation tool and proof of citizenship other than the birth certificate, MyKad also serves as a valid driver's license, an ATM card, an electronic purse, and a public key, among other applications, as part of the Malaysian Government Multipurpose Card (GMPC) initiative, if the bearer chooses to activate the functions. 
See also Access control AFIS AssureSign BioAPI Biometrics in schools European Association for Biometrics Fingerprint recognition Fuzzy extractor Gait analysis Government database Handwritten biometric recognition Identity Cards Act 2006 International Identity Federation Keystroke dynamics Multiple Biometric Grand Challenge Private biometrics Retinal scan Signature recognition Smart city Speaker recognition Vein matching Voice analysis Notes References Further reading Biometrics Glossary – Glossary of Biometric Terms based on information derived from the National Science and Technology Council (NSTC) Subcommittee on Biometrics. Published by Fulcrum Biometrics, LLC, July 2013 Biometrics Institute - Explanatory Dictionary of Biometrics A glossary of biometrics terms, offering detailed definitions to supplement existing resources. Published May 2023. Delac, K., Grgic, M. (2004). A Survey of Biometric Recognition Methods. "Fingerprints Pay For School Lunch". (2001). Retrieved 2008-03-02. "Germany to phase-in biometric passports from November 2005". (2005). E-Government News. Retrieved 2006-06-11. Oezcan, V. (2003). "Germany Weighs Biometric Registration Options for Visa Applicants", Humboldt University Berlin. Retrieved 2006-06-11. Ulrich Hottelet: Hidden champion – Biometrics between boom and big brother, German Times, January 2007. Dunstone, T. and Yager, N., 2008. Biometric system and data analysis. 1st ed. New York: Springer. External links Surveillance Authentication methods Identification
Endosymbiont
An endosymbiont or endobiont is an organism that lives within the body or cells of another organism. Typically the two organisms are in a mutualistic relationship. Examples are nitrogen-fixing bacteria (called rhizobia), which live in the root nodules of legumes, single-cell algae inside reef-building corals, and bacterial endosymbionts that provide essential nutrients to insects. Endosymbiosis played key roles in the development of eukaryotes and plants. Roughly 2.2 billion years ago an archaeon absorbed a bacterium through phagocytosis; that bacterium eventually became the mitochondria that provide energy to almost all living eukaryotic cells. Approximately 1 billion years ago, some of those cells absorbed cyanobacteria that eventually became chloroplasts, organelles that produce energy from sunlight. Approximately 100 million years ago, a lineage of amoeba in the genus Paulinella independently engulfed a cyanobacterium that evolved to be functionally synonymous with traditional chloroplasts, called chromatophores. Some 100 million years ago, UCYN-A, a nitrogen-fixing bacterium, became an endosymbiont of the marine alga Braarudosphaera bigelowii, eventually evolving into a nitroplast, which fixes nitrogen. Similarly, diatoms in the family Rhopalodiaceae have cyanobacterial endosymbionts, called spheroid bodies or diazoplasts, which have been proposed to be in the early stages of organelle evolution. Symbionts are either obligate (require their host to survive) or facultative (can survive independently). The most common examples of obligate endosymbiosis are mitochondria and chloroplasts, which reproduce via mitosis in tandem with their host cells. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated by treatments that target their bacterial host. Etymology Endosymbiosis comes from the Greek: ἔνδον endon "within", σύν syn "together" and βίωσις biosis "living". Symbiogenesis Symbiogenesis theory holds that eukaryotes evolved by absorbing prokaryotes. Typically, one organism envelops a bacterium and the two evolve a mutualistic relationship. The absorbed bacterium (the endosymbiont) eventually lives exclusively within the host cells. This fits the concept of observed organelle development. Typically the endosymbiont's genome shrinks, discarding genes whose roles are displaced by the host. For example, the Hodgkinia genome of Magicicada cicadas is much different from that of the formerly free-living bacterium. The cicada life cycle involves years of stasis underground. The symbiont produces many generations during this phase, experiencing little selection pressure, allowing their genomes to diversify. Selection is episodic (when the cicadas reproduce). The original Hodgkinia genome split into three much simpler endosymbionts, each encoding only a few genes, an instance of punctuated equilibrium producing distinct lineages. The host requires all three symbionts. Transmission Symbiont transmission is the process where the host acquires its symbiont. Since symbionts are not produced by host cells, they must find their own way to reproduce and populate daughter cells as host cells divide. Horizontal, vertical, and mixed-mode (a hybrid of horizontal and vertical) transmission are the three paths for symbiont transfer. Horizontal Horizontal symbiont transfer (horizontal transmission) is a process where a host acquires a facultative symbiont from the environment or another host. 
The Rhizobia-Legume symbiosis (a bacteria-plant endosymbiosis) is a prime example of this modality. The Rhizobia-legume symbiotic relationship is important for processes such as the formation of root nodules. It starts with flavonoids released by the legume host, which cause the rhizobia (the endosymbiont) to activate their Nod genes. These Nod genes generate lipooligosaccharide signals that the legume detects, leading to root nodule formation. This process feeds into other processes such as nitrogen fixation in plants. The evolutionary advantage of such an interaction is that it allows genetic exchange between both organisms involved, increasing the propensity for novel functions, as seen in the plant-bacterium interaction (holobiont formation). Vertical Vertical transmission takes place when the symbiont moves directly from parent to offspring, in contrast to horizontal transmission, where each generation acquires symbionts anew from the environment; the nitrogen-fixing bacteria acquired by certain plant roots are a horizontal example, while the pea aphid's symbionts are a vertical one. A third type is mixed-mode transmission, where symbionts move horizontally for some generations, after which they are acquired vertically. Wigglesworthia, a tsetse fly symbiont, is vertically transmitted (via mother's milk). In vertical transmission, the symbionts do not need to survive independently, often leading them to have a reduced genome. For instance, pea aphid symbionts have lost genes for essential molecules and rely on the host to supply them. In return, the symbionts synthesize essential amino acids for the aphid host. When a symbiont reaches this stage, it begins to resemble a cellular organelle, similar to mitochondria or chloroplasts. Such dependent hosts and symbionts form a holobiont. In the event of a bottleneck, a decrease in symbiont diversity could compromise host-symbiont interactions, as deleterious mutations accumulate. Hosts Invertebrates The best-studied examples of endosymbiosis are in invertebrates. These symbioses affect organisms with global impact, including Symbiodinium (in corals) and Wolbachia (in insects). Many insect agricultural pests and human disease vectors have intimate relationships with primary endosymbionts. Insects Scientists classify insect endosymbionts as primary or secondary. Primary endosymbionts (P-endosymbionts) have been associated with their insect hosts for millions of years (from ten to several hundred million years). They form obligate associations and display cospeciation with their insect hosts. Secondary endosymbionts have been associated with their hosts more recently; they may be horizontally transferred, live in the hemolymph of the insects (not in specialized bacteriocytes; see below), and are not obligate. Primary Among primary endosymbionts of insects, the best-studied are the pea aphid (Acyrthosiphon pisum) and its endosymbiont Buchnera sp. APS, the tsetse fly Glossina morsitans morsitans and its endosymbiont Wigglesworthia glossinidia brevipalpis, and the endosymbiotic protists in lower termites. As with endosymbiosis in other insects, the symbiosis is obligate. Nutritionally-enhanced diets allow symbiont-free specimens to survive, but they are unhealthy, and at best survive only a few generations. In some insect groups, these endosymbionts live in specialized insect cells called bacteriocytes (also called mycetocytes), and are maternally transmitted, i.e. the mother transmits her endosymbionts to her offspring. 
In some cases, the bacteria are transmitted in the egg, as in Buchnera; in others, like Wigglesworthia, they are transmitted via milk to the embryo. In termites, the endosymbionts reside within the hindguts and are transmitted through trophallaxis among colony members. Primary endosymbionts are thought to help the host either by providing essential nutrients or by metabolizing insect waste products into safer forms. For example, the putative primary role of Buchnera is to synthesize essential amino acids that the aphid cannot acquire from its diet of plant sap. The primary role of Wigglesworthia is to synthesize vitamins that the tsetse fly does not get from the blood that it eats. In lower termites, the endosymbiotic protists play a major role in the digestion of lignocellulosic materials that constitute a bulk of the termites' diet. Bacteria benefit from the reduced exposure to predators and competition from other bacterial species, the ample supply of nutrients and relative environmental stability inside the host. Primary endosymbionts of insects have among the smallest of known bacterial genomes and have lost many genes commonly found in closely related bacteria. One theory claimed that some of these genes are not needed in the environment of the host insect cell. A complementary theory suggests that the relatively small numbers of bacteria inside each insect decrease the efficiency of natural selection in 'purging' deleterious mutations and small mutations from the population, resulting in a loss of genes over many millions of years. Research in which a parallel phylogeny of bacteria and insects was inferred supports the assumption that primary endosymbionts are transferred only vertically. Attacking obligate bacterial endosymbionts may present a way to control their hosts, many of which are pests or human disease carriers. For example, aphids are crop pests and the tsetse fly carries the organism Trypanosoma brucei that causes African sleeping sickness. Studying insect endosymbionts can aid understanding of the origins of symbioses in general, as a proxy for understanding endosymbiosis in other species. The best-studied ant endosymbionts are Blochmannia bacteria, which are the primary endosymbionts of Camponotus ants. In 2018 a new ant-associated symbiont, Candidatus Westeberhardia Cardiocondylae, was discovered in Cardiocondyla. It is reported to be a primary symbiont. Secondary The pea aphid (Acyrthosiphon pisum) contains at least three secondary endosymbionts, Hamiltonella defensa, Regiella insecticola, and Serratia symbiotica. Hamiltonella defensa defends its aphid host from parasitoid wasps. This symbiosis replaces lost elements of the insect's immune response. One of the best-understood defensive symbionts is the spiral bacterium Spiroplasma poulsonii. Spiroplasma sp. can be reproductive manipulators, but also defensive symbionts of Drosophila flies. In Drosophila neotestacea, S. poulsonii has spread across North America owing to its ability to defend its fly host against nematode parasites. This defence is mediated by toxins called "ribosome-inactivating proteins" that attack the molecular machinery of invading parasites. These toxins represent one of the first examples of a defensive symbiosis between an insect endosymbiont and its host for which the mechanism is understood. Sodalis glossinidius is a secondary endosymbiont of tsetse flies that lives inter- and intracellularly in various host tissues, including the midgut and hemolymph. 
Phylogenetic studies do not report a correlation between the evolution of Sodalis and that of the tsetse fly. Unlike Wigglesworthia, Sodalis has been cultured in vitro. Cardinium is another secondary endosymbiont, and many other insects host secondary endosymbionts as well. Marine Extracellular endosymbionts are represented in all four extant classes of Echinodermata (Crinoidea, Ophiuroidea, Echinoidea, and Holothuroidea). Little is known of the nature of the association (mode of infection, transmission, metabolic requirements, etc.), but phylogenetic analysis indicates that these symbionts belong to the class Alphaproteobacteria, relating them to Rhizobium and Thiobacillus. Other studies indicate that these subcuticular bacteria may be both abundant within their hosts and widely distributed among the echinoderms. Some marine oligochaeta (e.g., Olavius algarvensis and Inanidrillus spp.) have obligate extracellular endosymbionts that fill the entire body of their host. These marine worms, which lack any digestive or excretory system (no gut, mouth, or nephridia), are nutritionally dependent on their symbiotic chemoautotrophic bacteria. The sea slug Elysia chlorotica's endosymbiont is the alga Vaucheria litorea, and jellyfish of the genus Mastigias have a similar relationship with an alga. Elysia chlorotica forms this relationship intracellularly with the alga's chloroplasts, which retain their photosynthetic capabilities and structures for several months after entering the slug's cells. Trichoplax has two bacterial endosymbionts: Ruthmannia lives inside the animal's digestive cells, while Grellia lives permanently inside the endoplasmic reticulum (ER), the first known symbiont to do so. Paracatenula is a flatworm that has lived in symbiosis with endosymbiotic bacteria for 500 million years. The bacteria produce numerous small, droplet-like vesicles that provide the host with needed nutrients. Dinoflagellates Dinoflagellate endosymbionts of the genus Symbiodinium, commonly known as zooxanthellae, are found in corals, mollusks (especially giant clams such as Tridacna), sponges, and unicellular foraminifera. These endosymbionts capture sunlight and provide their hosts with energy, supporting carbonate deposition. Previously thought to be a single species, Symbiodinium has been shown by molecular phylogenetic evidence to be diverse. In some cases, the host requires a specific Symbiodinium clade. More often, however, the distribution is ecological, with symbionts switching among hosts with ease. When reefs become environmentally stressed, this distribution is related to the observed pattern of coral bleaching and recovery. Thus, the distribution of Symbiodinium on coral reefs and its role in coral bleaching is an important issue in coral reef ecology. Phytoplankton In marine environments, endosymbiont relationships are especially prevalent in oligotrophic or nutrient-poor regions of the ocean such as the North Atlantic. In such waters, cell growth of larger phytoplankton such as diatoms is limited by insufficient nitrate concentrations. Endosymbiotic bacteria fix nitrogen for their hosts and in turn receive organic carbon from photosynthesis. These symbioses play an important role in global carbon cycling. One known symbiosis, between the diatom Hemiaulus spp. and the cyanobacterium Richelia intracellularis, has been reported in North Atlantic, Mediterranean, and Pacific waters. Richelia is found within the diatom frustule of Hemiaulus spp. and has a reduced genome. 
A 2011 study measured nitrogen fixation by the cyanobacterial symbiont Richelia intracellularis at levels well above its intracellular requirements, and found the cyanobacterium was likely fixing nitrogen for its diatom host. Additionally, both host and symbiont cell growth were much greater than in free-living Richelia intracellularis or symbiont-free Hemiaulus spp. The Hemiaulus-Richelia symbiosis is not obligatory, especially in nitrogen-replete areas. Richelia intracellularis is also found in Rhizosolenia spp., a diatom found in oligotrophic oceans. Compared to the Hemiaulus host, the endosymbiosis with Rhizosolenia is much more consistent, and Richelia intracellularis is generally found in Rhizosolenia. There are some asymbiotic (occurring without an endosymbiont) Rhizosolenia; however, there appear to be mechanisms limiting the growth of these organisms in low-nutrient conditions. Cell division for the diatom host and the cyanobacterial symbiont can become uncoupled, and the mechanisms for passing bacterial symbionts to daughter cells during cell division are still relatively unknown. Other endosymbioses with nitrogen fixers in the open ocean include Calothrix in Chaetoceros spp. and UCYN-A in a prymnesiophyte microalga. The Chaetoceros-Calothrix endosymbiosis is hypothesized to be more recent, as the Calothrix genome is generally intact, whereas other symbionts, such as UCYN-A and Richelia, have reduced genomes. This reduction in genome size occurs within nitrogen metabolism pathways, indicating that the endosymbiont species are generating nitrogen for their hosts and losing the ability to use this nitrogen independently. This endosymbiont reduction in genome size might be a step that occurred in the evolution of organelles (above). Protists Mixotricha paradoxa is a protozoan that lacks mitochondria; however, spherical bacteria live inside the cell and serve the function of the mitochondria. Mixotricha also has three other species of symbionts that live on the surface of the cell. Paramecium bursaria, a species of ciliate, has a mutualistic symbiotic relationship with green algae called zoochlorellae, which live in its cytoplasm. Platyophrya chlorelligera is a freshwater ciliate that harbors Chlorella that perform photosynthesis. Strombidium purpureum is a marine ciliate that uses endosymbiotic, purple, non-sulphur bacteria for anoxygenic photosynthesis. Paulinella chromatophora is a freshwater amoeboid that has a cyanobacterial endosymbiont. Many foraminifera are hosts to several types of algae, such as red algae, diatoms, dinoflagellates and chlorophyta. These endosymbionts can be transmitted vertically to the next generation via asexual reproduction of the host, but because the endosymbionts are larger than the foraminiferal gametes, the algae need to be acquired horizontally following sexual reproduction. Several species of radiolaria have photosynthetic symbionts. In some species the host digests algae to keep their population at a constant level. Hatena arenicola is a flagellate protist with a complicated feeding apparatus that feeds on other microbes. When it engulfs a green Nephroselmis alga, the feeding apparatus disappears and it becomes photosynthetic. During mitosis the alga is transferred to only one of the daughter cells, while the other cell restarts the cycle. In 1966, biologist Kwang W. Jeon found that a lab strain of Amoeba proteus had been infected by bacteria that lived inside the cytoplasmic vacuoles. This infection killed almost all of the infected protists. 
After the equivalent of 40 host generations, the two organisms became mutually interdependent. A genetic exchange between the prokaryotes and protists occurred. Vertebrates The spotted salamander (Ambystoma maculatum) lives in a relationship with the alga Oophila amblystomatis, which grows in its egg cases. Plants All vascular plants harbor endosymbionts, referred to in this context as endophytes. They include bacteria, fungi, viruses, protozoa and even microalgae. Endophytes aid in processes such as growth and development, nutrient uptake, and defense against biotic and abiotic stresses like drought, salinity, heat, and herbivores. Plant symbionts can be categorized into epiphytic, endophytic, and mycorrhizal. These relations can also be categorized as beneficial, mutualistic, neutral, and pathogenic. Microorganisms living as endosymbionts in plants can enhance their host's primary productivity either by producing or by capturing important resources. These endosymbionts can also enhance plant productivity by producing toxic metabolites that aid plant defenses against herbivores. Plants are dependent on their plastid or chloroplast organelles. The chloroplast is derived from a cyanobacterial primary endosymbiosis that began over one billion years ago. An oxygenic, photosynthetic, free-living cyanobacterium was engulfed and kept by a heterotrophic protist and eventually evolved into the present intracellular organelle. Mycorrhizal endosymbionts occur only among fungi. Typically, plant endosymbiosis studies focus on a single category or species to better understand their individual biological processes and functions. Fungal endophytes Fungal endophytes can be found in all plant tissues. Fungi living below the ground amidst plant roots are known as mycorrhiza, but are further categorized based on their location inside the root, with prefixes such as ecto, endo, arbuscular, ericoid, etc. Fungal endosymbionts that live in the roots and extend their extraradical hyphae into the outer rhizosphere are known as ectendosymbionts. Arbuscular Mycorrhizal Fungi (AMF) Arbuscular mycorrhizal fungi or AMF are the most diverse plant microbial endosymbionts. With exceptions such as the Ericaceae family, almost all vascular plants harbor AMF endosymbionts. AMF plant endosymbionts systematically colonize plant roots and help the plant host acquire soil nutrients such as nitrogen; in return, the fungus absorbs organic carbon products from the plant. Plant root exudates contain diverse secondary metabolites, especially flavonoids and strigolactones, which act as chemical signals and attract AMF. The AMF Gigaspora margarita lives as a plant endosymbiont and itself harbors intracytoplasmic bacterium-like organisms as further endosymbionts. AMF generally promote plant health and growth and alleviate abiotic stresses such as salinity, drought, heat, poor nutrition, and metal toxicity. Individual AMF species have different effects in different hosts – introducing the AMF of one plant to another plant can reduce the latter's growth. Endophytic fungi Endophytic fungi in mutualistic relations directly benefit their host plants and benefit from them in return. They also can help their hosts succeed in polluted environments such as those contaminated with toxic metals. Fungal endophytes are taxonomically diverse and are divided into categories based on mode of transmission, biodiversity, in planta colonization and host plant type. Clavicipitaceous fungi systematically colonize temperate season grasses. 
Non-clavicipitaceous fungi colonize higher plants, and even roots, and are divided into subcategories. Endophytic fungi of the genera Aureobasidium and Preussia isolated from Boswellia sacra produce the hormone indole acetic acid to promote plant health and development. Aphids can be found on most plants. Carnivorous ladybirds are aphid predators and are used in pest control. The plant endophytic fungus Neotyphodium lolii produces alkaloid mycotoxins in response to aphid invasions. Ladybird predators feeding on these aphids exhibited reduced fertility and abnormal reproduction, suggesting that the mycotoxins are transmitted along the food chain and affect the predators. Endophytic bacteria Endophytic bacteria belong to a diverse group of plant endosymbionts characterized by systematic colonization of plant tissues. The most common genera include Pseudomonas, Bacillus, Acinetobacter, Actinobacteria, and Sphingomonas. Some endophytic bacteria, such as the seed-borne endophyte Bacillus amyloliquefaciens, promote plant growth by producing gibberellins, which are potent plant growth hormones. Bacillus amyloliquefaciens promotes increased height in transgenic dwarf rice plants. Some endophytic bacterial genera additionally belong to the family Enterobacteriaceae. Endophytic bacteria typically colonize the leaf tissues from the plant roots, but can also enter the plant through the leaf stomata. Generally, endophytic bacteria are isolated from plant tissues by surface sterilization of the plant tissue in a sterile environment. Passenger endophytic bacteria eventually colonize the inner tissues of the plant through stochastic events, while true endophytes possess adaptive traits that allow them to live strictly in association with plants. The association between in vitro-cultivated endophytic bacteria and plants is considered a more intimate relationship that helps plants acclimatize to their conditions and promotes health and growth. Endophytic bacteria are considered to be essential plant endosymbionts because virtually all plants harbor them, and these endosymbionts play essential roles in host survival. This endosymbiotic relation is important in terms of ecology, evolution and diversity. Endophytic bacteria such as Sphingomonas sp. and Serratia sp. that are isolated from arid-land plants regulate endogenous hormone content and promote growth. Archaea endosymbionts Archaea are members of most microbiomes. While archaea are abundant in extreme environments, they are less abundant and diverse in association with eukaryotic hosts. Nevertheless, archaea are a substantial constituent of plant-associated ecosystems in the aboveground and belowground phytobiome, and play a role in the host plant's health, growth and survival amid biotic and abiotic stresses. However, few studies have investigated the role of archaea in plant health and their symbiotic relationships. Most plant endosymbiosis studies focus on fungi or bacteria using metagenomic approaches. Archaea have been characterized in crop plants such as rice and maize, as well as in aquatic plants. The abundance of archaea varies by tissue type; for example, archaea are more abundant in the rhizosphere than in the phyllosphere and endosphere. This archaeal abundance is associated with plant species type, environment and the plant's developmental stage. In a study on plant genotype-specific archaeal and bacterial endophytes, archaea accounted for 35% of the overall sequences (detected using amplicon sequencing and verified by real-time PCR). 
The archaeal sequences belonged to the phyla Thaumarchaeota, Crenarchaeota, and Euryarchaeota. Bacteria Some Betaproteobacteria have Gammaproteobacteria endosymbionts. Fungi Fungi host endohyphal bacteria; the effects of the bacteria are not well studied. Many such fungi in turn live within plants. These fungi are otherwise known as fungal endophytes. It is hypothesized that the fungi offer a safe haven for the bacteria, and that the diverse bacteria they attract create a micro-ecosystem. These interactions may impact the way that fungi interact with the environment by modulating their phenotypes. The bacteria do this by altering the fungi's gene expression. For example, Luteibacter sp. has been shown to naturally infect the ascomycetous endophyte Pestalotiopsis sp. isolated from Platycladus orientalis. The Luteibacter sp. influences the auxin and enzyme production within its host, which, in turn, may influence the effect the fungus has on its plant host. Another example of a bacterium living in symbiosis with a fungus involves the fungus Mortierella. This soil-dwelling fungus lives in close association with a toxin-producing bacterium, Mycoavidus, which helps the fungus defend itself against nematodes. Virus endosymbionts The Human Genome Project found several thousand endogenous retroviruses (endogenous viral elements in the genome that closely resemble and can be derived from retroviruses), organized into 24 families. See also Epibiont, organism living on the surface of another organism Anagenesis Endophyte Ectosymbiosis List of symbiotic organisms List of symbiotic relationships Multigenomic organism Nitroplast Protocell Fungal-bacterial endosymbiosis Cyanobiont References Symbiosis Microbial population biology Environmental microbiology Endosymbiotic events
0.766997
0.991626
0.760574
Evolutionary pressure
Evolutionary pressure, selective pressure or selection pressure is exerted by factors that reduce or increase reproductive success in a portion of a population, driving natural selection. It is a quantitative description of the amount of change occurring in processes investigated by evolutionary biology, but the formal concept is often extended to other areas of research. In population genetics, selective pressure is usually expressed as a selection coefficient. Amino acids selective pressure It has been shown that placing an amino acid biosynthesis gene such as HIS4 under amino acid selective pressure in yeast enhances the expression of adjacent genes, owing to the transcriptional co-regulation of two adjacent genes in Eukaryota. Antibiotic resistance Drug resistance in bacteria is an example of an outcome of natural selection. When a drug is used on a species of bacteria, those that cannot resist die and do not produce offspring, while those that survive potentially pass on the resistance gene to the next generation (vertical gene transmission). The resistance gene can also be passed on to one bacterium by another of a different species (horizontal gene transmission). Because of this, drug resistance increases over generations. For example, hospitals create environments where pathogens such as C. difficile have developed resistance to antibiotics. Antibiotic resistance is made worse by the misuse of antibiotics. It is encouraged when antibiotics are used to treat non-bacterial diseases, and when antibiotics are not used for the prescribed amount of time or in the prescribed dose. Antibiotic resistance may arise out of standing genetic variation in a population or de novo mutations in the population. Either pathway could lead to antibiotic resistance, which may be a form of evolutionary rescue. Nosocomial infections Clostridioides difficile, a Gram-positive bacterial species that inhabits the gut of mammals, exemplifies one type of bacterium that is a major cause of death from nosocomial infections. When symbiotic gut flora populations are disrupted (e.g., by antibiotics), one becomes more vulnerable to pathogens. Antibiotic use places an enormous selective pressure on these pathogens, favoring advantageous resistance alleles that are passed down to future generations and driving the rapid evolution of antibiotic resistance. The Red Queen hypothesis describes the evolutionary arms race between pathogenic bacteria and humans as a constant battle for evolutionary advantage, with each side striving to outcompete the other. The evolutionary arms race between the rapidly evolving virulence factors of the bacteria and the treatment practices of modern medicine requires evolutionary biologists to understand the mechanisms of resistance in these pathogenic bacteria, especially considering the growing number of infected hospitalized patients. The evolved virulence factors pose a threat to patients in hospitals, who are immunocompromised from illness or antibiotic treatment. Virulence factors are the characteristics that the evolved bacteria have developed to increase pathogenicity. Among the virulence factors that make C. difficile so difficult to control are its toxins, enterotoxin TcdA and cytotoxin TcdB. The bacterium also produces spores that are difficult to inactivate and remove from the environment. This is especially true in hospitals, where an infected patient's room may contain spores for up to 20 weeks. 
Combating the threat of the rapid spread of CDIs therefore depends on hospital sanitation practices that remove spores from the environment. A study published in the American Journal of Gastroenterology found that glove use, hand hygiene, disposable thermometers and disinfection of the environment are necessary practices in health facilities to control the spread of CDIs. The virulence of this pathogen is remarkable, and controlling CDI outbreaks may require a radical change in the sanitation approaches used in hospitals. Natural selection in humans The malaria parasite can exert a selective pressure on human populations. This pressure has led to natural selection for erythrocytes carrying the sickle cell hemoglobin gene mutation (Hb S), which causes sickle cell anaemia, in areas where malaria is a major health concern, because the condition grants some resistance to this infectious disease. Resistance to herbicides and pesticides Just as with the development of antibiotic resistance in bacteria, resistance to pesticides and herbicides has begun to appear with commonly used agricultural chemicals. For example: In the US, studies have shown that fruit flies that infest orange groves were becoming resistant to malathion, a pesticide used to kill them. In Hawaii and Japan, the diamondback moth developed a resistance to Bacillus thuringiensis, which is used in several commercial crops including Bt corn, about three years after it began to be used heavily. In England, rats in certain areas have developed such a strong resistance to rat poison that they can consume up to five times as much of it as normal rats without dying. DDT is no longer effective in controlling mosquitoes that transmit malaria in some places, a fact that contributed to a resurgence of the disease. In the southern United States, the weed Amaranthus palmeri, which interferes with production of cotton, has developed widespread resistance to the herbicide glyphosate. In the Baltic Sea, decreases in salinity have encouraged the emergence of a new species of brown seaweed, Fucus radicans. Humans exerting evolutionary pressure Human activity can lead to unintended changes in the environment. Such activity can negatively affect a given population, causing many individuals to die because they are not adapted to the new pressure. The individuals that are better adapted to the new pressure survive and reproduce at a higher rate than those that are at a disadvantage. This occurs over many generations until the population as a whole is better adapted to the pressure. This is natural selection at work, but the pressure comes from man-made activity such as building roads or hunting, as seen in the examples of cliff swallows and elk below. However, not all human activity that causes an evolutionary pressure happens unintentionally. This is demonstrated in dog domestication and the subsequent selective breeding that resulted in the various breeds known today. Rattlesnakes In areas more heavily populated and trafficked by humans, reports of rattlesnakes that do not rattle have been increasing. This phenomenon is commonly attributed to selective pressure by humans, who often kill the snakes when they are discovered. Non-rattling snakes are more likely to go unnoticed, and so survive to produce offspring that, like themselves, are less likely to rattle. Cliff swallows Populations of cliff swallows in Nebraska have displayed morphological changes in their wings after many years of living next to roads. 
Collecting data for over 30 years, researchers observed a decline in the wingspan of the living swallow population, along with a decrease in the number of cliff swallows killed by passing cars. The cliff swallows that were killed by passing cars had a larger wingspan than the population as a whole. Confounding effects such as road usage, car size, and population size were shown to have no impact on the results. Elk Evolutionary pressure imposed by humans is also seen in elk populations. These studies looked not at morphological differences but at behavioral ones. Faster and more mobile male elk were shown to be more likely to fall prey to hunters. The hunters create an environment where the more active animals are more likely to succumb to predation than less active animals. Female elk that survived past two years decreased their activity with each passing year, leaving shyer female elk that were more likely to survive. Female elk in a separate study also showed behavioral differences, with older females displaying the timid behavior that one would expect from this selection. Dog domestication Since their domestication, dogs have evolved alongside humans due to pressure from humans and the environment. This began with humans and wolves sharing the same areas, and the pressure to coexist eventually led to domestication. Evolutionary pressure from humans led to many different breeds that paralleled the needs of the time, whether it was a need for protecting livestock or assisting in the hunt. Hunting and herding were among the first reasons humans artificially selected for traits they deemed beneficial. This selective breeding does not stop there, but extends to humans selecting for certain traits deemed desirable in their domesticated dogs, such as size and color, even if they are not necessarily beneficial to the human in a tangible way. An unintended consequence of this selection is that domesticated dogs also tend to have heritable diseases that depend on their specific breed. See also Notes Evolutionary biology
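The opening of this article notes that selective pressure is usually expressed as a selection coefficient. The following Python sketch is purely illustrative (the starting frequency and coefficients are arbitrary assumptions, not values from any cited study); it iterates the standard single-locus haploid selection recursion to show how an advantageous allele, such as an antibiotic-resistance gene, spreads faster as the selection coefficient grows.

```python
# Minimal sketch: spread of an advantageous allele under a constant
# selection coefficient s (haploid model, discrete generations).

def next_frequency(p: float, s: float) -> float:
    """One generation of selection on allele frequency p, where carriers
    have relative fitness 1 + s and non-carriers have fitness 1."""
    return p * (1 + s) / (1 + p * s)

def generations_to_reach(p0: float, s: float, target: float = 0.99) -> int:
    """Count generations until the allele frequency reaches `target`."""
    p, generations = p0, 0
    while p < target:
        p = next_frequency(p, s)
        generations += 1
    return generations

if __name__ == "__main__":
    # Hypothetical numbers: a resistance allele starting at 1% frequency.
    for s in (0.01, 0.05, 0.1):
        gens = generations_to_reach(0.01, s)
        print(f"s = {s:>4}: ~{gens} generations to reach 99% frequency")
```

The recursion used here is the textbook haploid form; diploid selection with dominance uses a slightly different update, but the qualitative picture (slow spread under weak selection, rapid spread under strong selection) is the same.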
0.772658
0.984312
0.760536
Photoheterotroph
Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply. Research Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite, when fed to the worm Caenorhabditis elegans, leads to an increase in ATP synthesis upon light exposure, along with an increase in lifespan. Furthermore, inoculation experiments suggest that the mixotroph Ochromonas danica (a golden alga) and comparable eukaryotes favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machinery such as RuBisCO and PSII). Metabolism Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane. The energy stored in this proton gradient is used to drive ATP synthesis. Unlike in photoautotrophs, the electrons flow only in a cyclic pathway: electrons released from the reaction center flow through the ETS and return to the reaction center. They are not utilized to reduce any organic compounds. Purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria are examples of bacteria that carry out this scheme of photoheterotrophy. Other organisms, including halobacteria, flavobacteria, and vibrios, have purple-rhodopsin-based proton pumps that supplement their energy supply. The archaeal version is called bacteriorhodopsin, while the eubacterial version is called proteorhodopsin. The pump consists of a single protein bound to a vitamin A derivative, retinal. The pump may have accessory pigments (e.g., carotenoids) associated with the protein. When light is absorbed by the retinal molecule, the molecule isomerises. This drives the protein to change shape and pump a proton across the membrane. The hydrogen ion gradient can then be used to generate ATP, transport solutes across the membrane, or drive a flagellar motor. One particular flavobacterium cannot reduce carbon dioxide using light, but uses the energy from its rhodopsin system to fix carbon dioxide through anaplerotic fixation. The flavobacterium is still a heterotroph as it needs reduced carbon compounds to live and cannot subsist on only light and CO2. 
It cannot carry out reactions in the form of n CO2 + 2n H2D + photons → (CH2O)n + 2n D + n H2O, where H2D may be water, H2S or another compound (or compounds) providing the reducing electrons and protons; the 2D + H2O pair represents an oxidized form. However, it can fix carbon in reactions like: CO2 + pyruvate + ATP (from photons) → malate + ADP + Pi, where malate or other useful molecules are otherwise obtained by breaking down other compounds, as in: carbohydrate + O2 → malate + CO2 + energy. This method of carbon fixation is useful when reduced carbon compounds are scarce and cannot be wasted as CO2 during interconversions, but energy is plentiful in the form of sunlight. Ecology Distribution and niche partitioning Photoheterotrophs, whether 1) cyanobacteria (i.e., facultative heterotrophs in nutrient-limited environments, such as Synechococcus and Prochlorococcus), 2) aerobic anoxygenic photoheterotrophic bacteria (AAP; employing bacteriochlorophyll-based reaction centers), 3) proteorhodopsin (PR)-containing bacteria and archaea, or 4) heliobacteria (i.e., the only phototrophs with bacteriochlorophyll g pigments and a Gram-positive membrane), are found in various aquatic habitats including oceans, stratified lakes, rice fields, and environmental extremes. In the oceans' photic zones, up to 10% of bacterial cells are capable of AAP, whereas more than 50% of marine microorganisms house PR, reaching up to 90% in coastal biomes. As demonstrated in inoculation experiments, photoheterotrophy may provide these planktonic microbes with competitive advantages 1) relative to chemoheterotrophs in oligotrophic (i.e., nutrient-poor) environments via increased nutrient use-efficiency (i.e., organic carbon fuels biosynthesis rather than energy production) and 2) by eliminating investment in physiologically costly autotrophic enzymes/complexes (RuBisCO and PSII). Furthermore, in Arctic oceans, AAP and PR photoheterotrophs are prominent in ice-covered regions during wintertime despite light scarcity. Lastly, seasonal turnover has been observed in marine AAPs as ecotypes (i.e., genetically similar taxa with differing functional traits and/or environmental preferences) segregate into temporal niches. In stratified (i.e., euxinic) lakes, photoheterotrophs, alongside other anoxygenic phototrophs (e.g., purple/green sulfur bacteria fixing carbon dioxide via electron donors such as ferrous iron, sulfide, and hydrogen gas), often occupy the chemocline in the water column and/or sediments. In this zone, dissolved oxygen is reduced, light is limited to long wavelengths (e.g., red and infrared) left over by oxygenic phototrophs (e.g., cyanobacteria), and anaerobic metabolisms (i.e., those occurring in the absence of oxygen) begin introducing sulfide and bioavailable nutrients (e.g., organic carbon, phosphate, and ammonia) through upward diffusion. Heliobacteria are obligate anaerobes primarily located in rice fields, where low sulfide concentrations prevent competitive exclusion by purple/green sulfur bacteria. These waterlogged environments may facilitate symbiotic relationships between heliobacteria and rice plants, as fixed nitrogen from the former is exchanged for carbon-rich root exudates. Observational studies have characterized photoheterotrophs (e.g., green non-sulfur bacteria such as Chloroflexi, and AAPs) within photosynthetic mats at environmental extremes (e.g., hot springs and hypersaline lagoons). 
Notably, temperature and pH drive anoxygenic phototroph community composition in Yellowstone National Park's geothermal features. In addition, various light-dependent niches in the Great Salt Lake's hypersaline mats support phototrophic diversity as microbes optimize energy production and combat osmotic stress. Biogeochemical cycling Photoheterotrophs influence global carbon cycling by assimilating dissolved organic carbon (DOC). Because they harvest light energy, carbon is maintained in the microbial loop without the corresponding respiration (i.e., release of carbon dioxide to the atmosphere as DOC is oxidized to fuel energy production). This disconnect, the discovery of facultative photoheterotrophs (e.g., AAPs with flexible energy sources), and previous measurements taken in the dark (i.e., to avoid skewed oxygen consumption values due to photooxidation, UV light, and oxygenic photosynthesis) have led to overestimated aquatic emissions. For example, a 15.2% decrease in community respiration observed in Cep Lake, Czechia, alongside preferential glucose and pyruvate uptake, is attributed to facultative photoheterotrophs preferring light energy during the daytime, given the fitness benefits mentioned previously. See also Primary nutritional groups References Sources Biology terminology Trophic ecology Microbial growth and nutrition
0.773719
0.982925
0.760508
Natural resource
Natural resources are resources that are drawn from nature and used with few modifications. This includes the sources of valued characteristics such as commercial and industrial use, aesthetic value, scientific interest, and cultural value. On Earth, it includes sunlight, atmosphere, water, land, all minerals along with all vegetation, and wildlife. Natural resources are part of humanity's natural heritage or protected in nature reserves. Particular areas (such as the rainforest in Fatu-Hiva) often feature biodiversity and geodiversity in their ecosystems. Natural resources may be classified in different ways. Natural resources are materials and components (something that can be used) found within the environment. Every man-made product is composed of natural resources (at its fundamental level). A natural resource may exist as a separate entity such as freshwater, air, or any living organism such as a fish, or it may be transformed by extractivist industries into an economically useful form that must be processed to obtain the resource such as metal ores, rare-earth elements, petroleum, timber and most forms of energy. Some resources are renewable, which means that they can be used at a certain rate and natural processes will restore them. In contrast, many extractive industries rely heavily on non-renewable resources that can only be extracted once. Natural resource allocations can be at the centre of many economic and political confrontations both within and between countries. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources). Resource extraction is also a major source of human rights violations and environmental damage. The Sustainable Development Goals and other international development agendas frequently focus on creating more sustainable resource extraction, with some scholars and researchers focused on creating economic models, such as circular economy, that rely less on resource extraction, and more on reuse, recycling and renewable resources that can be sustainably managed. Classification There are various criteria for classifying natural resources. These include the source of origin, stages of development, renewability and ownership. Origin Biotic: Resources that originate from the biosphere and have life such as flora and fauna, fisheries, livestock, etc. Fossil fuels such as coal and petroleum are also included in this category because they are formed from decayed organic matter. Abiotic: Resources that originate from non-living and inorganic material. These include land, fresh water, air, rare-earth elements, and heavy metals including ores, such as gold, iron, copper, silver, etc. Stage of development Potential resources: Resources that are known to exist, but have not been utilized yet. These may be used in the future. For example, petroleum in sedimentary rocks that, until extracted and put to use, remains a potential resource. Actual resources: Resources that have been surveyed, quantified and qualified, and are currently used in development. These are typically dependent on technology and the level of their feasibility, wood processing for example. Reserves: The part of an actual resource that can be developed profitably in the future. Stocks: Resources that have been surveyed, but cannot be used due to lack of technology, hydrogen vehicles for example. Renewability/exhaustibility Renewable resources: These resources can be replenished naturally. 
Some of these resources, like solar energy, air, wind, water, etc., are continuously available and their quantities are not noticeably affected by human consumption. Though many renewable resources do not have such a rapid recovery rate, these resources are susceptible to depletion by over-use. Resources are classified as renewable from a human-use perspective so long as the rate of replenishment/recovery exceeds the rate of consumption. They replenish easily compared to non-renewable resources. Non-renewable resources: These resources are formed over a long geological time period in the environment and cannot be renewed easily. Minerals are the most common resource included in this category. From the human perspective, resources are non-renewable when their rate of consumption exceeds the rate of replenishment/recovery; a good example is fossil fuels, which fall into this category because their rate of formation is extremely slow (potentially millions of years). Some resources naturally deplete in amount without human interference, the most notable of these being radioactive elements such as uranium, which naturally decay into heavy metals. Of these, the metallic minerals can be re-used by recycling them, but coal and petroleum cannot be recycled. Ownership Individual resources: Resources owned privately by individuals. These include plots, houses, plantations, pastures, ponds, etc. Community resources: Resources which are accessible to all the members of a community. E.g.: Cemeteries National resources: Resources that belong to the nation. The nation has legal powers to acquire them for public welfare. These also include minerals, forests and wildlife within the political boundaries and the exclusive economic zone. International resources: These resources are regulated by international organizations. E.g.: International waters. Extraction Resource extraction involves any activity that withdraws resources from nature. This can range in scale from the traditional use of preindustrial societies to global industry. Extractive industries are, along with agriculture, the basis of the primary sector of the economy. Extraction produces raw material, which is then processed to add value. Examples of extractive industries are hunting, trapping, mining, oil and gas drilling, and forestry. Natural resources can add substantial amounts to a country's wealth; however, a sudden inflow of money caused by a resource boom can create social problems including inflation harming other industries ("Dutch disease") and corruption, leading to inequality and underdevelopment; this is known as the "resource curse". Extractive industries represent a large and growing activity in many less-developed countries, but the wealth generated does not always lead to sustainable and inclusive growth. People often accuse extractive industry businesses of acting only to maximize short-term value, implying that less-developed countries are vulnerable to powerful corporations. Alternatively, host governments are often assumed to be only maximizing immediate revenue. Researchers argue there are areas of common interest where development goals and business cross. These present opportunities for international governmental agencies to engage with the private sector and host governments through revenue management and expenditure accountability, infrastructure development, employment creation, skills and enterprise development, and impacts on children, especially girls and women. 
A strong civil society can play an important role in ensuring the effective management of natural resources. Norway can serve as a role model in this regard, as it has good institutions and an open and dynamic public debate with strong civil society actors that provide an effective system of checks and balances for the government's management of extractive industries. The Extractive Industries Transparency Initiative (EITI), a global standard for the good governance of oil, gas and mineral resources, seeks to address the key governance issues in the extractive sectors. However, in countries that do not have such a strong and unified society, where dissidents are not as content with the government as in Norway's case, natural resources can actually be a factor in whether a civil war starts and how long the war lasts. Depletion In recent years, the depletion of natural resources has become a major focus of governments and organizations such as the United Nations (UN). This is evident in the UN's Agenda 21 Section Two, which outlines the necessary steps for countries to take to sustain their natural resources. The depletion of natural resources is considered a sustainable development issue. The term sustainable development has many interpretations, most notably the Brundtland Commission's 'to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs'; however, in broad terms it is balancing the needs of the planet's people and species now and in the future. In regards to natural resources, depletion is of concern for sustainable development as it has the ability to degrade current environments and the potential to impact the needs of future generations. Depletion of natural resources is associated with social inequity. Considering that most biodiversity is located in developing countries, depletion of this resource could result in losses of ecosystem services for these countries. Some view this depletion as a major source of social unrest and conflicts in developing nations. At present, there is a particular concern for rainforest regions that hold most of the Earth's biodiversity. According to Nelson, deforestation and degradation affect 8.5% of the world's forests, with 30% of the Earth's surface already cropped. If we consider that 80% of people rely on medicines obtained from plants and of the world's prescription medicines have ingredients taken from plants, the loss of the world's rainforests could result in the loss of potential life-saving medicines. The depletion of natural resources is caused by 'direct drivers of change' such as mining, petroleum extraction, fishing, and forestry as well as 'indirect drivers of change' such as demography (e.g. population growth), economy, society, politics, and technology. The current practice of agriculture is another factor causing depletion of natural resources; examples include the depletion of soil nutrients due to excessive use of nitrogen, and desertification. The depletion of natural resources is a continuing concern for society. This concern was voiced by Theodore Roosevelt, a well-known conservationist and former United States president, who was opposed to unregulated natural resource extraction. Protection In 1982, the United Nations developed the World Charter for Nature, which recognized the need to protect nature from further depletion due to human activity. 
It states that measures must be taken at all societal levels, from international to individual, to protect nature. It outlines the need for sustainable use of natural resources and suggests that the protection of resources should be incorporated into national and international systems of law. Further underscoring the importance of protecting natural resources, the World Ethic of Sustainability, developed by the IUCN, WWF and the UNEP in 1990, set out eight values for sustainability, including the need to protect natural resources from depletion. Since the development of these documents, many measures have been taken to protect natural resources, including the establishment of the scientific field of conservation biology and the practice of habitat conservation. Conservation biology is the scientific study of the nature and status of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction. It is an interdisciplinary subject drawing on science, economics and the practice of natural resource management. The term conservation biology was introduced as the title of a conference held at the University of California, San Diego, in La Jolla, California, in 1978, organized by biologists Bruce A. Wilcox and Michael E. Soulé. Habitat conservation is a type of land management that seeks to conserve, protect and restore habitat areas for wild plants and animals, especially conservation-reliant species, and prevent their extinction, fragmentation or reduction in range. Management Natural resource management is a discipline in the management of natural resources such as land, water, soil, plants, and animals, with a particular focus on how management affects the quality of life for present and future generations. Hence, sustainable development is pursued through the judicious use of resources to supply both present and future generations. The disciplines of fisheries, forestry, and wildlife are examples of large subdisciplines of natural resource management. Management of natural resources involves identifying who has the right to use the resources and who does not, in order to define the management boundaries of the resource. The resources may be managed by the users, according to rules governing when and how the resource is used depending on local conditions, or by a governmental organization or other central authority. Successful management of natural resources "depends on freedom of speech, a dynamic and wide-ranging public debate through multiple independent media channels and an active civil society engaged in natural resource issues". Because of the shared nature of the resources, the individuals who are affected by the rules can participate in setting or changing them. The users have the right to devise their own management institutions and plans, with recognition by the government. The right to resources includes land, water, fisheries, and pastoral rights. The users, or parties accountable to the users, have to actively monitor the utilisation of the resource, ensure compliance with the rules, and impose penalties on those who violate them. Conflicts are resolved quickly and efficiently by the local institution according to the seriousness and context of the offense. A global science-based platform for discussing natural resource management is the World Resources Forum, based in Switzerland. 
See also Asteroid mining Citizen's dividend Conservation (ethic) Cultural resources Environmental movement Land (economics) Lunar resources Mining Nature-based solutions Resource nationalism Sustainable development United Nations Framework Classification for Resources United Nations Resource Management System References External links Natural resource, britannica.com Natural resources, encyclopedia.com Environmental social science concepts Supply chain management
0.761347
0.998893
0.760504
Systems design
The basic study of systems design is the understanding of component parts and their interactions with one another. Systems design has appeared in a variety of fields, including sustainability, computer/software architecture, and sociology. Product Development If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data, for an electronic control system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. Physical design The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed. In physical design, the following requirements of the system are decided: input requirements, output requirements, storage requirements, processing requirements, and system control and backup or recovery. Put another way, the physical portion of system design can generally be broken down into three sub-tasks: User Interface Design Data Design Process Design Web System design Online services such as Google, Twitter, Facebook, Amazon and Netflix are used by millions of users worldwide. A scalable, highly available system must be designed to accommodate an increasing number of users. Here are the things to consider in designing the system: Functional and non-functional requirements Capacity estimation Database to use, Relational or NoSQL Vertical scaling, Horizontal scaling, Sharding Load Balancing Primary-secondary Replication Cache and CDN Stateless and Stateful servers Datacenter georouting Message Queue, Publish-Subscribe Architecture Performance Metrics Monitoring and Logging Build, test, configure, deploy automation Finding single points of failure API Rate Limiting Service Level Agreement See also Arcadia (engineering) Architectural pattern (computer science) Configuration design Electronic design automation (EDA) Electronic system-level (ESL) Embedded system Graphical system design Hypersystems Modular design Morphological analysis (problem-solving) Systems analysis and design SCSD (School Construction Systems Development) project System information modelling System development life cycle (SDLC) System engineering System thinking TRIZ References Further reading External links Interactive System Design. Course by Chris Johnson, 1993 Course by Prof. Birgit Weller, 2020 Computer systems Electronic design automation Software design
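One item on the web system design checklist above is API rate limiting. The following Python class is a minimal, hedged sketch (the request rate and burst capacity are arbitrary assumptions, not taken from any particular service); it implements a token-bucket limiter, a common way to cap the average request rate while still tolerating short bursts.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows `rate` requests per second on
    average, with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity      # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

if __name__ == "__main__":
    # Hypothetical limit: 5 requests/second on average, bursts up to 10.
    limiter = TokenBucket(rate=5, capacity=10)
    allowed = sum(limiter.allow() for _ in range(20))
    print(f"{allowed} of 20 back-to-back requests allowed")  # roughly the burst of 10
```

In a horizontally scaled deployment, the token count would typically live in a shared store such as Redis rather than in process memory, so that every application server enforces the same limit.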
0.768367
0.989763
0.760501
Water aeration
Water aeration is the process of increasing or maintaining the oxygen saturation of water in both natural and artificial environments. Aeration techniques are commonly used in pond, lake, and reservoir management to address low oxygen levels or algal blooms. Water quality Water aeration is often required in water bodies that suffer from hypoxic or anoxic conditions, often caused by upstream human activities such as sewage discharges, agricultural run-off, or over-baiting a fishing lake. Aeration can be achieved through the infusion of air into the bottom of the lake, lagoon or pond or by surface agitation from a fountain or spray-like device to allow for oxygen exchange at the surface and the release of gases such as carbon dioxide, methane or hydrogen sulfide. Decreased levels of dissolved oxygen (DO) are a major contributor to poor water quality. Not only do fish and most other aquatic animals need oxygen, but aerobic bacteria also need it to decompose organic matter. When oxygen concentrations become low, anoxic conditions may develop, which can decrease the ability of the water body to support life. Aeration methods Any procedure by which oxygen is added to water can be considered a type of water aeration. There are many ways to aerate water, but these all fall into two broad areas – surface aeration and subsurface aeration. A variety of techniques and technologies are available for both approaches. Natural aeration Natural aeration is a type of both sub-surface and surface aeration. It can occur through sub-surface aquatic plants. Through the natural process of photosynthesis, water plants release oxygen into the water, providing it with the oxygen necessary for fish to live and for aerobic bacteria to break down excess nutrients. Oxygen can be driven into the water when the wind disturbs the surface of the water body, and natural aeration can also occur through the movement of water caused by an incoming stream, a waterfall, or even a strong flood. In large water bodies in temperate climates, autumn turn-over can introduce oxygen-rich water into the oxygen-poor hypolimnion. Surface aeration Low speed surface aerator The low speed surface aerator is a high-efficiency device for biological aeration. These devices are often made of steel protected by an epoxy coating and generate high torque. They provide excellent mixing of the water volume. Common power ratings range from 1 up to 250 kW per unit, with an oxygen transfer efficiency (SOE) of around 2 kg O2 per kWh. Low speed surface aerators are used mostly for aeration of the biological stage in water purification plants. The larger the diameter, the higher the SOE and the better the mixing. Fountains A fountain consists of a means of propelling water upwards into the air. Typically this is done using a motor that powers a rotating impeller. The impeller pumps water from the first few feet of the water body and expels it into the air. This process utilizes air-water contact to transfer oxygen. As the water is propelled into the air, it breaks into small droplets. Collectively, these small droplets have a large surface area through which oxygen can be transferred. Upon return, these droplets mix with the rest of the water and thus transfer their oxygen back to the ecosystem. Fountains are a popular method of providing surface aeration because of the aesthetic appearance that they offer. However, most fountains are unable to produce a large area of oxygenated water. Also, running electrical power through the water to the fountain can be a safety hazard. 
Floating surface aerators Floating surface aerators work in a similar manner to fountains, but they do not offer the same aesthetic appearance. They extract water from the top 1–2 feet of the water body and utilize air-water contact to transfer oxygen. Instead of propelling water into the air, they disrupt the water at the water surface. Floating surface aerators are also powered by on-shore electricity. The effectiveness of a surface aerator is limited to a small area as they are unable to add circulation or oxygen to much more than a 3-metre radius. This circulation and oxygenating is then limited to the uppermost portion of the water column, often leaving the bottom portions unaffected. Low speed surface aerators can also be installed on floats. Paddlewheel aerators Paddlewheel aerators also utilize air-to-water contact to transfer oxygen from the air in the atmosphere to the water body. They are most often used in the aquaculture (rearing aquatic animals or cultivating aquatic plants for food) field. Constructed of a hub with attached paddles, these aerators are usually powered by a tractor power take-off (PTO), a gas engine, or an electric motor. They tend to be mounted on floats. Electricity forces the paddles to turn, churning the water and allowing oxygen transfer through air-water contact. As each new section of water is churned, it absorbs oxygen from the air and then, upon its return to the water, restores it to the water. In this regard paddlewheel aeration works very similarly to floating surface aerators. Subsurface aeration Subsurface aeration seeks to release bubbles at the bottom of the water body and allow them to rise by the force of buoyancy. Diffused aeration systems utilize bubbles to aerate as well as mix the water. Water displacement from the expulsion of bubbles will cause a mixing action to occur, and the contact between the water and the bubble will result in an oxygen transfer. Jet aeration Subsurface aeration can be accomplished by the use of jet aerators, which aspirate air, by means of the Venturi principle, and inject the air into the liquid. Coarse bubble aeration Coarse bubble aeration is a type of subsurface aeration wherein air is pumped from an on-shore air compressor. through a hose to a unit placed at the bottom of the water body. The unit expels coarse bubbles (more than 2mm in diameter), which release oxygen when they come into contact with the water, which also contributes to a mixing of the lake's stratified layers. With the release of large bubbles from the system, a turbulent displacement of water occurs which results in a mixing of the water. In comparison to other aeration techniques, coarse bubble aeration is very inefficient in the way of transferring oxygen. This is due to the large diameter and relatively small collective surface area of its bubbles. Fine bubble aeration Fine bubble aeration is an efficient way to transfer oxygen to a water body. A compressor on shore pumps air through a hose, which is connected to an underwater aeration unit. Attached to the unit are a number of diffusers. These diffusers come in the shape of discs, plates, tubes or hoses constructed from glass-bonded silica, porous ceramic plastic, PVC or perforated membranes made from EPDM (ethylene propylene diene Monomer) rubber. Air pumped through the diffuser membranes is released into the water. These bubbles are known as fine bubbles. The EPA defines a fine bubble as anything smaller than 2mm in diameter. 
This type of aeration has a very high oxygen transfer efficiency (OTE), sometimes as high as 15 pounds of oxygen / (horsepower * hour) (9.1 kilograms of oxygen / (kilowatt * hour)). On average, diffused air aeration diffuses approximately 2–4 cfm (cubic feet of air per minute) (56.6–113.3 liters of air per minute), but some systems operate at levels as low as 1 cfm (28.3 L/min) or as high as 10 cfm (283 L/min). Fine bubble diffused aeration is able to maximize the surface area of the bubbles and thus transfer more oxygen to the water per bubble volume. Additionally, smaller bubbles take more time to reach the surface, so not only is the surface area maximized but so is the time each bubble spends in the water, allowing it more opportunity to transfer oxygen to the water. As a general rule, smaller bubbles and a deeper release point will generate a greater oxygen transfer rate. One of the drawbacks to fine bubble aeration is that the membranes of ceramic diffusers can sometimes clog and must be cleaned in order to keep them working at their optimum efficiency. Also, they do not possess the ability to mix the water column as well as other aeration techniques, such as coarse bubble aeration. Lake destratification (See also Lake de-stratification) Circulators are commonly used to mix a pond or lake and thus reduce thermal stratification. Once circulated water reaches the surface, the air-water interface facilitates the transfer of oxygen to the lake water. Natural resource and environmental managers have long been challenged by problems caused by thermal stratification of lakes. Fish die-offs have been directly associated with thermal gradients, stagnation, and ice cover. Excessive growth of plankton may limit the recreational use of lakes and the commercial use of lake water. With severe thermal stratification in a lake, the quality of drinking water also can be adversely affected. For fisheries managers, the spatial distribution of fish within a lake is often adversely affected by thermal stratification, which in some cases may indirectly cause large die-offs of recreationally important fish. One commonly used tool to reduce the severity of these lake management problems is to eliminate or lessen thermal stratification through aeration. Many types of aeration equipment have been used to reduce or eliminate thermal stratification. Aeration has met with some success, although it has rarely proved to be a panacea. Large-scale projects Thames oxygenation barges During heavy rain, London's sewage storm pipes overflow into the River Thames, sending dissolved oxygen levels plummeting and threatening the species the river supports. Two dedicated McTay Marine vessels, the oxygenation barges Thames Bubbler and Thames Vitality, are used to replenish oxygen levels as part of an ongoing battle to clean up the river, which now supports 115 species of fish and hundreds more invertebrates, plants and birds. Cardiff Bay The dissolved oxygen concentration within Cardiff Bay is maintained at or above 5 mg/L. Compressed air is pumped from five sites around the Bay through a series of steel-reinforced rubber pipelines laid on the beds of the Bay and the Rivers Taff and Ely. These are connected to approximately 800 diffusers. At times this is insufficient, and the Harbour Authority uses a mobile oxygenation barge, built by McTay Marine, with liquid oxygen stored in a tank. Liquid oxygen is passed through an electrically heated vapouriser and the gas is injected into a stream of water which is pumped from, and returned to, the bay. 
The barge is capable of dissolving up to 5 tonnes of oxygen in 24 hours. Chesapeake Bay Similar options have been proposed to help rehabilitate the Chesapeake Bay, where the principal problem is a lack of filter-feeding organisms, such as oysters, responsible for keeping the water clean. Historically the Bay's oyster population was in the tens of billions, and they circulated the entire Bay volume in a matter of days. Due to pollution, disease and over-harvesting, their population is a fraction of historic levels. Water that was once clear for meters is now so turbid and sediment-ridden that a wader may lose sight of their feet before their knees are wet. Oxygen is normally supplied by submerged aquatic vegetation via photosynthesis, but pollution and sediments have reduced the plant populations, resulting in a reduction of dissolved oxygen levels and rendering areas of the bay unsuitable for aerobic aquatic life. In a symbiotic relationship, the plants provide the oxygen needed for underwater organisms to proliferate, and in exchange the filter feeders keep the water clean and thus clear enough for plants to have sufficient access to sunlight. Researchers have proposed oxygenation through artificial means as a solution to help improve water quality. Aeration of hypoxic water bodies seems an appealing solution, and it has been tried successfully many times on freshwater ponds and small lakes. However, no one has undertaken an aeration project as large as an estuary. A 353-hectare portion of the bay connected to Rock Creek has been aerated using pipes since 2016. The system started as a large-bubble system intended mainly for de-stratification, creating a 74-ha oxic zone. It was upgraded in 2019 to fine-bubble injectors to provide more oxygen directly. Water treatment aeration Many water treatment processes use a variety of forms of aeration to support biological oxidative processes. A typical example is the activated sludge process, which can use fine or coarse bubble aeration, or mechanical aeration cones that draw up mixed liquor from the base of a treatment tank and eject it through the air, where oxygen is entrained in the liquor. See also Aerated lagoon Lake ecosystem Limnology References Aquatic ecology
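The oxygen transfer and airflow figures quoted in the fine bubble aeration section above mix imperial and metric units. The following short Python check is illustrative only (it uses standard conversion factors and is not part of any cited source); it reproduces the quoted equivalences.

```python
# Unit-conversion check for the aeration figures quoted above.
LB_TO_KG = 0.453592      # kilograms per pound
HP_TO_KW = 0.745700      # kilowatts per horsepower
CFM_TO_LPM = 28.3168     # litres per minute per cubic foot per minute

# 15 lb O2 / (hp*h) expressed in kg O2 / (kW*h):
ote_kg_per_kwh = 15 * LB_TO_KG / HP_TO_KW
print(f"15 lb O2/(hp*h) = {ote_kg_per_kwh:.1f} kg O2/(kW*h)")   # ~9.1, matching the text

# Typical diffused-air flow rates expressed in litres per minute:
for cfm in (1, 2, 4, 10):
    print(f"{cfm:>2} cfm = {cfm * CFM_TO_LPM:.1f} L/min")        # 28.3, 56.6, 113.3, 283.2
```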
0.762676
0.997143
0.760497
Global catastrophe scenarios
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic (caused by humans), such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history. Anthropogenic Experts at the Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge prioritize anthropogenic over natural risks due to their much greater estimated likelihood. They are especially concerned by, and consequently focus on, risks posed by advanced technology, such as artificial intelligence and biotechnology. Artificial intelligence The creators of a superintelligent entity could inadvertently give it goals that lead it to annihilate the human race. It has been suggested that if AI systems rapidly become super-intelligent, they may take unforeseen actions or out-compete humanity. According to philosopher Nick Bostrom, it is possible that the first super-intelligence to emerge would be able to bring about almost any possible outcome it valued, as well as to foil virtually any attempt to prevent it from achieving its objectives. Thus, even a super-intelligence indifferent to humanity could be dangerous if it perceived humans as an obstacle to unrelated goals. In Bostrom's book Superintelligence, he defines this as the control problem. Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have echoed these concerns, with Hawking theorizing that such an AI could "spell the end of the human race". In 2009, the Association for the Advancement of Artificial Intelligence (AAAI) hosted a conference to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness, as depicted in science-fiction, is probably unlikely, but there are other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns. A survey of AI experts estimated that the chance of human-level machine learning having an "extremely bad (e.g., human extinction)" long-term effect on humanity is 5%. A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by super-intelligence by 2100. Eliezer Yudkowsky believes risks from artificial intelligence are harder to predict than any other known risks due to bias from anthropomorphism. Since people base their judgments of artificial intelligence on their own experience, he claims they underestimate the potential power of AI. Biotechnology Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants, or animals). 
In many cases the organism will be a pathogen of humans, livestock, crops, or other organisms we depend upon (e.g. pollinators or gut bacteria). However, any organism able to catastrophically disrupt ecosystem functions, e.g. highly competitive weeds, outcompeting essential crops, poses a biotechnology risk. A biotechnology catastrophe may be caused by accidentally releasing a genetically engineered organism from controlled environments, by the planned release of such an organism which then turns out to have unforeseen and catastrophic interactions with essential natural or agro-ecosystems, or by intentional usage of biological agents in biological warfare or bioterrorism attacks. Pathogens may be intentionally or unintentionally genetically modified to change virulence and other characteristics. For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents. The modified virus became highly lethal even in vaccinated and naturally resistant mice. The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated. Biological weapons, whether used in war or terrorism, could result in human extinction. Terrorist applications of biotechnology have historically been infrequent. To what extent this is due to a lack of capabilities or motivation is not resolved. However, given current development, more risk from novel, engineered pathogens is to be expected in the future. Exponential growth has been observed in the biotechnology sector, and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users). In 2008, a survey by the Future of Humanity Institute estimated a 2% probability of extinction from engineered pandemics by 2100. Noun and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: Regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and developing facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines). Chemical weapons By contrast with nuclear and biological weapons, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one. Choice to have fewer children Population decline through a preference for fewer children. If developing world demographics are assumed to become developed world demographics, and if the latter are extrapolated, some projections suggest an extinction before the year 3000. John A. Leslie estimates that if the reproduction rate drops to the German or Japanese level the extinction date will be 2400. However, some models suggest the demographic transition may reverse itself due to evolutionary biology. Climate change Human-caused climate change has been driven by technology since the 19th century or earlier. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. 
Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms. A common belief is that the current climate crisis could spiral into human extinction. In November 2017, a statement by 15,364 scientists from 184 countries indicated that increasing levels of greenhouse gases from use of fossil fuels, human population growth, deforestation, and overuse of land for agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades. An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for nine million deaths worldwide in 2015, particularly from air pollution which was linked to deaths by increasing susceptibility to non-infectious diseases, such as heart disease, stroke, and lung cancer. The report warned that the pollution crisis was exceeding "the envelope on the amount of pollution the Earth can carry" and "threatens the continuing survival of human societies". Carl Sagan and others have raised the prospect of extreme runaway global warming turning Earth into an uninhabitable Venus-like planet. Some scholars argue that much of the world would become uninhabitable under severe global warming, but even these scholars do not tend to argue that it would lead to complete human extinction, according to Kelsey Piper of Vox. All the IPCC scenarios, including the most pessimistic ones, predict temperatures compatible with human survival. The question of human extinction under "unlikely" outlier models is not generally addressed by the scientific literature. Factcheck.org judges that climate change fails to pose an established "existential risk", stating: "Scientists agree climate change does pose a threat to humans and ecosystems, but they do not envision that climate change will obliterate all people from the planet." Cyberattack Cyberattacks have the potential to destroy everything from personal data to electric grids. Christine Peterson, co-founder and past president of the Foresight Institute, believes a cyberattack on electric grids has the potential to be a catastrophic risk. She notes that little has been done to mitigate such risks, and that mitigation could take several decades of readjustment. Environmental disaster An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development, and non-sustainable agriculture. Most environmental scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately half the Earth's population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. Detected in the early 21st century, a threat in this direction is colony collapse disorder, a phenomenon that might foreshadow the imminent extinction of the Western honeybee. As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain. 
A May 2020 analysis published in Scientific Reports found that if deforestation and resource consumption continue at current rates they could culminate in a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" within the next several decades. The study says humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest." The authors also note that "while violent events, such as global war or natural catastrophic events, are of immediate concern to everyone, a relatively slow consumption of the planetary resources may be not perceived as strongly as a mortal danger for the human civilization." Evolution Some scenarios envision that humans could use genetic engineering or technological modifications to split into normal humans and a new species – posthumans. Such a species could be fundamentally different from any previous life form on Earth, e.g. by merging humans with technological systems. Such scenarios assess the risk that the "old" human species will be outcompeted and driven to extinction by the new, posthuman entity. Experimental accident Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System. Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. All of these worries have so far proven unfounded. For example, scientists worried that the first nuclear test might ignite the atmosphere. Early in the development of thermonuclear weapons there were some concerns that a fusion reaction could "ignite" the atmosphere in a chain reaction that would engulf Earth. Calculations showed the energy would dissipate far too quickly to sustain a reaction. Others worried that the RHIC or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. It has been pointed out that much more energetic collisions take place currently in Earth's atmosphere. Though these particular concerns have been challenged, the general concern about new experiments remains. Mineral resource exhaustion Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself. 
Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]." Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of allocating Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is no way—or only little way—of knowing in advance if or when mankind will ultimately face extinction. In effect, any conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point. Nanotechnology Many nanoscale technologies are in development or currently in use. The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision. Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions. When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software. Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons. Being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilities. Chris Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories: From augmenting the development of other technologies such as AI and biotechnology. By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used. From uncontrolled self-perpetuating processes with destructive effects. Several researchers say the bulk of risk from nanotechnology comes from the potential to lead to war, arms races, and destructive global government. Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races): A large number of players may be tempted to enter the race since the threshold for doing so is low; The ability to make weapons with molecular manufacturing will be cheap and easy to hide; Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes; Molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor; Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield. Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation. International infrastructure may be expanded giving more sovereignty to the international level. This could help coordinate efforts for arms control. 
International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or general arms control may also be designed. One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms control. Gray goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere (ecophagy), using it as a source of energy and building blocks. Nowadays, however, nanotech experts—including Drexler—discredit the scenario. According to Phoenix, a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident". Nuclear war Some fear a hypothetical World War III could cause the annihilation of humankind. Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term, and long-term effects on the climate, potentially causing cold weather known as a "nuclear winter", with reduced sunlight and photosynthesis that may generate significant upheaval in advanced civilizations. However, while popular perception sometimes takes nuclear war as "the end of the world", experts assign low probability to human extinction from nuclear war. In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia, and perhaps several hundred million more through follow-up consequences in those same areas. In 2008, a survey by the Future of Humanity Institute estimated a 4% probability of extinction from warfare by 2100, with a 1% chance of extinction from nuclear warfare. The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Mistakenly launching a nuclear attack in response to a false alarm is one possible scenario; this nearly happened during the 1983 Soviet nuclear false alarm incident. Although the probability of a nuclear war per year is slim, Professor Martin Hellman has described it as inevitable in the long run; unless the probability approaches zero, there will inevitably come a day when civilization's luck runs out (a brief numerical sketch of this argument appears at the end of this article). During the Cuban Missile Crisis, U.S. president John F. Kennedy estimated the odds of nuclear war at "somewhere between one out of three and even". The United States and Russia have a combined arsenal of 14,700 nuclear weapons, and there is an estimated total of 15,700 nuclear weapons in existence worldwide. World population and agricultural crisis The Global Footprint Network estimates that current activity uses resources twice as fast as they can be naturally replenished, and that growing human population and increased consumption pose the risk of resource depletion and a concomitant population crash. Evidence suggests birth rates may be rising in the 21st century in the developed world. Projections vary; researcher Hans Rosling has projected population growth to start to plateau around 11 billion, and then to slowly grow or possibly even shrink thereafter. 
A 2014 study published in Science asserts that the human population will grow to around 11 billion by 2100 and that growth will continue into the next century. The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity such as the Green Revolution. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution in agriculture helped food production to keep pace with worldwide population growth or actually enabled population growth. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place in their 1994 study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds, says the study. The authors of this study believe the mentioned agricultural crisis will begin to have an effect on the world after 2020 and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before. Since supplies of petroleum and natural gas are essential to modern agriculture techniques, a fall in global oil supplies (see peak oil for global concerns) could cause spiking food prices and unprecedented famine in the coming decades. Wheat is humanity's third-most-produced cereal. Extant fungal infections such as Ug99 (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible and the infection spreads on the wind. Should the world's large grain-producing areas become infected, the ensuing crisis in wheat availability would lead to price spikes and shortages in other food products. Human activity has triggered an extinction event often referred to as the sixth "mass extinction", which scientists consider a major threat to the continued existence of human civilization. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, asserts that roughly one million species of plants and animals face extinction from human impacts such as expanding land use for industrial agriculture and livestock rearing, along with overfishing. A 1997 assessment states that over a third of Earth's land has been modified by humans, that atmospheric carbon dioxide has increased around 30 percent, that humans are the dominant source of nitrogen fixation, that humans control most of the Earth's accessible surface fresh water, and that species extinction rates may be over a hundred times faster than normal. Ecological destruction which impacts food production could produce a human population crash. Non-anthropogenic Of all species that have ever lived, 99% have gone extinct. Earth has experienced numerous mass extinction events, in which up to 96% of all species present at the time were eliminated. 
A notable example is the K-T extinction event, which killed the dinosaurs. The types of threats posed by nature have been argued to be relatively constant, though this has been disputed. A number of other astronomical threats have also been identified. Asteroid impact An impact event involving a near-Earth object (NEOs) could result in localized or widespread destruction, including widespread extinction and possibly human extinction. Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, was about ten kilometers (six miles) in diameter and is theorized to have caused the extinction of non-avian dinosaurs at the end of the Cretaceous. No sufficiently large asteroid currently exists in an Earth-crossing orbit; however, a comet of sufficient size to cause human extinction could impact the Earth, though the annual probability may be less than 10−8. Geoscientist Brian Toon estimates that while a few people, such as "some fishermen in Costa Rica", could plausibly survive a ten-kilometer (six-mile) meteorite, a hundred-kilometer (sixty-mile) meteorite would be large enough to "incinerate everybody". Asteroids with around a 1 km diameter have impacted the Earth on average once every 500,000 years; these are probably too small to pose an extinction risk, but might kill billions of people. Larger asteroids are less common. Small near-Earth asteroids are regularly observed and can impact anywhere on the Earth injuring local populations. As of 2013, Spaceguard estimates it has identified 95% of all NEOs over 1 km in size. None of the large "dinosaur-killer" asteroids known to Spaceguard pose a near-term threat of collision with Earth. In April 2018, the B612 Foundation reported "It's a 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. Planetary or interstellar collision In April 2008, it was announced that two simulations of long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether). Collision with or a near miss by a large object from outside the Solar System could also be catastrophic to life on Earth. Interstellar objects, including asteroids, comets, and rogue planets, are difficult to detect with current technology until they enter the Solar System, and could potentially do so at high speed. 
If Mercury or a rogue planet of similar size were to collide with Earth, all life on Earth could be obliterated entirely: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 4,879 km in diameter. The destabilization of Mercury's orbit is unlikely in the foreseeable future. A close pass by a large object could exert massive tidal forces, triggering anything from minor earthquakes to liquefaction of the Earth's crust to the Earth being torn apart and becoming a disrupted planet. Stars and black holes are easier to detect at greater distances, but are much more difficult to deflect. Passage of such an object through the Solar System could result in the Earth, or even the Sun, being directly consumed and destroyed. Astronomers expect the collision of the Milky Way Galaxy with the Andromeda Galaxy in about four billion years, but due to the large amount of empty space between them, most stars are not expected to collide directly. The passage of another star system into or close to the outer reaches of the Solar System could trigger a swarm of asteroid impacts as the orbits of objects in the Oort Cloud are disturbed, or objects orbiting the two stars collide. It also increases the risk of catastrophic irradiation of the Earth. Astronomers have identified fourteen stars with a 90% chance of coming within 3.26 light years of the Sun in the next few million years, and four within 1.6 light years, including HIP 85605 and Gliese 710. Observational data on nearby stars remain too incomplete for a full catalog of near misses, but more data are being collected by the Gaia spacecraft. Physics hazards Strangelets, if they exist, might naturally be produced by strange stars, and in the case of a collision, might escape and hit the Earth. Likewise, a false vacuum collapse could be triggered elsewhere in the universe. Gamma-ray burst Another interstellar threat is a gamma-ray burst, typically produced by a supernova when a star collapses inward on itself and then "bounces" outward in a massive explosion. Under certain circumstances, these events are thought to produce massive bursts of gamma radiation emanating outward from the axis of rotation of the star. If such an event were to occur oriented towards the Earth, the massive amounts of gamma radiation could significantly affect the Earth's atmosphere and pose an existential threat to all life. Such a gamma-ray burst may have been the cause of the Ordovician–Silurian extinction events. This scenario is unlikely in the foreseeable future. Astroengineering projects proposed to mitigate the risk of gamma-ray bursts include shielding the Earth with ionised smartdust and star lifting of nearby high-mass stars likely to explode in a supernova. A gamma-ray burst would be able to vaporize anything in its beams out to around 200 light-years. The Sun A powerful solar flare, solar superstorm, or solar micronova (a drastic and unusual increase or decrease in the Sun's power output) could have severe consequences for life on Earth. The Earth will naturally become uninhabitable within about a billion years due to the Sun's stellar evolution. In around one billion years, the Sun's brightness may increase as a result of a shortage of hydrogen, and the heating of its outer layers may cause the Earth's oceans to evaporate, leaving only minor forms of life. Well before this time, the level of carbon dioxide in the atmosphere will be too low to support plant life, destroying the foundation of the food chains. 
See Future of the Earth. About 7–8 billion years from now, if and after the Sun has become a red giant, the Earth will probably be engulfed by an expanding Sun and destroyed. Uninhabitable universe The ultimate fate of the universe is uncertain, but is likely to eventually become uninhabitable, either suddenly or gradually. If it does not collapse into the Big Crunch, over very long time scales the heat death of the universe may render life impossible. The expansion of spacetime could cause the destruction of all matter in a Big Rip scenario. If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that is known without forewarning. Such an occurrence is called vacuum decay, or the "Big Slurp". Extraterrestrial invasion Intelligent extraterrestrial life, if it exists, could invade Earth, either to exterminate and supplant human life, enslave it under a colonial system, exploit the planet's resources, or destroy it altogether. Although the existence of sentient alien life has never been conclusively proven, scientists such as Carl Sagan have posited it to be very likely. Scientists consider such a scenario technically possible, but unlikely. An article in The New York Times Magazine discussed the possible threats for humanity of intentionally sending messages aimed at extraterrestrial life into the cosmos in the context of the SETI efforts. Several public figures such as Stephen Hawking and Elon Musk have argued against sending such messages, on the grounds that extraterrestrial civilizations with technology are probably far more advanced than, and could therefore pose an existential threat to, humanity. Invasion by microscopic life is also a possibility. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991. Natural pandemic A pandemic involving one or more viruses, prions, or antibiotic-resistant bacteria. Epidemic diseases that have killed millions of people include smallpox, bubonic plague, influenza, HIV/AIDS, COVID-19, cocoliztli, typhus, and cholera. Endemic tuberculosis and malaria kill over a million people each year. Sudden introduction of various European viruses decimated indigenous American populations. A deadly pandemic restricted to humans alone would be self-limiting as its mortality would reduce the density of its target population. A pathogen with a broad host range in multiple species, however, could eventually reach even isolated human populations. U.S. officials assess that an engineered pathogen capable of "wiping out all of humanity", if left unchecked, is technically feasible and that the technical obstacles are "trivial". However, they are confident that in practice, countries would be able to "recognize and intervene effectively" to halt the spread of such a microbe and prevent human extinction. There are numerous historical examples of pandemics that have had a devastating effect on a large number of people. 
The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines, and other sources of uncertainty and the evolving nature of the risk mean natural pandemics may pose a realistic threat to human civilization. There are several classes of argument about the likelihood of pandemics. One stems from history, where the limited size of historical pandemics is evidence that larger pandemics are unlikely. This argument has been disputed on grounds including the changing risk due to changing population and behavioral patterns among humans, the limited historical record, and the existence of an anthropic bias. Another argument is based on an evolutionary model that predicts that naturally evolving pathogens will ultimately develop an upper limit to their virulence. This is because pathogens with high enough virulence quickly kill their hosts and reduce their chances of spreading the infection to new hosts or carriers. This model has limits, however, because the fitness advantage of limited virulence is primarily a function of a limited number of hosts. Any pathogen with a high virulence, high transmission rate and long incubation time may have already caused a catastrophic pandemic before ultimately virulence is limited through natural selection. Additionally, a pathogen that infects humans as a secondary host and primarily infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution. Lastly, in models where virulence level and rate of transmission are related, high levels of virulence can evolve. Virulence is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated. The size of the host population and competition between different strains of pathogens can also alter virulence. Neither of these arguments is applicable to bioengineered pathogens, and this poses entirely different risks of pandemics. Experts have concluded that "Developments in science and technology could significantly ease the development and use of high consequence biological weapons", and these "highly virulent and highly transmissible [bio-engineered pathogens] represent new potential pandemic threats". Natural climate change Climate change refers to a lasting change in the Earth's climate. The climate has ranged from ice ages to warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, near the end of the last Major Ice Age when the climate became more stable. However, abrupt climate change on the decade time scale has occurred regionally. A natural variation into a new climate regime (colder or hotter) could pose a threat to civilization. In the history of the Earth, many Ice Ages are known to have occurred. An ice age would have a serious impact on civilization because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. Currently, the world is in an Interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved later than this. Scientists do not predict that a natural ice age will occur anytime soon. 
The amount of heat-trapping gases emitted into Earth's oceans and atmosphere will prevent the next ice age, which otherwise would begin in around 50,000 years, and likely more glacial cycles. On a long time scale, natural shifts such as Milankovitch cycles (hypothesized quaternary climatic oscillations) could create unknown climate variability and change. Volcanism A geological event such as massive flood basalt, volcanism, or the eruption of a supervolcano could lead to a so-called volcanic winter, similar to a nuclear winter. Human extinction is a possibility. One such event, the Toba eruption, occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory, the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years. A massive volcano eruption would eject extraordinary volumes of volcanic dust, toxic and greenhouse gases into the atmosphere with serious effects on global climate (towards extreme global cooling: volcanic winter if short-term, and ice age if long-term) or global warming (if greenhouse gases were to prevail). When the supervolcano at Yellowstone last erupted 640,000 years ago, the thinnest layers of the ash ejected from the caldera spread over most of the United States west of the Mississippi River and part of northeastern Mexico. The magma covered much of what is now Yellowstone National Park and extended beyond, covering much of the ground from Yellowstone River in the east to Idaho falls in the west, with some of the flows extending north beyond Mammoth Springs. According to a recent study, if the Yellowstone caldera erupted again as a supervolcano, an ash layer one to three millimeters thick could be deposited as far away as New York, enough to "reduce traction on roads and runways, short out electrical transformers and cause respiratory problems". There would be centimeters of thickness over much of the U.S. Midwest, enough to disrupt crops and livestock, especially if it happened at a critical time in the growing season. The worst-affected city would likely be Billings, Montana, population 109,000, which the model predicted would be covered with ash estimated as 1.03 to 1.8 meters thick. The main long-term effect is through global climate change, which reduces the temperature globally by about 5–15 °C for a decade, together with the direct effects of the deposits of ash on their crops. A large supervolcano like Toba would deposit one or two meters thickness of ash over an area of several million square kilometers. (1000 cubic kilometers is equivalent to a one-meter thickness of ash spread over a million square kilometers). If that happened in some densely populated agricultural area, such as India, it could destroy one or two seasons of crops for two billion people. However, Yellowstone shows no signs of a supereruption at present, and it is not certain that a future supereruption will occur. Research published in 2011 finds evidence that massive volcanic eruptions caused massive coal combustion, supporting models for the significant generation of greenhouse gases. Researchers have suggested that massive volcanic eruptions through coal beds in Siberia would generate significant greenhouse gases and cause a runaway greenhouse effect. 
Massive eruptions can also throw enough pyroclastic debris and other material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred kilometers from the eruption, and perhaps billions of deaths worldwide due to the failure of the monsoons, which would result in major crop failures and starvation on a profound scale. A much more speculative concept is the verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.
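The cumulative-risk argument attributed to Martin Hellman in the nuclear-war section above can be illustrated with a minimal sketch in Python: any fixed, non-zero annual probability compounds towards certainty over a long enough horizon. The 1% annual figure below is a placeholder chosen only for illustration, not an estimate drawn from this article.

```python
# Illustrative only: compound a small, constant annual probability over time.
# The 1% annual figure is a placeholder, not an estimate from the article.
annual_p = 0.01

for years in (10, 50, 100, 500):
    cumulative = 1 - (1 - annual_p) ** years
    print(f"P(at least one event within {years} years) = {cumulative:.1%}")

# The cumulative probability rises from roughly 10% at 10 years towards
# about 99% at 500 years, which is the sense in which a non-zero annual
# risk becomes "inevitable" in the long run.
```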
4+1 architectural view model
4+1 is a view model used for "describing the architecture of software-intensive systems, based on the use of multiple, concurrent views". The views are used to describe the system from the viewpoint of different stakeholders, such as end-users, developers, system engineers, and project managers. The four views of the model are the logical, development, process, and physical views. In addition, selected use cases or scenarios are used to illustrate the architecture, serving as the 'plus one' view. Hence, the model contains 4+1 views: Logical view: The logical view is concerned with the functionality that the system provides to end-users. UML diagrams used to represent the logical view include class diagrams and state diagrams. Process view: The process view deals with the dynamic aspects of the system, explains the system processes and how they communicate, and focuses on the run-time behavior of the system. The process view addresses concerns such as concurrency, distribution, integration, performance, and scalability. UML diagrams used to represent the process view include the sequence diagram, communication diagram, and activity diagram. Development view: The development view (also known as the implementation view) illustrates a system from a programmer's perspective and is concerned with software management. UML diagrams used to represent the development view include the package diagram and the component diagram. Physical view: The physical view (also known as the deployment view) depicts the system from a system engineer's point of view. It is concerned with the topology of software components on the physical layer as well as the physical connections between these components. UML diagrams used to represent the physical view include the deployment diagram. Scenarios: The description of an architecture is illustrated using a small set of use cases, or scenarios, which become a fifth view. The scenarios describe sequences of interactions between objects and between processes. They are used to identify architectural elements and to illustrate and validate the architecture design. They also serve as a starting point for tests of an architecture prototype. This view is also known as the use case view. The 4+1 view model is generic and is not restricted to any notation, tool, or design method.
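As a small illustration of how the 4+1 views can be applied in practice, the sketch below catalogues the views for a hypothetical online bookstore. The system, its components, and the specific diagrams listed are invented for this example; the model itself prescribes only the views, not their contents.

```python
# Hypothetical mapping of a small online-bookstore system onto the 4+1 views.
# All component and diagram names are invented for illustration.
four_plus_one = {
    "logical":     ["class diagram: Catalog, Order, Customer",
                    "state diagram: order lifecycle"],
    "process":     ["sequence diagram: checkout flow",
                    "activity diagram: payment processing"],
    "development": ["package diagram: web, services, persistence layers",
                    "component diagram: REST API, search service"],
    "physical":    ["deployment diagram: load balancer, app servers, database"],
    "scenarios":   ["use case: customer places an order",
                    "use case: warehouse restocks a title"],
}

for view, artifacts in four_plus_one.items():
    print(f"{view} view:")
    for artifact in artifacts:
        print(f"  - {artifact}")
```

Each of the first four views is documented with the UML diagram types named in the description above, while the scenarios tie the other views together.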
Marsh
In ecology, a marsh is a wetland that is dominated by herbaceous plants rather than by woody plants. More in general, the word can be used for any low-lying and seasonally waterlogged terrain. In Europe and in agricultural literature low-lying meadows that require draining and embanked polderlands are also referred to as marshes or marshland. Marshes can often be found at the edges of lakes and streams, where they form a transition between the aquatic and terrestrial ecosystems. They are often dominated by grasses, rushes or reeds. If woody plants are present they tend to be low-growing shrubs, and the marsh is sometimes called a carr. This form of vegetation is what differentiates marshes from other types of wetland such as swamps, which are dominated by trees, and mires, which are wetlands that have accumulated deposits of acidic peat. Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. This biological productivity means that marshes contain 0.1% of global sequestered terrestrial carbon. Moreover, they have an outsized influence on climate resilience of coastal areas and waterways, absorbing high tides and other water changes due to extreme weather. Though some marshes are expected to migrate upland, most natural marshlands will be threatened by sea level rise and associated erosion. Basic information Marshes provide a habitat for many species of plants, animals, and insects that have adapted to living in flooded conditions or other environments. The plants must be able to survive in wet mud with low oxygen levels. Many of these plants, therefore, have aerenchyma, channels within the stem that allow air to move from the leaves into the rooting zone. Marsh plants also tend to have rhizomes for underground storage and reproduction. Common examples include cattails, sedges, papyrus and sawgrass. Aquatic animals, from fish to salamanders, are generally able to live with a low amount of oxygen in the water. Some can obtain oxygen from the air instead, while others can live indefinitely in conditions of low oxygen. The pH in marshes tends to be neutral to alkaline, as opposed to bogs, where peat accumulates under more acid conditions. Values and ecosystem services Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. Marshes have extremely high levels of biological production, some of the highest in the world, and therefore are important in supporting fisheries. Marshes also improve water quality by acting as a sink to filter pollutants and sediment from the water that flows through them. Marshes partake in water purification by providing nutrient and pollution consumption. Marshes (and other wetlands) are able to absorb water during periods of heavy rainfall and slowly release it into waterways and therefore reduce the magnitude of flooding. Marshes also provide the services of tourism, recreation, education, and research. Types of marshes Marshes differ depending mainly on their location and salinity. These factors greatly influence the range and scope of animal and plant life that can survive and reproduce in these environments. The three main types of marsh are salt marshes, freshwater tidal marshes, and freshwater marshes. These three can be found worldwide, and each contains a different set of organisms. Salt marshes Saltwater marshes are found around the world in mid to high latitudes, wherever there are sections of protected coastline. 
They are located close enough to the shoreline that the motion of the tides affects them, and, sporadically, they are covered with water. They flourish where the rate of sediment buildup is greater than the rate at which the land level is sinking. Salt marshes are dominated by specially adapted rooted vegetation, primarily salt-tolerant grasses. Salt marshes are most commonly found in lagoons, estuaries, and on the sheltered side of a shingle or sandspit. The currents there carry the fine particles around to the quiet side of the spit, and sediment begins to build up. These locations allow the marshes to absorb the excess nutrients from the water running through them before they reach the oceans and estuaries. These marshes are slowly declining. Coastal development and urban sprawl have caused significant loss of these essential habitats. Freshwater tidal marshes Although considered a freshwater marsh, the ocean tides affect this form of marsh. However, without the stresses of salinity at work in its saltwater counterpart, the diversity of the plants and animals that live in and use freshwater tidal marshes is much higher than in salt marshes. The most severe threats to this form of marsh are the increasing size and pollution of the cities surrounding them. Freshwater marshes Ranging greatly in size and geographic location, freshwater marshes make up North America's most common form of wetland. They are also the most diverse of the three types of marsh. Some examples of freshwater marsh types in North America are: Wet meadows Wet meadows occur in shallow lake basins, low-lying depressions, and the land between shallow marshes and upland areas. They also happen on the edges of large lakes and rivers. Wet meadows often have very high plant diversity and high densities of buried seeds. They are regularly flooded but are often dry in the summer. Vernal pools Vernal pools are a type of marsh found only seasonally in shallow depressions in the land. They can be covered in shallow water, but in the summer and fall, they can be completely dry. In western North America, vernal pools tend to form in open grasslands, whereas in the east, they often occur in forested landscapes. Further south, vernal pools form in pine savannas and flatwoods. Many amphibian species depend upon vernal pools for spring breeding; these ponds provide a habitat free from fish, which eat the eggs and young of amphibians. An example is the endangered gopher frog. Similar temporary ponds occur in other world ecosystems, where they may have local names. However, vernal pool can be applied to all such temporary pool ecosystems. Playa lakes Playa lakes are a form of shallow freshwater marsh in the southern high plains of the United States. Like vernal pools, they are only present at certain times of the year and generally have a circular shape. As the playa dries during the summer, conspicuous plant zonation develops along the shoreline. Prairie potholes Prairie potholes are found in northern North America, such as the Prairie Pothole Region. Glaciers once covered these landscapes, and as a result, shallow depressions were formed in great numbers. These depressions fill with water in the spring. They provide important breeding habitats for many species of waterfowl. Some pools only occur seasonally, while others retain enough water to be present all year. Riverine wetlands Many kinds of marsh occur along the fringes of large rivers. The different types are produced by factors such as water level, nutrients, ice scour, and waves. 
Embanked marshlands Large tracts of tidal marsh have been embanked and artificially drained. They are usually known by the Dutch name of polders. In Northern Germany and Scandinavia they are called Marschland, Marsch or marsk; in France marais maritime. In the Netherlands and Belgium, they are designated as marine clay districts. In East Anglia, a region in the East of England, the embanked marshes are also known as Fens. Restoration Some areas have already lost 90% of their wetlands, including marshes. They have been drained to create agricultural land or filled to accommodate urban sprawl. Restoration is returning marshes to the landscape to replace those lost in the past. Restoration can be done on a large scale, such as by allowing rivers to flood naturally in the spring, or on a small scale by returning wetlands to urban landscapes.
Behavioural genetics
Behavioural genetics, also referred to as behaviour genetics, is a field of scientific research that uses genetic methods to investigate the nature and origins of individual differences in behaviour. While the name "behavioural genetics" connotes a focus on genetic influences, the field broadly investigates the extent to which genetic and environmental factors influence individual differences, and the development of research designs that can remove the confounding of genes and environment. Behavioural genetics was founded as a scientific discipline by Francis Galton in the late 19th century, only to be discredited through association with eugenics movements before and during World War II. In the latter half of the 20th century, the field saw renewed prominence with research on inheritance of behaviour and mental illness in humans (typically using twin and family studies), as well as research on genetically informative model organisms through selective breeding and crosses. In the late 20th and early 21st centuries, technological advances in molecular genetics made it possible to measure and modify the genome directly. This led to major advances in model organism research (e.g., knockout mice) and in human studies (e.g., genome-wide association studies), leading to new scientific discoveries. Findings from behavioural genetic research have broadly impacted modern understanding of the role of genetic and environmental influences on behaviour. These include evidence that nearly all researched behaviours are under a significant degree of genetic influence, and that influence tends to increase as individuals develop into adulthood. Further, most researched human behaviours are influenced by a very large number of genes and the individual effects of these genes are very small. Environmental influences also play a strong role, but they tend to make family members more different from one another, not more similar. History Selective breeding and the domestication of animals is perhaps the earliest evidence that humans considered the idea that individual differences in behaviour could be due to natural causes. Plato and Aristotle each speculated on the basis and mechanisms of inheritance of behavioural characteristics. Plato, for example, argued in The Republic that selective breeding among the citizenry to encourage the development of some traits and discourage others, what today might be called eugenics, was to be encouraged in the pursuit of an ideal society. Behavioural genetic concepts also existed during the English Renaissance, where William Shakespeare perhaps first coined the phrase "nature versus nurture" in The Tempest, where he wrote in Act IV, Scene I, that Caliban was "A devil, a born devil, on whose nature Nurture can never stick". Modern-day behavioural genetics began with Sir Francis Galton, a nineteenth-century intellectual and cousin of Charles Darwin. Galton was a polymath who studied many subjects, including the heritability of human abilities and mental characteristics. One of Galton's investigations involved a large pedigree study of social and intellectual achievement in the English upper class. In 1869, 10 years after Darwin's On the Origin of Species, Galton published his results in Hereditary Genius. In this work, Galton found that the rate of "eminence" was highest among close relatives of eminent individuals, and decreased as the degree of relationship to eminent individuals decreased. 
While Galton could not rule out the role of environmental influences on eminence, a fact which he acknowledged, the study served to initiate an important debate about the relative roles of genes and environment on behavioural characteristics. Through his work, Galton also "introduced multivariate analysis and paved the way towards modern Bayesian statistics" that are used throughout the sciences—launching what has been dubbed the "Statistical Enlightenment". The field of behavioural genetics, as founded by Galton, was ultimately undermined by another of Galton's intellectual contributions, the founding of the eugenics movement in 20th century society. The primary idea behind eugenics was to use selective breeding combined with knowledge about the inheritance of behaviour to improve the human species. The eugenics movement was subsequently discredited by scientific corruption and genocidal actions in Nazi Germany. Behavioural genetics was thereby discredited through its association to eugenics. The field once again gained status as a distinct scientific discipline through the publication of early texts on behavioural genetics, such as Calvin S. Hall's 1951 book chapter on behavioural genetics, in which he introduced the term "psychogenetics", which enjoyed some limited popularity in the 1960s and 1970s. However, it eventually disappeared from usage in favour of "behaviour genetics". The start of behaviour genetics as a well-identified field was marked by the publication in 1960 of the book Behavior Genetics by John L. Fuller and William Robert (Bob) Thompson. It is widely accepted now that many if not most behaviours in animals and humans are under significant genetic influence, although the extent of genetic influence for any particular trait can differ widely. A decade later, in February 1970, the first issue of the journal Behavior Genetics was published and in 1972 the Behavior Genetics Association was formed with Theodosius Dobzhansky elected as the association's first president. The field has since grown and diversified, touching many scientific disciplines. Methods The primary goal of behavioural genetics is to investigate the nature and origins of individual differences in behaviour. A wide variety of different methodological approaches are used in behavioural genetic research, only a few of which are outlined below. Animal studies Investigators in animal behaviour genetics can carefully control for environmental factors and can experimentally manipulate genetic variants, allowing for a degree of causal inference that is not available in studies on human behavioural genetics. In animal research selection experiments have often been employed. For example, laboratory house mice have been bred for open-field behaviour, thermoregulatory nesting, and voluntary wheel-running behaviour. A range of methods in these designs are covered on those pages. Behavioural geneticists using model organisms employ a range of molecular techniques to alter, insert, or delete genes. These techniques include knockouts, floxing, gene knockdown, or genome editing using methods like CRISPR-Cas9. These techniques allow behavioural geneticists different levels of control in the model organism's genome, to evaluate the molecular, physiological, or behavioural outcome of genetic changes. Animals commonly used as model organisms in behavioural genetics include mice, zebra fish, and the nematode species C. elegans. Machine learning and A.I. 
developments are allowing researchers to design experiments that are able to manage the complexity and large data sets generated, allowing for increasingly complex behavioural experiments. Human studies Some research designs used in behavioural genetic research are variations on family designs (also known as pedigree designs), including twin studies and adoption studies. Quantitative genetic modelling of individuals with known genetic relationships (e.g., parent-child, sibling, dizygotic and monozygotic twins) allows one to estimate to what extent genes and environment contribute to phenotypic differences among individuals. Twin and family studies The basic intuition of the twin study is that monozygotic twins share 100% of their genome and dizygotic twins share, on average, 50% of their segregating genome. Thus, differences between the two members of a monozygotic twin pair can only be due to differences in their environment, whereas dizygotic twins will differ from one another due to genes in addition to the environment. Under this simplistic model, if dizygotic twins differ more than monozygotic twins it can only be attributable to genetic influences. An important assumption of the twin model is the equal environment assumption that monozygotic twins have the same shared environmental experiences as dizygotic twins. If, for example, monozygotic twins tend to have more similar experiences than dizygotic twins—and these experiences themselves are not genetically mediated through gene-environment correlation mechanisms—then monozygotic twins will tend to be more similar to one another than dizygotic twins for reasons that have nothing to do with genes. While this assumption should be kept in mind when interpreting the results of twin studies, research tends to support the equal environment assumption. Twin studies of monozygotic and dizygotic twins use a biometrical formulation to describe the influences on twin similarity and to infer heritability. The formulation rests on the basic observation that the variance in a phenotype is due to two sources, genes and environment. More formally, P = G + E + G×E, where P is the phenotype, G is the effect of genes, E is the effect of the environment, and G×E is a gene by environment interaction. The G term can be expanded to include additive (A), dominance (D), and epistatic genetic effects. Similarly, the environmental term can be expanded to include the shared environment (C) and the non-shared environment (E), which includes any measurement error. Dropping the gene by environment interaction for simplicity (typical in twin studies) and fully decomposing the G and E terms, we now have P = A + D + C + E. Twin research then models the similarity in monozygotic twins and dizygotic twins using simplified forms of this decomposition: ignoring dominance effects, the expected phenotypic correlation is rMZ = A + C for monozygotic twins, who share all of their segregating genes, and rDZ = ½A + C for dizygotic twins, who share half on average. The simplified Falconer formulation can then be used to derive estimates of A, C, and E. Rearranging and substituting the rMZ and rDZ equations, one can obtain an estimate of the additive genetic variance, or heritability, A = 2(rMZ − rDZ), the non-shared environmental effect E = 1 − rMZ and, finally, the shared environmental effect C = rMZ − A = 2rDZ − rMZ. The Falconer formulation is presented here to illustrate how the twin model works; modern approaches use maximum likelihood to estimate the genetic and environmental variance components (a worked numerical sketch of the Falconer estimates is given below). Measured genetic variants The Human Genome Project has allowed scientists to directly genotype the sequence of human DNA nucleotides. Once genotyped, genetic variants can be tested for association with a behavioural phenotype, such as mental disorder, cognitive ability, personality, and so on. 
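To make the Falconer arithmetic from the twin-studies discussion above concrete, here is a minimal sketch in Python. The twin correlations used are hypothetical placeholders chosen for illustration; they are not estimates taken from the literature.

```python
def falconer_ace(r_mz, r_dz):
    """Estimate standardized ACE components from twin correlations
    using the simplified Falconer formulation described above."""
    a = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c = r_mz - a            # shared environment, equivalently 2*r_dz - r_mz
    e = 1 - r_mz            # non-shared environment, including measurement error
    return a, c, e

# Hypothetical twin correlations for some trait (placeholders, not real data)
a, c, e = falconer_ace(r_mz=0.70, r_dz=0.45)
print(f"A = {a:.2f}, C = {c:.2f}, E = {e:.2f}")  # A = 0.50, C = 0.20, E = 0.30
```

As noted above, modern analyses fit these components with maximum-likelihood structural equation models rather than the closed-form Falconer estimates, but the underlying logic is the same.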
Candidate Genes. One popular approach has been to test candidate genes for association with behavioural phenotypes, where the candidate gene is selected based on some a priori theory about biological mechanisms involved in the manifestation of a behavioural trait or phenotype. In general, such studies have proven difficult to broadly replicate, and there has been concern raised that the false positive rate in this type of research is high. Genome-wide association studies In genome-wide association studies, researchers test the relationship of millions of genetic polymorphisms with behavioural phenotypes across the genome. This approach to genetic association studies is largely atheoretical, and typically not guided by a particular biological hypothesis regarding the phenotype. Genetic association findings for behavioural traits and psychiatric disorders have been found to be highly polygenic (involving many small genetic effects). Genetic variants identified to be associated with some trait or disease through GWAS may be used to improve disease risk predictions. However, the genetic variants identified through GWAS of common genetic variants are most likely to have a modest effect on disease risk or development of a given trait. This is different from the strong genetic contribution seen in Mendelian conditions or for some rare variants that may have a larger effect on disease. SNP heritability and co-heritability Recently, researchers have begun to use similarity between classically unrelated people at their measured single nucleotide polymorphisms (SNPs) to estimate genetic variation or covariation that is tagged by SNPs, using mixed effects models implemented in software such as genome-wide complex trait analysis (GCTA). To do this, researchers find the average genetic relatedness over all SNPs between all individuals in a (typically large) sample, and use Haseman–Elston regression or restricted maximum likelihood to estimate the genetic variation that is "tagged" by, or predicted by, the SNPs. The proportion of phenotypic variation that is accounted for by the genetic relatedness has been called "SNP heritability". Intuitively, SNP heritability increases to the degree that phenotypic similarity is predicted by genetic similarity at measured SNPs, and is expected to be lower than the true narrow-sense heritability to the degree that measured SNPs fail to tag (typically rare) causal variants. The value of this method is that it is an independent way to estimate heritability that does not require the same assumptions as those in twin and family studies, and that it gives insight into the allelic frequency spectrum of the causal variants underlying trait variation. Quasi-experimental designs Some behavioural genetic designs are useful not for understanding genetic influences on behaviour, but for controlling for genetic influences in order to test environmentally mediated influences on behaviour. Such behavioural genetic designs may be considered a subset of natural experiments, quasi-experiments that attempt to take advantage of naturally occurring situations that mimic true experiments by providing some control over an independent variable. Natural experiments can be particularly useful when experiments are infeasible, due to practical or ethical limitations. A general limitation of observational studies is that the relative influences of genes and environment are confounded. A simple demonstration of this fact is that measures of 'environmental' influence are heritable.
Thus, observing a correlation between an environmental risk factor and a health outcome is not necessarily evidence for environmental influence on the health outcome. Similarly, in observational studies of parent-child behavioural transmission, for example, it is impossible to know if the transmission is due to genetic or environmental influences, due to the problem of passive gene–environment correlation. The simple observation that the children of parents who use drugs are more likely to use drugs as adults does not indicate why the children are more likely to use drugs when they grow up. It could be because the children are modelling their parents' behaviour. Equally plausibly, it could be that the children inherited drug-use-predisposing genes from their parents, which put them at increased risk for drug use as adults regardless of their parents' behaviour. Adoption studies, which parse the relative effects of rearing environment and genetic inheritance, find a small to negligible effect of rearing environment on smoking, alcohol, and marijuana use in adopted children, but a larger effect of rearing environment on harder drug use. Other behavioural genetic designs include discordant twin studies, children of twins designs, and Mendelian randomization. General findings There are many broad conclusions to be drawn from behavioural genetic research about the nature and origins of behaviour. Three major conclusions include: (1) all behavioural traits and disorders are influenced by genes; (2) environmental influences tend to make members of the same family more different, rather than more similar; and (3) the influence of genes tends to increase in relative importance as individuals age. Genetic influences on behaviour are pervasive It is clear from multiple lines of evidence that all researched behavioural traits and disorders are influenced by genes; that is, they are heritable. The single largest source of evidence comes from twin studies, where it is routinely observed that monozygotic (identical) twins are more similar to one another than are same-sex dizygotic (fraternal) twins. The conclusion that genetic influences are pervasive has also been observed in research designs that do not depend on the assumptions of the twin method. Adoption studies show that adoptees are routinely more similar to their biological relatives than their adoptive relatives for a wide variety of traits and disorders. In the Minnesota Study of Twins Reared Apart, monozygotic twins separated shortly after birth were reunited in adulthood. These adopted, reared-apart twins were as similar to one another as were twins reared together on a wide range of measures including general cognitive ability, personality, religious attitudes, and vocational interests, among others. Approaches using genome-wide genotyping have allowed researchers to measure genetic relatedness between individuals and estimate heritability based on millions of genetic variants. Methods exist to test whether the extent of genetic similarity (i.e., relatedness) between nominally unrelated individuals (individuals who are not close or even distant relatives) is associated with phenotypic similarity. Such methods do not rely on the same assumptions as twin or adoption studies, and routinely find evidence for heritability of behavioural traits and disorders. Nature of environmental influence Just as all researched human behavioural phenotypes are influenced by genes (i.e., are heritable), all such phenotypes are also influenced by the environment.
The basic fact that monozygotic twins are genetically identical but are never perfectly concordant for psychiatric disorder or perfectly correlated for behavioural traits indicates that the environment shapes human behaviour. The nature of this environmental influence, however, is such that it tends to make individuals in the same family more different from one another, not more similar to one another. That is, estimates of shared environmental effects (c²) in human studies are small, negligible, or zero for the vast majority of behavioural traits and psychiatric disorders, whereas estimates of non-shared environmental effects (e²) are moderate to large. From twin studies, c² is typically estimated at 0 because the correlation between monozygotic twins is at least twice the correlation for dizygotic twins. When using the Falconer variance decomposition, this difference between monozygotic and dizygotic twin similarity results in an estimated c² of zero or below. The Falconer decomposition is simplistic. It removes the possible influence of dominance and epistatic effects which, if present, will tend to make monozygotic twins more similar than dizygotic twins and mask the influence of shared environmental effects. This is a limitation of the twin design for estimating c². However, the general conclusion that shared environmental effects are negligible does not rest on twin studies alone. Adoption research also fails to find large c² components; that is, adoptive parents and their adopted children tend to show much less resemblance to one another than the adopted child and his or her non-rearing biological parent. In studies of adoptive families with at least one biological child and one adopted child, the sibling resemblance also tends to be nearly zero for most traits that have been studied. Personality research provides an example: twin and adoption studies converge on the conclusion of zero to small influences of shared environment on broad personality traits measured by the Multidimensional Personality Questionnaire, including positive emotionality, negative emotionality, and constraint. Given the conclusion that all researched behavioural traits and psychiatric disorders are heritable, biological siblings will always tend to be more similar to one another than will adopted siblings. However, for some traits, especially when measured during adolescence, adopted siblings do show some significant similarity (e.g., correlations of .20) to one another. Traits that have been demonstrated to have significant shared environmental influences include internalizing and externalizing psychopathology, substance use and dependence, and intelligence. Nature of genetic influence Genetic effects on human behavioural outcomes can be described in multiple ways. One way to describe the effect is in terms of how much variance in the behaviour can be accounted for by alleles in the genetic variant, otherwise known as the coefficient of determination or R². An intuitive way to think about R² is that it describes the extent to which the genetic variant makes individuals who harbour different alleles different from one another on the behavioural outcome. A complementary way to describe effects of individual genetic variants is in how much change one expects on the behavioural outcome given a change in the number of risk alleles an individual harbours, often denoted by the Greek letter β (the slope in a regression equation) or, in the case of binary disease outcomes, by the odds ratio of disease given allele status.
Note the difference: R² describes the population-level effect of alleles within a genetic variant; β or the odds ratio describes the effect of having a risk allele on the individual who harbours it, relative to an individual who does not harbour a risk allele. When described on the R² metric, the effects of individual genetic variants on complex human behavioural traits and disorders are vanishingly small, with each variant accounting for only a tiny fraction of variation in the phenotype. This fact has been discovered primarily through genome-wide association studies of complex behavioural phenotypes, including results on substance use, personality, fertility, schizophrenia, depression, and endophenotypes including brain structure and function. There are a small handful of replicated and robustly studied exceptions to this rule, including the effect of APOE on Alzheimer's disease, CHRNA5 on smoking behaviour, and ALDH2 (in individuals of East Asian ancestry) on alcohol use. On the other hand, when assessing effects according to the β metric, there are a large number of genetic variants that have very large effects on complex behavioural phenotypes. The risk alleles within such variants are exceedingly rare, such that their large behavioural effects impact only a small number of individuals. Thus, when assessed at a population level using the R² metric, they account for only a small amount of the differences in risk between individuals in the population. Examples include variants within APP that result in familial forms of severe early onset Alzheimer's disease but affect only relatively few individuals. Compare this to risk alleles within APOE, which pose much smaller risk compared to APP, but are far more common and therefore affect a much greater proportion of the population. Finally, there are classical behavioural disorders that are genetically simple in their etiology, such as Huntington's disease. Huntington's is caused by a single autosomal dominant variant in the HTT gene, which is the only variant that accounts for any differences among individuals in their risk for developing the disease, assuming they live long enough. In the case of genetically simple and rare diseases such as Huntington's, the variant β and the R² are simultaneously large. Additional general findings In response to general concerns about the replicability of psychological research, behavioural geneticists Robert Plomin, John C. DeFries, Valerie Knopik, and Jenae Neiderhiser published a review of the ten most well-replicated findings from behavioural genetics research. The ten findings were: "All psychological traits show significant and substantial genetic influence." "No behavioural traits are 100% heritable." "Heritability is caused by many genes of small effect." "Phenotypic correlations between psychological traits show significant and substantial genetic mediation." "The heritability of intelligence increases throughout development." "Age-to-age stability is mainly due to genetics." "Most measures of the 'environment' show significant genetic influence." "Most associations between environmental measures and psychological traits are significantly mediated genetically." "Most environmental effects are not shared by children growing up in the same family." "Abnormal is normal." Criticisms and controversies Behavioural genetic research and findings have at times been controversial. Some of this controversy has arisen because behavioural genetic findings can challenge societal beliefs about the nature of human behaviour and abilities.
Major areas of controversy have included genetic research on topics such as racial differences, intelligence, violence, and human sexuality. Other controversies have arisen due to misunderstandings of behavioural genetic research, whether by the lay public or the researchers themselves. For example, the notion of heritability is easily misunderstood to imply causality, or that some behaviour or condition is determined by one's genetic endowment. When behavioural genetics researchers say that a behaviour is X% heritable, that does not mean that genetics causes, determines, or fixes up to X% of the behaviour. Instead, heritability is a statement about how much of the variation in a trait across a population is associated with genetic differences between individuals, as the brief simulation sketch at the end of this section illustrates. Historically, perhaps the most controversial subject has been race and genetics. Race is not a scientifically exact term, and its interpretation can depend on one's culture and country of origin. Instead, geneticists use concepts such as ancestry, which is more rigorously defined. For example, a so-called "Black" race may include all individuals of relatively recent African descent ("recent" because all humans are descended from African ancestors). However, there is more genetic diversity in Africa than in the rest of the world combined, so speaking of a "Black" race is without a precise genetic meaning. Qualitative research has fuelled arguments that behavioural genetics is an ungovernable field without scientific norms or consensus, which in turn fosters controversy. The argument continues that this state of affairs has led to controversies including race, intelligence, instances where variation within a single gene was claimed to very strongly influence a controversial phenotype (e.g., the "gay gene" controversy), and others. This argument further states that because of the persistence of controversy in behaviour genetics and the failure of disputes to be resolved, behaviour genetics does not conform to the standards of good science. The scientific assumptions on which parts of behavioural genetic research are based have also been criticized as flawed. Genome-wide association studies are often implemented with simplifying statistical assumptions, such as additivity, which may be statistically robust but unrealistic for some behaviours. Critics further contend that, in humans, behaviour genetics represents a misguided form of genetic reductionism based on inaccurate interpretations of statistical analyses. Studies comparing monozygotic (MZ) and dizygotic (DZ) twins assume that environmental influences will be the same in both types of twins, but this assumption may also be unrealistic. MZ twins may be treated more alike than DZ twins, which itself may be an example of evocative gene–environment correlation, suggesting that one's genes influence their treatment by others. It is also not possible in twin studies to eliminate effects of the shared womb environment, although studies comparing twins who experience monochorionic and dichorionic environments in utero do exist, and indicate limited impact. Studies of twins separated in early life include children who were separated not at birth but part way through childhood. The effect of early rearing environment can therefore be evaluated to some extent in such a study, by comparing twin similarity for those twins separated early and those separated later.
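To make the population-level nature of heritability concrete, the following minimal simulation sketch uses arbitrary, made-up variance components under a purely additive model. It is an illustration only, not an analysis of any real data: heritability here is simply a ratio of variances computed across the simulated population, and it says nothing about how much of any one individual's behaviour is "caused" by their genes.

```python
# Minimal sketch (arbitrary illustrative numbers): heritability as a population-level
# ratio of variances, var(G) / var(P), under a purely additive model P = G + E.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                              # hypothetical population size
a2, e2 = 0.5, 0.5                        # assumed genetic and environmental variances

G = rng.normal(0.0, np.sqrt(a2), n)      # additive genetic values
E = rng.normal(0.0, np.sqrt(e2), n)      # environmental deviations
P = G + E                                # phenotypes

h2 = G.var() / P.var()                   # a property of the population, not of a person
print(f"simulated heritability is approximately {h2:.2f}")
```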
See also Behavior Genetics Behavior Genetics Association Behavioural neurogenetics Biocultural evolution Evolutionary psychology Genes, Brain and Behavior Genome-wide association study International Behavioural and Neural Genetics Society International Society of Psychiatric Genetics Journal of Neurogenetics Nature versus nurture Personality psychology Psychiatric genetics Psychiatric Genetics Quantitative genetics References Further reading External links
Gödel, Escher, Bach
Gödel, Escher, Bach: an Eternal Golden Braid, also known as GEB, is a 1979 book by Douglas Hofstadter. By exploring common themes in the lives and works of logician Kurt Gödel, artist M. C. Escher, and composer Johann Sebastian Bach, the book expounds concepts fundamental to mathematics, symmetry, and intelligence. Through short stories, illustrations, and analysis, the book discusses how systems can acquire meaningful context despite being made of "meaningless" elements. It also discusses self-reference and formal rules, isomorphism, what it means to communicate, how knowledge can be represented and stored, the methods and limitations of symbolic representation, and even the fundamental notion of "meaning" itself. In response to confusion over the book's theme, Hofstadter emphasized that Gödel, Escher, Bach is not about the relationships of mathematics, art, and music—but rather about how cognition emerges from hidden neurological mechanisms. One point in the book presents an analogy about how individual neurons in the brain coordinate to create a unified sense of a coherent mind by comparing it to the social organization displayed in a colony of ants. Gödel, Escher, Bach won the Pulitzer Prize for general non-fiction and the National Book Award for Science Hardcover. Structure Gödel, Escher, Bach takes the form of interweaving narratives. The main chapters alternate with dialogues between imaginary characters, usually Achilles and the tortoise, first used by Zeno of Elea and later by Lewis Carroll in "What the Tortoise Said to Achilles". These origins are related in the first two dialogues, and later ones introduce new characters such as the Crab. These narratives frequently dip into self-reference and metafiction. Word play also features prominently in the work. Puns are occasionally used to connect ideas, such as the "Magnificrab, Indeed" with Bach's Magnificat in D; "SHRDLU, Toy of Man's Designing" with Bach's "Jesu, Joy of Man's Desiring"; and "Typographical Number Theory", or "TNT", which inevitably reacts explosively when it attempts to make statements about itself. One dialogue contains a story about a genie (from the Arabic "Djinn") and various "tonics" (of both the liquid and musical varieties), which is titled "Djinn and Tonic". Sometimes word play has no significant connection, such as the dialogue "A Mu Offering", which has no close affinity to Bach's The Musical Offering. One dialogue in the book is written in the form of a crab canon, in which every line before the midpoint corresponds to an identical line past the midpoint. The conversation still makes sense due to uses of common phrases that can be used as either greetings or farewells ("Good day") and the positioning of lines that double as an answer to a question in the next line. Another is a sloth canon, where one character repeats the lines of another, but slower and negated. Themes The book contains many instances of recursion and self-reference, where objects and ideas speak about or refer back to themselves. One is Quining, a term Hofstadter invented in homage to Willard Van Orman Quine, referring to programs that produce their own source code. Another is the presence of a fictional author in the index, Egbert B. Gebstadter, a man with initials E, G, and B and a surname that partially matches Hofstadter. 
Other examples of self-reference include a phonograph dubbed "Record Player X" that destroys itself by playing a record titled I Cannot Be Played on Record Player X (an analogy to Gödel's incompleteness theorems), an examination of canon form in music, and a discussion of Escher's lithograph of two hands drawing each other. To describe such self-referencing objects, Hofstadter coins the term "strange loop"—a concept he examines in more depth in his follow-up book I Am a Strange Loop. To escape many of the logical contradictions brought about by these self-referencing objects, Hofstadter discusses Zen koans. He attempts to show readers how to perceive reality outside their own experience and embrace such paradoxical questions by rejecting the premise—a strategy also called "unasking". Elements of computer science such as call stacks are also discussed in Gödel, Escher, Bach, as one dialogue describes the adventures of Achilles and the Tortoise as they make use of "pushing potion" and "popping tonic" involving entering and leaving different layers of reality. The same dialogue has a genie with a lamp containing another genie with another lamp and so on. Subsequent sections discuss the basic tenets of logic, self-referring statements, ("typeless") systems, and even programming. Hofstadter further creates BlooP and FlooP, two simple programming languages, to illustrate his point. Puzzles The book is filled with puzzles, including Hofstadter's MU puzzle, which contrasts reasoning within a defined logical system with reasoning about that system. Another example can be found in the chapter titled Contracrostipunctus, which combines the words acrostic and contrapunctus (counterpoint). In this dialogue between Achilles and the Tortoise, the author hints that there is a contrapuntal acrostic in the chapter that refers both to the author (Hofstadter) and Bach. This can be spelled out by taking the first word of each paragraph, to reveal "Hofstadter's Contracrostipunctus Acrostically Backwards Spells J. S. Bach". The second acrostic is found by taking the first letters of the words of the first acrostic and reading them backwards to get "J S Bach", as the acrostic sentence self-referentially states. Reception and impact Gödel, Escher, Bach won the Pulitzer Prize for general non-fiction and the National Book Award for Science Hardcover. Martin Gardner's July 1979 column in Scientific American stated, "Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event." For Summer 2007, the Massachusetts Institute of Technology created an online course for high school students built around the book. In its February 19, 2010, investigative summary on the 2001 anthrax attacks, the Federal Bureau of Investigation suggested that Bruce Edwards Ivins was inspired by the book to hide secret codes based upon nucleotide sequences in the anthrax-laced letters he allegedly sent in September and October 2001, using bold letters, as suggested on page 404 of the book. It was also suggested that he attempted to hide the book from investigators by throwing it in the trash. In 2019, British mathematician Marcus du Sautoy curated a series of events at London's Barbican Centre to celebrate the book's fortieth anniversary. I Am a Strange Loop Hofstadter has expressed some frustration with how Gödel, Escher, Bach was received.
He felt that readers did not fully grasp that strange loops were supposed to be the central theme of the book, and attributed this confusion to the length of the book and the breadth of the topics covered. To remedy this issue, Hofstadter published I Am a Strange Loop in 2007, which had a more focused discussion of the idea. Translation Hofstadter claims the idea of translating his book "never crossed [his] mind" when he was writing it—but when his publisher brought it up, he was "very excited about seeing [the] book in other languages, especially… French." He knew, however, that "there were a million issues to consider" when translating, since the book relies not only on word-play, but on "structural puns" as well—writing where the form and content of the work mirror each other (such as the "Crab canon" dialogue, which reads almost exactly the same forwards as backwards). Hofstadter gives an example of translation trouble in the paragraph "Mr. Tortoise, Meet Madame Tortue", saying translators "instantly ran headlong into the conflict between the feminine gender of the French noun tortue and the masculinity of my character, the Tortoise." Hofstadter agreed to the translators' suggestions of naming the French character Madame Tortue, and the Italian version Signorina Tartaruga. Because of other troubles translators might have retaining meaning, Hofstadter "painstakingly went through every sentence of Gödel, Escher, Bach, annotating a copy for translators into any language that might be targeted." Translation also gave Hofstadter a way to add new meaning and puns. For instance, in Chinese, the subtitle is not a translation of an Eternal Golden Braid, but a seemingly unrelated phrase Jí Yì Bì (集异璧, literally "collection of exotic jades"), which is homophonic to GEB in Chinese. Some material regarding this interplay is in Hofstadter's later book, Le Ton beau de Marot, which is mainly about translation. Editions See also Chinese room Church–Turing thesis Collatz conjecture Fractal Heterarchy Indra's net Isomorphism John Lucas (philosopher) Meta Mind–body problem Neural correlates of consciousness Strange loop Typographical Number Theory Notes References External links Video lectures from a summer GEB seminar for high schoolers, MIT OpenCourseWare Mårten's GEB site Class about GEB, at the University of Michigan Java 3D game based on the GEB triplets 1979 non-fiction books Basic Books books Books about consciousness Books by Douglas Hofstadter Cognitive science literature Cultural depictions of Johann Sebastian Bach Dialogues English-language books M. C. Escher Mathematics books National Book Award-winning works Books about philosophy of mathematics Pulitzer Prize for General Non-Fiction-winning works Puzzle books
Cryobiology
Cryobiology is the branch of biology that studies the effects of low temperatures on living things within Earth's cryosphere or in science. The word cryobiology is derived from the Greek words κρῧος [kryos], "cold", βίος [bios], "life", and λόγος [logos], "word". In practice, cryobiology is the study of biological material or systems at temperatures below normal. Materials or systems studied may include proteins, cells, tissues, organs, or whole organisms. Temperatures may range from moderately hypothermic conditions to cryogenic temperatures. Areas of study At least six major areas of cryobiology can be identified: 1) study of cold-adaptation of microorganisms, plants (cold hardiness), and animals, both invertebrates and vertebrates (including hibernation), 2) cryopreservation of cells, tissues, gametes, and embryos of animal and human origin for (medical) purposes of long-term storage by cooling to temperatures below the freezing point of water. This usually requires the addition of substances which protect the cells during freezing and thawing (cryoprotectants), 3) preservation of organs under hypothermic conditions for transplantation, 4) lyophilization (freeze-drying) of pharmaceuticals, 5) cryosurgery, a (minimally) invasive approach for the destruction of unhealthy tissue using cryogenic gases/fluids, and 6) physics of supercooling, ice nucleation/growth and mechanical engineering aspects of heat transfer during cooling and warming, as applied to biological systems. Cryobiology would include cryonics, the low temperature preservation of humans and mammals with the intention of future revival, although this is not part of mainstream cryobiology, depending heavily on speculative technology yet to be invented. Several of these areas of study rely on cryogenics, the branch of physics and engineering that studies the production and use of very low temperatures. Cryopreservation in nature Many living organisms are able to tolerate prolonged periods of time at temperatures below the freezing point of water. Most living organisms accumulate cryoprotectants such as antinucleating proteins, polyols, and glucose to protect themselves against frost damage by sharp ice crystals. Most plants, in particular, can safely reach temperatures of −4 °C to −12 °C. Bacteria Three species of bacteria, Carnobacterium pleistocenium, Chryseobacterium greenlandensis, and Herminiimonas glaciei, have reportedly been revived after surviving for thousands of years frozen in ice. Certain bacteria, notably Pseudomonas syringae, produce specialized proteins that serve as potent ice nucleators, which they use to force ice formation on the surface of various fruits and plants at about −2 °C. The freezing causes injuries in the epithelia and makes the nutrients in the underlying plant tissues available to the bacteria. Listeria grows slowly in temperatures as low as -1.5 °C and persists for some time in frozen foods. Plants Many plants undergo a process called hardening which allows them to survive temperatures below 0 °C for weeks to months. Cryobiology of plants explores the cellular and molecular adaptations plants develop to survive subzero temperatures, such as antifreeze proteins (AFP) and changes in membrane composition. Cryopreservation is a critical technique in plant cryobiology, used for the long-term storage of genetic material and the preservation of endangered species by maintaining plant tissues or seeds in liquid nitrogen. 
Research in this area aims to enhance agricultural productivity in cold climates, improve the storage of plant genetic resources, and understand the impacts of climate change on plant biodiversity. Animals Invertebrates Nematodes that survive below 0 °C include Trichostrongylus colubriformis and Panagrolaimus davidi. Cockroach nymphs (Periplaneta japonica) survive short periods of freezing at -6 to -8 °C. The red flat bark beetle (Cucujus clavipes) can survive after being frozen to -150 °C. The fungus gnat Exechia nugatoria can survive after being frozen to -50 °C, by a unique mechanism whereby ice crystals form in the body but not the head. Another freeze-tolerant beetle is Upis ceramboides. See insect winter ecology and antifreeze protein. Another invertebrate that is briefly tolerant to temperatures down to -273 °C is the tardigrade. The larvae of Haemonchus contortus, a nematode, can survive 44 weeks frozen at -196 °C. Vertebrates For the wood frog (Rana sylvatica), in the winter, as much as 45% of its body may freeze and turn to ice. "Ice crystals form beneath the skin and become interspersed among the body's skeletal muscles. During the freeze, the frog's breathing, blood flow, and heartbeat cease. Freezing is made possible by specialized proteins and glucose, which prevent intracellular freezing and dehydration." The wood frog can survive up to 11 days frozen at -4 °C. Other vertebrates that survive at body temperatures below 0 °C include painted turtles (Chrysemys picta), gray tree frogs (Hyla versicolor), moor frogs (Rana arvalis), box turtles (Terrapene carolina - 48 hours at -2 °C), spring peeper (Pseudacris crucifer), garter snakes (Thamnophis sirtalis - 24 hours at -1.5 °C), the chorus frog (Pseudacris triseriata), Siberian salamander (Salamandrella keyserlingii - 24 hours at -15.3 °C), European common lizard (Lacerta vivipara) and Antarctic fish such as Pagothenia borchgrevinki. Antifreeze proteins cloned from such fish have been used to confer frost-resistance on transgenic plants. Hibernating Arctic ground squirrels may have abdominal temperatures as low as −2.9 °C (26.8 °F), maintaining subzero abdominal temperatures for more than three weeks at a time, although the temperatures at the head and neck remain at 0 °C or above. Applied cryobiology Historical background Cryobiology history can be traced back to antiquity. As early as 2500 BC, low temperatures were used in medicine in Egypt. The use of cold was recommended by Hippocrates to stop bleeding and swelling. With the emergence of modern science, Robert Boyle studied the effects of low temperatures on animals. In 1949, bull semen was cryopreserved for the first time by a team of scientists led by Christopher Polge. This led to a much wider use of cryopreservation today, with many organs, tissues and cells routinely stored at low temperatures. Large organs such as hearts are usually stored and transported, for short times only, at cool but not freezing temperatures for transplantation. Cell suspensions (like blood and semen) and thin tissue sections can sometimes be stored almost indefinitely at liquid nitrogen temperature (cryopreservation). Human sperm, eggs, and embryos are routinely stored in fertility research and treatments. Controlled-rate and slow freezing are well-established techniques, pioneered in the early 1970s, which enabled the first human birth from a frozen embryo (Zoe Leyland) in 1984.
Since then, machines that freeze biological samples using programmable steps, or controlled rates, have been used all over the world for human, animal, and cell biology – 'freezing down' a sample to better preserve it for eventual thawing, before it is deep frozen, or cryopreserved, in liquid nitrogen. Such machines are used for freezing oocytes, skin, blood products, embryos, sperm, stem cells, and general tissue preservation in hospitals, veterinary practices, and research labs. The number of live births from 'slow frozen' embryos is some 300,000 to 400,000 or 20% of the estimated 3 million in vitro fertilized births. In 1986, Dr Christopher Chen of Australia reported the world's first pregnancy using oocytes slow-frozen in a British controlled-rate freezer. Cryosurgery (intended and controlled tissue destruction by ice formation) was carried out by James Arnott in 1845 in an operation on a patient with cancer. Preservation techniques Cryobiology as an applied science is primarily concerned with low-temperature preservation. Hypothermic storage is typically above 0 °C but below normothermic (32 °C to 37 °C) mammalian temperatures. Storage by cryopreservation, on the other hand, will be in the −80 to −196 °C temperature range. Organs and tissues are more frequently the objects of hypothermic storage, whereas single cells have been the most common objects cryopreserved. A rule of thumb in hypothermic storage is that every 10 °C reduction in temperature is accompanied by a 50% decrease in oxygen consumption (see the short numerical sketch later in this section). Although hibernating animals have adapted mechanisms to avoid metabolic imbalances associated with hypothermia, hypothermic organs and tissues being maintained for transplantation require special preservation solutions to counter acidosis, depressed sodium pump activity, and increased intracellular calcium. Special organ preservation solutions such as Viaspan (University of Wisconsin solution), HTK, and Celsior have been designed for this purpose. These solutions also contain ingredients to minimize damage by free radicals, prevent edema, compensate for ATP loss, etc. Cryopreservation of cells is guided by the "two-factor hypothesis" of American cryobiologist Peter Mazur, which states that excessively rapid cooling kills cells by intracellular ice formation and excessively slow cooling kills cells by either electrolyte toxicity or mechanical crushing. During slow cooling, ice forms extracellularly, causing water to osmotically leave cells, thereby dehydrating them. Intracellular ice can be much more damaging than extracellular ice. For red blood cells, the optimum cooling rate is very rapid (nearly 100 °C per second), whereas for stem cells the optimum cooling rate is very slow (1 °C per minute). Cryoprotectants, such as dimethyl sulfoxide and glycerol, are used to protect cells from freezing. A variety of cell types are protected by 10% dimethyl sulfoxide. Cryobiologists attempt to optimize cryoprotectant concentration (minimizing both ice formation and toxicity) and cooling rate. Cells may be cooled at an optimum rate to a temperature between −30 and −40 °C before being plunged into liquid nitrogen. Slow cooling methods rely on the fact that cells contain few nucleating agents, but contain naturally occurring vitrifying substances that can prevent ice formation in cells that have been moderately dehydrated. Some cryobiologists are seeking mixtures of cryoprotectants for full vitrification (zero ice formation) in preservation of cells, tissues, and organs.
Vitrification methods pose the challenge of finding cryoprotectant mixtures that minimize toxicity. In humans Human gametes and two-, four- and eight-cell embryos can survive cryopreservation at -196 °C for 10 years under well-controlled laboratory conditions. Cryopreservation in humans with regards to infertility involves preservation of embryos, sperm, or oocytes via freezing. In vitro conception is then attempted: thawed sperm may be introduced to 'fresh' eggs, frozen eggs may be thawed and combined with sperm before transfer to the uterus, or a frozen embryo may be transferred directly to the uterus. Vitrification has flaws and is not as reliable or as well proven as traditional slow-freezing of sperm, eggs, or embryos, because eggs on their own are extremely sensitive to temperature. Many researchers are also freezing ovarian tissue in conjunction with the eggs in hopes that the ovarian tissue can be transplanted back into the body, restoring normal ovulation cycles. In 2004, Donnez of Louvain in Belgium reported the first successful live birth from frozen ovarian tissue. In 1997, samples of ovarian cortex were taken from a woman with Hodgkin's lymphoma and cryopreserved in a (Planer, UK) controlled-rate freezer and then stored in liquid nitrogen. The patient underwent chemotherapy and subsequently experienced premature ovarian failure. In 2003, after freeze-thawing, orthotopic autotransplantation of ovarian cortical tissue was performed by laparoscopy, and five months after reimplantation, signs indicated recovery of regular ovulatory cycles. Eleven months after reimplantation, a viable intrauterine pregnancy was confirmed, which resulted in the first such live birth – a girl named Tamara. Therapeutic hypothermia, e.g. during heart surgery on a "cold" heart (generated by cold perfusion without any ice formation), allows for much longer operations and improves recovery rates for patients.
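The hypothermic-storage rule of thumb quoted earlier (roughly a 50% drop in oxygen consumption for every 10 °C of cooling) can be written as a Q10-style relationship and evaluated numerically. The sketch below is an illustration of that arithmetic only, with an assumed reference temperature of 37 °C; it is not a clinical or laboratory protocol.

```python
# Minimal sketch (illustration only): the rule of thumb that oxygen consumption falls
# by about 50% per 10 °C of cooling, expressed as a Q10 = 2 relationship:
# rate(T) = rate(T_ref) * 0.5 ** ((T_ref - T) / 10)
def relative_metabolic_rate(temp_c: float, ref_temp_c: float = 37.0) -> float:
    """Fraction of the reference-temperature oxygen consumption expected at temp_c."""
    return 0.5 ** ((ref_temp_c - temp_c) / 10.0)

for t in (37, 27, 17, 7, 4):
    print(f"{t:>2} degC -> {relative_metabolic_rate(t):.2f} of normothermic rate")
# e.g. at 7 degC the rule of thumb predicts roughly 12% of the normothermic rate.
```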
The Society for Low Temperature Biology was founded in 1964 and became a registered charity in 2003 with the purpose of promoting research into the effects of low temperatures on all types of organisms and their constituent cells, tissues, and organs. As of 2006, the society had around 130 (mostly British and European) members and holds at least one annual general meeting. The program usually includes both a symposium on a topical subject and a session of free communications on any aspect of low-temperature biology. Recent symposia have included long-term stability, preservation of aquatic organisms, cryopreservation of embryos and gametes, preservation of plants, low-temperature microscopy, vitrification (glass formation of aqueous systems during cooling), freeze drying and tissue banking. Members are informed through the Society Newsletter, which is presently published three times a year. Journals Cryobiology (publisher: Elsevier) is the foremost scientific publication in this area, with about 60 refereed contributions published each year. Articles concern any aspect of low-temperature biology and medicine (e.g. freezing, freeze-drying, hibernation, cold tolerance and adaptation, cryoprotective compounds, medical applications of reduced temperature, cryosurgery, hypothermia, and perfusion of organs). Cryo Letters is an independent UK-based rapid communication journal which publishes papers on the effects produced by low temperatures on a wide variety of biophysical and biological processes, or studies involving low-temperature techniques in the investigation of biological and ecological topics. Biopreservation and Biobanking (formerly Cell Preservation Technology) is a peer-reviewed quarterly scientific journal published by Mary Ann Liebert, Inc. dedicated to the diverse spectrum of preservation technologies including cryopreservation, dry-state (anhydrobiosis), and glassy-state and hypothermic maintenance. Cell Preservation Technology has been renamed Biopreservation and Biobanking and is the official journal of the International Society for Biological and Environmental Repositories. Problems of Cryobiology and Cryomedicine (formerly 'Kriobiologiya' (1985-1990) and 'Problems of Cryobiology' (1991-2012)) is published by the Institute for Problems of Cryobiology and Cryomedicine. The journal covers all topics related to low temperature biology, medicine and engineering. See also Cryptobiosis Aldehyde-stabilized cryopreservation References External links Cell Preservation Technology Cellular cryobiology and anhydrobiology An overview of the science behind cryobiology at the Science Creative Quarterly Phase transitions Cryogenics Cryonics
Technological evolution
The term "technological evolution" captures explanations of technological change that draw on mechanisms from evolutionary biology. Evolutionary biology was originally described in On the Origin of Species by Charles Darwin. In the style of this catchphrase, technological evolution can be used to describe the origin of new technologies. Combinatoric theory of technological change The combinatoric theory of technological change states that every technology always consists of simpler technologies, and a new technology is made of already existing technologies. One notion of this theory is that this interaction of technologies creates a network. All the technologies which interact to form a new technology can be thought of as complements, such as a screwdriver and a screw which by their interaction create the process of screwing a screw. This newly formed process of screwing a screw can be perceived as a technology itself and can therefore be represented by a new node in the network of technologies. The new technology itself can interact with other technologies to form a new technology again. As the process of combining existing technologies is repeated again and again, the network of technologies grows. A described mechanism of technological change has been termed, “combinatorial evolution”. Others have called it, “technological recursion”. Brian Arthur has elaborated how the theory is related to the mechanism of genetic recombination from evolutionary biology and in which aspects it differs. History of technological evolution Technological evolution is a theory of radical transformation of society through technological development. This theory originated with Czech philosopher Radovan Richta. Mankind In Transition; A View of the Distant Past, the Present and the Far Future, Masefield Books, 1993. Technology (which Richta defines as "a material entity created by the application of mental and physical effort to nature in order to achieve some value") evolves in three stages: tools, machine, automation. This evolution, he says, follows two trends: The pre-technological period, in which other animal species remain today (aside from some avian and primate species) was a non-rational period of the early prehistoric man. The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task, such as an arrow, plow, or hammer that augments physical labor to more efficiently achieve his objective. Later animal-powered tools such as the plow and the horse, increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket. The second technological stage was the creation of the machine. A machine (a powered machine to be more precise) is a tool that substitutes part of or all of the element of human physical effort, requiring only the control of its functions. Machines became widespread with the industrial revolution, though windmills, a type of machine, are much older. Examples of this include cars, trains, computers, and lights. Machines allow humans to tremendously exceed the limitations of their bodies. 
Putting a machine on the farm, a tractor, increased food productivity at least tenfold over the technology of the plow and the horse. The third and final stage of technological evolution is automation. Automation is a machine that removes the element of human control with an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers, and computer programs. Each of these three stages outlines the introduction and development of the fundamental types of technology, and all three continue to be widely used today. A spear, a plow, a pen, a knife, a glove, and an optical microscope are all examples of tools. See also Self-replicating machines in fiction Sociocultural evolution References External links The Evolution of Technology, George Basalla, University of Delaware Technology in society Evolution Technological change
Cartagena Protocol on Biosafety
The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international agreement on biosafety as a supplement to the Convention on Biological Diversity (CBD) effective since 2003. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by genetically modified organisms resulting from modern biotechnology. The Biosafety Protocol makes clear that products from new technologies must be based on the precautionary principle and allow developing nations to balance public health against economic benefits. It will, for example, let countries ban imports of genetically modified organisms if they feel there is not enough scientific evidence that the product is safe, and it requires exporters to label shipments containing genetically altered commodities such as corn or cotton. The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003. As of July 2020, the Protocol had 173 parties, which includes 170 United Nations member states, the State of Palestine, Niue, and the European Union. Background The Cartagena Protocol on Biosafety, also known as the Biosafety Protocol, was adopted in January 2000, after a CBD Open-ended Ad Hoc Working Group on Biosafety had met six times between July 1996 and February 1999. The Working Group submitted a draft text of the Protocol for consideration by the Conference of the Parties at its first extraordinary meeting, which was convened for the express purpose of adopting a protocol on biosafety to the CBD. After a few delays, the Cartagena Protocol was eventually adopted on 29 January 2000. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by living modified organisms resulting from modern biotechnology. Objective In accordance with the precautionary approach contained in Principle 15 of the Rio Declaration on Environment and Development, the objective of the Protocol is to contribute to ensuring an adequate level of protection in the field of the safe transfer, handling and use of 'living modified organisms resulting from modern biotechnology' that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health, and specifically focusing on transboundary movements (Article 1 of the Protocol, SCBD 2000). Living modified organisms (LMOs) The Protocol defines a 'living modified organism' as any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology, and 'living organism' means any biological entity capable of transferring or replicating genetic material, including sterile organisms, viruses and viroids. 'Modern biotechnology' is defined in the Protocol to mean the application of in vitro nucleic acid techniques, or fusion of cells beyond the taxonomic family, that overcome natural physiological reproductive or recombination barriers and that are not techniques used in traditional breeding and selection. 'Living modified organism (LMO) products' are defined as processed material that is of living modified organism origin, containing detectable novel combinations of replicable genetic material obtained through the use of modern biotechnology. Common LMOs include agricultural crops that have been genetically modified for greater productivity or for resistance to pests or diseases.
Examples of modified crops include tomatoes, cassava, corn, cotton and soybeans. 'Living modified organism intended for direct use as food or feed, or for processing (LMO-FFP)' are agricultural commodities from GM crops. Overall the term 'living modified organisms' is equivalent to genetically modified organism – the Protocol did not make any distinction between these terms and did not use the term 'genetically modified organism.' Precautionary approach One of the outcomes of the United Nations Conference on Environment and Development (also known as the Earth Summit) held in Rio de Janeiro, Brazil, in June 1992, was the adoption of the Rio Declaration on Environment and Development, which contains 27 principles to underpin sustainable development. Commonly known as the precautionary principle, Principle 15 states that "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." Elements of the precautionary approach are reflected in a number of the provisions of the Protocol, such as: The preamble, reaffirming "the precautionary approach contained in Principle 15 of the Rio Declaration on environment and Development"; Article 1, indicating that the objective of the Protocol is "in accordance with the precautionary approach contained in Principle 15 of the Rio Declaration on Environment and Development"; Article 10.6 and 11.8, which states "Lack of scientific certainty due to insufficient relevant scientific information and knowledge regarding the extent of the potential adverse effects of an LMO on biodiversity, taking into account risks to human health, shall not prevent a Party of import from taking a decision, as appropriate, with regard to the import of the LMO in question, in order to avoid or minimize such potential adverse effects."; and Annex III on risk assessment, which notes that "Lack of scientific knowledge or scientific consensus should not necessarily be interpreted as indicating a particular level of risk, an absence of risk, or an acceptable risk." Application The Protocol applies to the transboundary movement, transit, handling and use of all living modified organisms that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health (Article 4 of the Protocol, SCBD 2000). Parties and non-parties The governing body of the Protocol is called the Conference of the Parties to the Convention serving as the meeting of the Parties to the Protocol (also the COP-MOP). The main function of this body is to review the implementation of the Protocol and make decisions necessary to promote its effective operation. Decisions under the Protocol can only be taken by Parties to the Protocol. Parties to the Convention that are not Parties to the Protocol may only participate as observers in the proceedings of meetings of the COP-MOP. The Protocol addresses the obligations of Parties in relation to the transboundary movements of LMOs to and from non-Parties to the Protocol. The transboundary movements between Parties and non-Parties must be carried out in a manner that is consistent with the objective of the Protocol. 
Parties are required to encourage non-Parties to adhere to the Protocol and to contribute information to the Biosafety Clearing-House. Relationship with the WTO A number of agreements under the World Trade Organization (WTO), such as the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement) and the Agreement on Technical Barriers to Trade (TBT Agreement), and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs), contain provisions that are relevant to the Protocol. This Protocol states in its preamble that parties: Recognize that trade and environment agreements should be mutually supportive; Emphasize that the Protocol is not interpreted as implying a change in the rights and obligations under any existing agreements; and Understand that the above recital is not intended to subordinate the Protocol to other international agreements. Main features Overview of features The Protocol promotes biosafety by establishing rules and procedures for the safe transfer, handling, and use of LMOs, with specific focus on transboundary movements of LMOs. It features a set of procedures including one for LMOs that are to be intentionally introduced into the environment called the advance informed agreement procedure, and one for LMOs that are intended to be used directly as food or feed or for processing. Parties to the Protocol must ensure that LMOs are handled, packaged and transported under conditions of safety. Furthermore, the shipment of LMOs subject to transboundary movement must be accompanied by appropriate documentation specifying, among other things, identity of LMOs and contact point for further information. These procedures and requirements are designed to provide importing Parties with the necessary information needed for making informed decisions about whether or not to accept LMO imports and for handling them in a safe manner. The Party of import makes its decisions in accordance with scientifically sound risk assessments. The Protocol sets out principles and methodologies on how to conduct a risk assessment. In case of insufficient relevant scientific information and knowledge, the Party of import may use precaution in making their decisions on import. Parties may also take into account, consistent with their international obligations, socio-economic considerations in reaching decisions on import of LMOs. Parties must also adopt measures for managing any risks identified by the risk assessment, and they must take necessary steps in the event of accidental release of LMOs. To facilitate its implementation, the Protocol establishes a Biosafety Clearing-House for Parties to exchange information, and contains a number of important provisions, including capacity-building, a financial mechanism, compliance procedures, and requirements for public awareness and participation. Procedures for moving LMOs across borders Advance Informed Agreement The "Advance Informed Agreement" (AIA) procedure applies to the first intentional transboundary movement of LMOs for intentional introduction into the environment of the Party of import. It includes four components: notification by the Party of export or the exporter, acknowledgment of receipt of notification by the Party of import, the decision procedure, and opportunity for review of decisions. The purpose of this procedure is to ensure that importing countries have both the opportunity and the capacity to assess risks that may be associated with the LMO before agreeing to its import. 
The Party of import must indicate the reasons on which its decisions are based (unless consent is unconditional). A Party of import may, at any time, in light of new scientific information, review and change a decision. A Party of export or a notifier may also request the Party of import to review its decisions. However, the Protocol's AIA procedure does not apply to certain categories of LMOs: LMOs in transit; LMOs destined for contained use; LMOs intended for direct use as food or feed or for processing While the Protocol's AIA procedure does not apply to certain categories of LMOs, Parties have the right to regulate the importation on the basis of domestic legislation. There are also allowances in the Protocol to declare certain LMOs exempt from application of the AIA procedure. LMOs intended for food or feed, or for processing LMOs intended for direct use as food or feed, or processing (LMOs-FFP) represent a large category of agricultural commodities. The Protocol, instead of using the AIA procedure, establishes a more simplified procedure for the transboundary movement of LMOs-FFP. Under this procedure, A Party must inform other Parties through the Biosafety Clearing-House, within 15 days, of its decision regarding domestic use of LMOs that may be subject to transboundary movement. Decisions by the Party of import on whether or not to accept the import of LMOs-FFP are taken under its domestic regulatory framework that is consistent with the objective of the Protocol. A developing country Party or a Party with an economy in transition may, in the absence of a domestic regulatory framework, declare through the Biosafety Clearing-House that its decisions on the first import of LMOs-FFP will be taken in accordance with risk assessment as set out in the Protocol and time frame for decision-making. Handling, transport, packaging and identification The Protocol provides for practical requirements that are deemed to contribute to the safe movement of LMOs. Parties are required to take measures for the safe handling, packaging and transportation of LMOs that are subject to transboundary movement. The Protocol specifies requirements on identification by setting out what information must be provided in documentation that should accompany transboundary shipments of LMOs. It also leaves room for possible future development of standards for handling, packaging, transport and identification of LMOs by the meeting of the Parties to the Protocol. Each Party is required to take measures ensuring that LMOs subject to intentional transboundary movement are accompanied by documentation identifying the LMOs and providing contact details of persons responsible for such movement. The details of these requirements vary according to the intended use of the LMOs, and, in the case of LMOs for food, feed or for processing, they should be further addressed by the governing body of the Protocol. (Article 18 of the Protocol, SCBD 2000). The first meeting of the Parties adopted decisions outlining identification requirements for different categories of LMOs (Decision BS-I/6, SCBD 2004). However, the second meeting of the Parties failed to reach agreement on the detailed requirements to identify LMOs intended for direct use as food, feed or for processing and will need to reconsider this issue at its third meeting in March 2006. 
Biosafety Clearing-House The Protocol established a Biosafety Clearing-House (BCH), in order to facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and to assist Parties to implement the Protocol (Article 20 of the Protocol, SCBD 2000). It was established in a phased manner, and the first meeting of the Parties approved the transition from the pilot phase to the fully operational phase, and adopted modalities for its operations (Decision BS-I/3, SCBD 2004). See also Biosafety Clearing-House Substantial equivalence Nagoya Protocol, another supplementary protocol adopted by the CBD References Secretariat of the Convention on Biological Diversity (2000) Cartagena Protocol on Biosafety to the Convention on Biological Diversity: text and annexes. Montreal, Quebec, Canada. Secretariat of the Convention on Biological Diversity (2004) Global Biosafety – From concepts to action: Decisions adopted by the first meeting of the Conference of the Parties to the Convention on Biological Diversity serving as the meeting of the Parties to the Cartagena Protocol on Biosafety. Montreal, Quebec, Canada. External links Biosafety Protocol Homepage Ratifications at depositary Biosafety Clearing-House Central Portal Text of the Protocol Map showing the state of the ratification of the Cartagena Protocol on Biosafety. Introductory note by Laurence Boisson de Chazournes, procedural history note and audiovisual material on the Cartagena Protocol on Biosafety to the Convention on Biological Diversity in the Historic Archives of the United Nations Audiovisual Library of International Law Health risk Biodiversity Environmental treaties United Nations treaties Treaties concluded in 2000 Treaties entered into force in 2003 2003 in the environment Treaties of Afghanistan Treaties of Albania Treaties of Algeria Treaties of Angola Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Armenia Treaties of Austria Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Barbados Treaties of Belarus Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of Bhutan Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of Brazil Treaties of Bulgaria Treaties of Burkina Faso Treaties of Burundi Treaties of Cambodia Treaties of Cameroon Treaties of Cape Verde Treaties of the Central African Republic Treaties of Chad Treaties of the People's Republic of China Treaties of Colombia Treaties of the Comoros Treaties of the Republic of the Congo Treaties of Costa Rica Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Cyprus Treaties of the Czech Republic Treaties of North Korea Treaties of the Democratic Republic of the Congo Treaties of Denmark Treaties of Djibouti Treaties of Dominica Treaties of the Dominican Republic Treaties of Ecuador Treaties of Egypt Treaties of El Salvador Treaties of Eritrea Treaties of Estonia Treaties of the Transitional Government of Ethiopia Treaties of Fiji Treaties of Finland Treaties of France Treaties of Gabon Treaties of the Gambia Treaties of Georgia (country) Treaties of Germany Treaties of Ghana Treaties of Greece Treaties of Grenada Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Guyana Treaties of Honduras Treaties of Hungary Treaties of India Treaties of Indonesia Treaties of Iran Treaties of Iraq Treaties of Ireland Treaties of Italy Treaties of Jamaica Treaties 
of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of Kenya Treaties of Kiribati Treaties of Kyrgyzstan Treaties of Laos Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of Liberia Treaties of the Libyan Arab Jamahiriya Treaties of Lithuania Treaties of Luxembourg Treaties of Madagascar Treaties of Malawi Treaties of Malaysia Treaties of the Maldives Treaties of Mali Treaties of Malta Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of Mongolia Treaties of Montenegro Treaties of Morocco Treaties of Mozambique Treaties of Myanmar Treaties of Namibia Treaties of Nauru Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Niger Treaties of Nigeria Treaties of Norway Treaties of Oman Treaties of Pakistan Treaties of Palau Treaties of the State of Palestine Treaties of Panama Treaties of Papua New Guinea Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of Poland Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of Moldova Treaties of Romania Treaties of Rwanda Treaties of Samoa Treaties of Saudi Arabia Treaties of Senegal Treaties of Serbia and Montenegro Treaties of Seychelles Treaties of Slovakia Treaties of Slovenia Treaties of the Solomon Islands Treaties of the Transitional Federal Government of Somalia Treaties of South Africa Treaties of Spain Treaties of Sri Lanka Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of the Republic of the Sudan (1985–2011) Treaties of Suriname Treaties of Eswatini Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Tajikistan Treaties of Thailand Treaties of North Macedonia Treaties of Togo Treaties of Tonga Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of Turkmenistan Treaties of Uganda Treaties of Ukraine Treaties of the United Arab Emirates Treaties of the United Kingdom Treaties of Tanzania Treaties of Uruguay Treaties of Venezuela Treaties of Vietnam Treaties of Yemen Treaties of Zambia Treaties of Zimbabwe Treaties entered into by the European Union Treaties of Niue 2000 in Canada Treaties extended to Hong Kong Treaties extended to Gibraltar Convention on Biological Diversity Treaties of Kuwait
Rote learning
Rote learning is a memorization technique based on repetition. The method rests on the premise that the recall of repeated material becomes faster the more one repeats it. Some of the alternatives to rote learning include meaningful learning, associative learning, spaced repetition and active learning. Versus critical thinking Rote learning is widely used in the mastery of foundational knowledge. Examples of school topics where rote learning is frequently used include phonics in reading, the periodic table in chemistry, multiplication tables in mathematics, anatomy in medicine, cases or statutes in law, basic formulae in any science, etc. By definition, rote learning eschews comprehension, so by itself it is an ineffective tool in mastering any complex subject at an advanced level. For instance, one illustration of rote learning can be observed in preparing quickly for exams, a technique which may be colloquially referred to as "cramming". Rote learning is sometimes disparaged with the derogative terms parrot fashion, regurgitation, cramming, or mugging because one who engages in rote learning may give the wrong impression of having understood what they have written or said. It is strongly discouraged by many new curriculum standards. For example, science and mathematics standards in the United States specifically emphasize the importance of deep understanding over the mere recall of facts, which is seen to be less important. The National Council of Teachers of Mathematics stated: More than ever, mathematics must include the mastery of concepts instead of mere memorization and the following of procedures. More than ever, school mathematics must include an understanding of how to use technology to arrive meaningfully at solutions to problems instead of endless attention to increasingly outdated computational tedium.However, advocates of traditional education have criticized the new American standards as slighting learning basic facts and elementary arithmetic, and replacing content with process-based skills. In math and science, rote methods are often used, for example to memorize formulas. There is greater understanding if students commit a formula to memory through exercises that use the formula rather than through rote repetition of the formula. Newer standards often recommend that students derive formulas themselves to achieve the best understanding. Nothing is faster than rote learning if a formula must be learned quickly for an imminent test and rote methods can be helpful for committing an understood fact to memory. However, students who learn with understanding are able to transfer their knowledge to tasks requiring problem-solving with greater success than those who learn only by rote. On the other side, those who disagree with the inquiry-based philosophy maintain that students must first develop computational skills before they can understand concepts of mathematics. These people would argue that time is better spent practicing skills rather than in investigations inventing alternatives, or justifying more than one correct answer or method. In this view, estimating answers is insufficient and, in fact, is considered to be dependent on strong foundational skills. Learning abstract concepts of mathematics is perceived to depend on a solid base of knowledge of the tools of the subject. Thus, these people believe that rote learning is an important part of the learning process. 
In computer science
Rote learning is also used to describe a simple learning pattern used in machine learning, although it does not involve repetition, unlike the usual meaning of rote learning. The machine is programmed to keep a history of calculations and compare new input against its history of inputs and outputs, retrieving the stored output if present. This pattern requires that the machine can be modeled as a pure function, always producing the same output for the same input, and can be formally described as follows:
f(x) → y
store((x), (y))
A minimal code sketch of this pattern appears at the end of this article. Rote learning was used by Samuel's checkers program on an IBM 701, a milestone in the use of artificial intelligence.
Learning methods for school
The flashcard, outline, and mnemonic device are traditional tools for memorizing course material and are examples of rote learning.
See also
References
External links
Education reform Learning methods Memorization Pedagogy
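The pattern described in the "In computer science" section above can be illustrated with a short, self-contained sketch. This is a hypothetical example written only for illustration (the function square and the class RoteLearner are not taken from any particular system): a pure function is wrapped so that each new input is first compared against the stored history of inputs and outputs, and the stored output is returned when the input has been seen before.

#include <iostream>
#include <map>

// A pure function: the same input always produces the same output.
long long square(long long x) {
    return x * x;
}

// Rote-learning wrapper: keep a history of (input, output) pairs and
// return the stored output when an input has already been seen.
class RoteLearner {
public:
    long long compute(long long x) {
        auto it = history_.find(x);
        if (it != history_.end()) {
            return it->second;       // retrieve stored output
        }
        long long y = square(x);     // f(x) -> y
        history_[x] = y;             // store((x), (y))
        return y;
    }

private:
    std::map<long long, long long> history_;
};

int main() {
    RoteLearner learner;
    std::cout << learner.compute(12) << "\n";  // computed, then stored
    std::cout << learner.compute(12) << "\n";  // retrieved from the history
    return 0;
}

Note that, as the text says, no repetition is involved: the second call does not recompute anything, it only looks the stored answer up.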
Ludonarrative dissonance
Ludonarrative dissonance is the conflict between a video game's narrative told through the non-interactive elements and the narrative told through the gameplay. Ludonarrative, a compound portmanteau of ludology and narrative, refers to the intersection in a video game of ludic elements (gameplay) and narrative elements. The term was coined by game designer Clint Hocking in 2007 in a blog post. History Clint Hocking, a former creative director at LucasArts (then at Ubisoft), coined the term on his blog in October 2007, in response to the game BioShock. As explained by Hocking, BioShock is themed around the principles of Objectivism and the nature of free will, taking place in a dystopia within the underwater city of Rapture. During the game, the player-character encounters Little Sisters, young girls that have been conditioned to extract a rare resource from corpses, which is used as a means to increase the player-character's abilities. The player has the option of following what Hocking describes as the Objectivist approach by killing the Little Sister and gaining a larger amount of the resource, or following a compassionate approach, freeing the girl from the conditioning and only receiving a modest amount of the resource in return; this choice upholds the nature of free will that the gameplay presents, according to Hocking. Hocking then points out that as the story progresses, the player-character is forced into accepting one specific path, to help the person behind the revolution within Rapture, and given no option to challenge that role. This seemingly strips away the notion of free will that the gameplay had offered. Hocking claimed that because of this, BioShock promotes the theme of self-interest through its gameplay while promoting the opposing theme of selflessness through its narrative, creating a violation of aesthetic distance that often pulls the player out of the game. Jonathan Blow also used BioShock as an example in his 2008 talk. Writer Tom Bissell, in his book Extra Lives: Why Video Games Matter (2010), notes the example of Call of Duty 4: Modern Warfare, where a player can all but kill their digital partner during gameplay without upsetting the built-in narrative of the game. Brett Makedonski of Destructoid used the Mass Effect series as another example, in which the player-character Commander Shepard can perform actions that are seen as ethically good (Paragon) or bad (Renegade), but throughout the game, Shepard is still regarded as a hero regardless of how much of a Renegade status they may have obtained. Jeffrey Matulef of Eurogamer used the term when referencing the Uncharted series, saying that "Uncharted has often been mocked for being about a supposedly likable rogue who just so happens to recklessly slaughter hundreds of people". Uncharted 4: A Thief's End acknowledged the criticism with a trophy called "Ludonarrative Dissonance" that is awarded to the player for killing 1,000 enemies. The game's co-director Neil Druckmann said that in Uncharted 4 the studio was "conscious to have fewer fights, but it came more from a desire to have a different kind of pacing than to answer the 'ludonarrative dissonance' argument. Because we don't buy into it". In 2016, Frédéric Seraphine, semiotician and researcher specialized in game design at the University of Tokyo wrote a literature review about the notion of ludonarrative dissonance. 
In this article, developing on debates sparked by Hocking's blog post, Seraphine identifies the reason of ludonarrative dissonance as an opposition between "incentives" and "directives" within the "ludic structure (the gameplay)" and the "narrative structure (the story)". Chris Plante of Polygon wrote there had been an increasing number of games being designed around violence that meant the story shifted to accommodate gameplay, rather than vice versa. He considered the game The Last of Us Part II, also directed by Druckmann, to be the culmination of this ludonarrative dissonance due to its revenge-driven plot. Plante argued that due to the appeal and constant supply of violent games it was unnecessary for them to justify why their player characters exhibited violence, and expressed his desire for more games to tell stories that didn't hinge around violence. Debates on the potential positive use of the notion Some scholars, game writers and journalists have challenged the supposedly negative nature of ludonarrative dissonance. Nick Ballantyne, managing editor at GameCloud Australia, in an article from 2015, argues: While acknowledging the potential of ludonarrative dissonance to create what he calls "emersion", defined in opposition to "immersion" as the "sensation of being pulled out of the play experience", Seraphine agrees with Ballantyne that it is possible to purposefully use ludonarrative dissonance as a storytelling device. Seraphine concludes his article with: "It seems that more games in the near future might use ludonarrative dissonance as a way to tell more compelling stories. In essence, stories are about characters and the most interesting stories are often told with dissonant characters; as it is the surprise, the disturbance, the accident, the sacrosanct disruptive element, that justifies the very act of telling a story". In a 2013 Game Developers Conference talk, Spec Ops: The Line writer Walt Williams argued that embracing ludonarrative dissonance allows the developer to portray the character as a hypocrite and forces the player to rationalize their actions. Ludonarrative consistency The Dead Space series is noted for its ludonarrative consistency. Brett Makedonski of Destructoid states that the gameplay aptly conveys the sense of sheer terror and loneliness that the narrative expertly strives to establish. References Further reading "The Last of Us 2 epitomizes one of gaming's longest debates" Polygon article on the archetypes of Ludonarrative dissonance. Video game design Video game terminology Narratology Video game studies
Chlorophyta
Chlorophyta is a division of green algae informally called chlorophytes. Description Chlorophytes are eukaryotic organisms composed of cells with a variety of coverings or walls, and usually a single green chloroplast in each cell. They are structurally diverse: most groups of chlorophytes are unicellular, such as the earliest-diverging prasinophytes, but in two major classes (Chlorophyceae and Ulvophyceae) there is an evolutionary trend toward various types of complex colonies and even multicellularity. Chloroplasts Chlorophyte cells contain green chloroplasts surrounded by a double-membrane envelope. These contain chlorophylls a and b, and the carotenoids carotin, lutein, zeaxanthin, antheraxanthin, violaxanthin, and neoxanthin, which are also present in the leaves of land plants. Some special carotenoids are present in certain groups, or are synthesized under specific environmental factors, such as siphonaxanthin, prasinoxanthin, echinenon, canthaxanthin, loroxanthin, and astaxanthin. They accumulate carotenoids under nitrogen deficiency, high irradiance of sunlight, or high salinity. In addition, they store starch inside the chloroplast as carbohydrate reserves. The thylakoids can appear single or in stacks. In contrast to other divisions of algae such as Ochrophyta, chlorophytes lack a chloroplast endoplasmic reticulum. Flagellar apparatus Chlorophytes often form flagellate cells that generally have two or four flagella of equal length, although in prasinophytes heteromorphic (i.e. differently shaped) flagella are common because different stages of flagellar maturation are displayed in the same cell. Flagella have been independently lost in some groups, such as the Chlorococcales. Flagellate chlorophyte cells have symmetrical cross-shaped ('cruciate') root systems, in which ciliary rootlets with a variable high number of microtubules alternate with rootlets composed of just two microtubules; this forms an arrangement known as the "X-2-X-2" arrangement, unique to chlorophytes. They are also distinguished from streptophytes by the place where their flagella are inserted: directly at the cell apex, whereas streptophyte flagella are inserted at the sides of the cell apex (sub-apically). Below the flagellar apparatus of prasinophytes are rhizoplasts, contractile muscle-like structures that sometimes connect with the chloroplast or the cell membrane. In core chlorophytes, this structure connects directly with the surface of the nucleus. The surface of flagella lacks microtubular hairs, but some genera present scales or fibrillar hairs. The earliest-branching groups have flagella often covered in at least one layer of scales, if not naked. Metabolism Chlorophytes and streptophytes differ in the enzymes and organelles involved in photorespiration. Chlorophyte algae use a dehydrogenase inside the mitochondria to process glycolate during photorespiration. In contrast, streptophytes (including land plants) use peroxisomes that contain glycolate oxidase, which converts glycolate to glycoxylate, and the hydrogen peroxide created as a subproduct is reduced by catalases located in the same organelles. Reproduction and life cycle Asexual reproduction is widely observed in chlorophytes. Among core chlorophytes, both unicellular groups can reproduce asexually through autospores, wall-less zoospores, fragmentation, plain cell division, and exceptionally budding. 
Multicellular thalli can reproduce asexually through motile zoospores, non-motile aplanospores, autospores, filament fragmentation, differentiated resting cells, and even unmated gametes. Colonial groups can reproduce asexually through the formation of autocolonies, where each cell divides to form a colony with the same number and arrangement of cells as the parent colony. Many chlorophytes exclusively conduct asexual reproduction, but some display sexual reproduction, which may be isogamous (i.e., gametes of both sexes are identical), anisogamous (gametes are different) or oogamous (gametes are sperm and egg cells), with an evolutionary tendency towards oogamy. Their gametes are usually specialized cells differentiated from vegetative cells, although in unicellular Volvocales the vegetative cells can function simultaneously as gametes. Most chlorophytes have a diplontic life cycle (also known as zygotic), where the gametes fuse into a zygote which germinates, grows and eventually undergoes meiosis to produce haploid spores (gametes), similarly to ochrophytes and animals. Some exceptions display a haplodiplontic life cycle, where there is an alternation of generations, similarly to land plants. These generations can be isomorphic (i.e., of similar shape and size) or heteromorphic. The formation of reproductive cells usually does not occur in specialized cells, but some Ulvophyceae have specialized reproductive structures: gametangia, to produce gametes, and sporangia, to produce spores. The earliest-diverging unicellular chlorophytes (prasinophytes) produce walled resistant stages called cysts or 'phycoma' stages before reproduction; in some groups the cysts are as large as 230 μm in diameter. To develop them, the flagellate cells form an inner wall by discharging mucilage vesicles to the outside, increase the level of lipids in the cytoplasm to enhance buoyancy, and finally develop an outer wall. Inside the cysts, the nucleus and cytoplasm undergo division into numerous flagellate cells that are released by rupturing the wall. In some species these daughter cells have been confirmed to be gametes; otherwise, sexual reproduction is unknown in prasinophytes. Ecology Free-living Chlorophytes are an important portion of the phytoplankton in both freshwater and marine habitats, fixating more than a billion tons of carbon every year. They also live as multicellular macroalgae, or seaweeds, settled along rocky ocean shores. Most species of Chlorophyta are aquatic, prevalent in both marine and freshwater environments. About 90% of all known species live in freshwater. Some species have adapted to a wide range of terrestrial environments. For example, Chlamydomonas nivalis lives on summer alpine snowfields, and Trentepohlia species, live attached to rocks or woody parts of trees. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experience extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales, are exclusively found on land. Symbionts Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. 
Two common species of the heterotrophic green alga Prototheca are pathogenic and can cause the disease protothecosis in humans and animals. With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. All members of the clade have motile flagellated swimming cells. Monostroma kuroshiense, an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group. Systematics Taxonomic history The first mention of Chlorophyta belongs to German botanist Heinrich Gottlieb Ludwig Reichenbach in his 1828 work Conspectus regni vegetabilis. Under this name, he grouped all algae, mosses ('musci') and ferns ('filices'), as well as some seed plants (Zamia and Cycas). This usage did not gain popularity. In 1914, Bohemian botanist Adolf Pascher modified the name to encompass exclusively green algae, that is, algae which contain chlorophylls a and b and store starch in their chloroplasts. Pascher established a scheme where Chlorophyta was composed of two groups: Chlorophyceae, which included algae now known as Chlorophyta, and Conjugatae, which are now known as Zygnematales and belong to the Streptophyta clade from which land plants evolved. During the 20th century, many different classification schemes for the Chlorophyta arose. The Smith system, published in 1938 by American botanist Gilbert Morgan Smith, distinguished two classes: Chlorophyceae, which contained all green algae (unicellular and multicellular) that did not grow through an apical cell; and Charophyceae, which contained only multicellular green algae that grew via an apical cell and had special sterile envelopes to protect the sex organs. With the advent of electron microscopy studies, botanists published various classification proposals based on finer cellular structures and phenomena, such as mitosis, cytokinesis, cytoskeleton, flagella and cell wall polysaccharides. British botanist proposed in 1971 a scheme which distinguishes Chlorophyta from other green algal divisions Charophyta, Prasinophyta and Euglenophyta. He included four classes of chlorophytes: Zygnemaphyceae, Oedogoniophyceae, Chlorophyceae and Bryopsidophyceae. Other proposals retained the Chlorophyta as containing all green algae, and varied from one another in the number of classes. For example, the 1984 proposal by Mattox & Stewart included five classes, while the 1985 proposal by Bold & Wynne included only two, and the 1995 proposal by Christiaan van den Hoek and coauthors included up to eleven classes. The modern usage of the name 'Chlorophyta' was established in 2004, when phycologists Lewis & McCourt firmly separated the chlorophytes from the streptophytes on the basis of molecular phylogenetics. All green algae that were more closely related to land plants than to chlorophytes were grouped as a paraphyletic division Charophyta. Within the green algae, the earliest-branching lineages were grouped under the informal name of "prasinophytes", and they were all believed to belong to the Chlorophyta clade. However, in 2020 a study recovered a new clade and division known as Prasinodermophyta, which contains two prasinophyte lineages previously considered chlorophytes. 
Below is a cladogram representing the current state of green algal classification: Classification Currently eleven chlorophyte classes are accepted, here presented in alphabetical order with some of their characteristics and biodiversity: Chlorodendrophyceae (60 species, 15 extinct): unicellular flagellates (monadoids) surrounded by an outer cell covering or theca of organic extracellular scales composed of proteins and ketosugars. Some of these scales make up hair-like structures. Capable of asexual reproduction through cell division inside the theca. No sexual reproduction has been described. Each cell contains a single chloroplast and exhibits two flagella. Present in marine and freshwater habitats. Chlorophyceae (3,974 species): either unicellular monadoids (flagellated) or coccoids (without flagella) living solitary or in varied colonial forms (including coenobial), or multicellular filamentous (branch-like) thalli that may be ramified, or foliose (leaf-like) thalli. Cells are surrounded by a crystalline covering composed of glycoproteins abundant in glycine and hydroxyproline, as well as pectins, arabinogalactan proteins, and extensin. They exhibit a haplontic life cycle with isogamy, anisogamy or oogamy. They are capable of asexual reproduction through flagellated zoospores, aplanospores, or autospores. Each cell contains a single chloroplast, a variable number of pyrenoids (including lack thereof), and from one to hundreds of flagella without mastigonemes. Present in marine, freshwater and terrestrial habitats. Chloropicophyceae (8 species): unicellular solitary coccoids. Cells are surrounded by a multi-layered cell wall. No sexual or asexual reproduction has been described. Each cell contains a single chloroplast with astaxanthin and loroxanthin, and lacks pyrenoids or flagella. They are exclusively marine. Chuariophyceae (3 extinct species): exclusively fossil group containing carbonaceous megafossils found in Ediacaran rocks, such as Tawuia. Mamiellophyceae (25 species): unicellular solitary monadoids. Cells are naked or covered by one or two layers of flat scales, mainly with spiderweb-like or reticulate ornamentation. Each cell contains one or rarely two chloroplasts, almost always with prasinoxanthin; two equal or unequal flagella, or just one flagellum, or lacking any flagella. If flagella are present, they can be either smooth or covered in scales in the same manner as the cells. Present in marine and freshwater habitats. Nephroselmidophyceae (29 species): unicellular monadoids. Cells are covered by scales. They are capable of sexual reproduction through hologamy (fusion of entire cells), and of asexual reproduction through binary fission. Each cell contains a single cloroplast, a pyrenoid, and two flagella covered by scales. Present in marine and freshwater habitats. Pedinophyceae (24 species): unicellular asymmetrical monadoids that undergo a coccoid palmelloid phase covered by mucilage. Cells lack extracellular scales, but in rare cases are covered on the posterior side by a theca. Each cell contains a single chloroplast, a pyrenoid, and a single flagellum usually covered in mastigonemes. Present in marine, freshwater and terrestrial habitats. Picocystophyceae (1 species): unicellular coccoids, ovoid and trilobed in shape. Cells are surrounded by a multi-layered cell wall of poly-arabinose, mannose, galactose and glucose. No sexual reproduction has been described. 
They are capable of asexual reproduction through autosporulation, resulting in two or rarely four daughter cells. Each cell contains a single bilobed chloroplast with diatoxanthin and monadoxanthin, without any pyrenoid or flagella. Present in saline lakes. Pyramimonadophyceae (166 species, 59 extinct): unicellular monadoids or coccoids. Cells are covered by two or more layers of organic scales. No sexual reproduction has been described, but some cells with only one flagellum have been interpreted as potential gametes. Asexual reproduction has only been observed in the coccoid forms, via zoospores. Each cell contains a single chloroplast, a pyrenoid, and between 4 and 16 flagella. The flagella are covered in at least two layers of organic scales: a bottom layer of pentagonal scales organized in 24 rows, and a top layer of limuloid scales distributed in 11 rows. They are exclusively marine. Trebouxiophyceae (926 species, 1 extinct): unicellular monadoids occasionally without flagella, or colonial, or ramified filamentous thalli, or living as the photobionts of lichen. Cells are covered by a cell wall of cellulose, algaenans, and β-galactofuranane. No sexual reproduction has been described with the exception of some observations of gamete fusion and presence of meiotic genes. They are capable of asexual reproduction through autospores or zoospores. Each cell contains a single chloroplast, a pyrenoid, and one or two pairs of smooth flagella. They are present in marine, freshwater and terrestrial habitats. Ulvophyceae (2,695 species, 990 extinct): macroscopic thalli, either filamentous (which may be ramified) or foliose (composed of monostromatic or distromatic layers) or even compact tubular forms, generally multinucleate. Cells surrounded by a cell wall that may be calcified, composed of cellulose, β-manane, β-xilane, sulphated or piruvilated polysaccharides or sulphated ramnogalacturonanes, arabinogalactan proteins, and extensin. They exhibit a haplodiplontic life cycle where the alternating generations can be isomorphic or heteromorphic. They reproduce asexually via zoospores that may be covered in scales. Each cell contains a single chloroplast, and one or two pairs of flagella without mastigonemes but covered in scales. They are present in marine, freshwater and terrestrial habitats. Evolution In February 2020, the fossilized remains of a green alga, named Proterocladus antiquus were discovered in the northern province of Liaoning, China. At around a billion years old, it is believed to be one of the oldest examples of a multicellular chlorophyte. It is currently classified as a member of order Siphonocladales, class Ulvophyceae. In 2023, a study calculated the molecular age of green algae as calibrated by this fossil. The study estimated the origin of Chlorophyta within the Mesoproterozoic era, at around 2.04–1.23 billion years ago. Usage Model organisms Among chlorophytes, a small group known as the volvocine green algae is being researched to understand the origins of cell differentiation and multicellularity. In particular, the unicellular flagellate Chlamydomonas reinhardtii and the colonial organism Volvox carteri are object of interest due to sharing homologous genes that in Volvox are directly involved in the development of two different cell types with full division of labor between swimming and reproduction, whereas in Chlamydomonas only one cell type exists that can function as a gamete. 
Other volvocine species, with intermediate characters between these two, are studied to further understand the transition towards the cellular division of labor, namely Gonium pectorale, Pandorina morum, Eudorina elegans and Pleodorina starrii. Industrial uses Chlorophyte microalgae are a valuable source of biofuel and various chemicals and products in industrial amounts, such as carotenoids, vitamins and unsaturated fatty acids. The genus Botryococcus is an efficient producer of hydrocarbons, which are converted into biodiesel. Various genera (Chlorella, Scenedesmus, Haematococcus, Dunaliella and Tetraselmis) are used as cellular factories of biomass, lipids and different vitamins for either human or animal consumption, and even for usage as pharmaceuticals. Some of their pigments are employed for cosmetics. References Citations Cited literature Further reading Chlorophyta Plant divisions Taxa named by Ludwig Reichenbach Protist phyla
Composition over inheritance
Composition over inheritance (or composite reuse principle) in object-oriented programming (OOP) is the principle that classes should favor polymorphic behavior and code reuse by their composition (by containing instances of other classes that implement the desired functionality) over inheritance from a base or parent class. Ideally all reuse can be achieved by assembling existing components, but in practice inheritance is often needed to make new ones. Therefore inheritance and object composition typically work hand-in-hand, as discussed in the book Design Patterns (1994).

Basics
An implementation of composition over inheritance typically begins with the creation of various interfaces representing the behaviors that the system must exhibit. Interfaces can facilitate polymorphic behavior. Classes implementing the identified interfaces are built and added to business domain classes as needed. Thus, system behaviors are realized without inheritance. In fact, business domain classes may all be base classes without any inheritance at all. An alternative implementation of system behaviors is accomplished by providing another class that implements the desired behavior interface. A class that contains a reference to an interface can support implementations of the interface, a choice that can be delayed until runtime.

Example
Inheritance
An example in C++ follows:

class Object {
public:
    virtual void update() {
        // no-op
    }

    virtual void draw() {
        // no-op
    }

    virtual void collide(Object objects[]) {
        // no-op
    }
};

class Visible : public Object {
    Model* model;

public:
    virtual void draw() override {
        // code to draw a model at the position of this object
    }
};

class Solid : public Object {
public:
    virtual void collide(Object objects[]) override {
        // code to check for and react to collisions with other objects
    }
};

class Movable : public Object {
public:
    virtual void update() override {
        // code to update the position of this object
    }
};

Then, suppose we also have these concrete classes:
class Player - which is Solid, Movable and Visible
class Cloud - which is Movable and Visible, but not Solid
class Building - which is Solid and Visible, but not Movable
class Trap - which is Solid, but neither Visible nor Movable

Note that multiple inheritance is dangerous if not implemented carefully because it can lead to the diamond problem. One solution to this is to create classes such as VisibleAndSolid, VisibleAndMovable, VisibleAndSolidAndMovable, etc. for every needed combination; however, this leads to a large amount of repetitive code. C++ uses virtual inheritance to solve the diamond problem of multiple inheritance.

Composition and interfaces
The C++ examples in this section demonstrate the principle of using composition and interfaces to achieve code reuse and polymorphism. Due to the C++ language not having a dedicated keyword to declare interfaces, the following C++ example uses inheritance from a pure abstract base class. For most purposes, this is functionally equivalent to the interfaces provided in other languages, such as Java and C#.
Introduce an abstract class named VisibilityDelegate, with the subclasses NotVisible and Visible, which provides a means of drawing an object:

class VisibilityDelegate {
public:
    virtual void draw() = 0;
};

class NotVisible : public VisibilityDelegate {
public:
    virtual void draw() override {
        // no-op
    }
};

class Visible : public VisibilityDelegate {
public:
    virtual void draw() override {
        // code to draw a model at the position of this object
    }
};

Introduce an abstract class named UpdateDelegate, with the subclasses NotMovable and Movable, which provides a means of moving an object:

class UpdateDelegate {
public:
    virtual void update() = 0;
};

class NotMovable : public UpdateDelegate {
public:
    virtual void update() override {
        // no-op
    }
};

class Movable : public UpdateDelegate {
public:
    virtual void update() override {
        // code to update the position of this object
    }
};

Introduce an abstract class named CollisionDelegate, with the subclasses NotSolid and Solid, which provides a means of colliding with an object:

class CollisionDelegate {
public:
    virtual void collide(Object objects[]) = 0;
};

class NotSolid : public CollisionDelegate {
public:
    virtual void collide(Object objects[]) override {
        // no-op
    }
};

class Solid : public CollisionDelegate {
public:
    virtual void collide(Object objects[]) override {
        // code to check for and react to collisions with other objects
    }
};

Finally, introduce a class named Object with members to control its visibility (using a VisibilityDelegate), movability (using an UpdateDelegate), and solidity (using a CollisionDelegate). This class has methods which delegate to its members, e.g. update() simply calls a method on the UpdateDelegate:

class Object {
    VisibilityDelegate* _v;
    UpdateDelegate* _u;
    CollisionDelegate* _c;

public:
    Object(VisibilityDelegate* v, UpdateDelegate* u, CollisionDelegate* c)
        : _v(v)
        , _u(u)
        , _c(c)
    {}

    void update() {
        _u->update();
    }

    void draw() {
        _v->draw();
    }

    void collide(Object objects[]) {
        _c->collide(objects);
    }
};

Then, concrete classes would look like:

class Player : public Object {
public:
    Player()
        : Object(new Visible(), new Movable(), new Solid())
    {}
    // ...
};

class Smoke : public Object {
public:
    Smoke()
        : Object(new Visible(), new Movable(), new NotSolid())
    {}
    // ...
};

Benefits
To favor composition over inheritance is a design principle that gives the design higher flexibility. It is more natural to build business-domain classes out of various components than trying to find commonality between them and creating a family tree. For example, an accelerator pedal and a steering wheel share very few common traits, yet both are vital components in a car. What they can do and how they can be used to benefit the car are easily defined. Composition also provides a more stable business domain in the long term as it is less prone to the quirks of the family members. In other words, it is better to compose what an object can do (has-a) than extend what it is (is-a). Initial design is simplified by identifying system object behaviors in separate interfaces instead of creating a hierarchical relationship to distribute behaviors among business-domain classes via inheritance. This approach more easily accommodates future requirements changes that would otherwise require a complete restructuring of business-domain classes in the inheritance model. Additionally, it avoids problems often associated with relatively minor changes to an inheritance-based model that includes several generations of classes. A composition relation is more flexible as it may be changed at runtime, while sub-typing relations are static and need recompilation in many languages. Some languages, notably Go and Rust, use type composition exclusively.
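One concrete consequence of the flexibility mentioned above, namely that a composition relation may be changed at runtime, can be shown with a small self-contained sketch. It is simplified from the example above (only the collision behavior is kept; the setCollisionDelegate method and the power-up scenario are illustrative additions, and smart pointers are used for brevity):

#include <iostream>
#include <memory>

class CollisionDelegate {
public:
    virtual void collide() = 0;
    virtual ~CollisionDelegate() = default;
};

class Solid : public CollisionDelegate {
public:
    void collide() override { std::cout << "blocked\n"; }
};

class NotSolid : public CollisionDelegate {
public:
    void collide() override { /* no-op: passes through */ }
};

class Object {
    std::unique_ptr<CollisionDelegate> _c;

public:
    explicit Object(std::unique_ptr<CollisionDelegate> c) : _c(std::move(c)) {}

    // The composed behavior can be replaced while the program is running.
    void setCollisionDelegate(std::unique_ptr<CollisionDelegate> c) { _c = std::move(c); }

    void collide() { _c->collide(); }
};

int main() {
    Object player(std::make_unique<Solid>());
    player.collide();                                           // prints "blocked"
    player.setCollisionDelegate(std::make_unique<NotSolid>());  // e.g. a temporary power-up
    player.collide();                                           // no output: passes through
    return 0;
}

Achieving the same change with inheritance would require the object to change its class at runtime, which most statically typed languages do not allow.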
Drawbacks
One common drawback of using composition instead of inheritance is that methods being provided by individual components may have to be implemented in the derived type, even if they are only forwarding methods (this is true in most programming languages, but not all; see the section on avoiding drawbacks below). In contrast, inheritance does not require all of the base class's methods to be re-implemented within the derived class. Rather, the derived class only needs to implement (override) the methods having different behavior than the base class methods. This can require significantly less programming effort if the base class contains many methods providing default behavior and only a few of them need to be overridden within the derived class. For example, in the C# code below, the variables and methods of the Employee base class are inherited by the HourlyEmployee and SalariedEmployee derived subclasses. Only the Pay() method needs to be implemented (specialized) by each derived subclass. The other methods are implemented by the base class itself, and are shared by all of its derived subclasses; they do not need to be re-implemented (overridden) or even mentioned in the subclass definitions.

// Base class
public abstract class Employee
{
    // Properties
    protected string Name { get; set; }
    protected int ID { get; set; }
    protected decimal PayRate { get; set; }
    protected int HoursWorked { get; }

    // Get pay for the current pay period
    public abstract decimal Pay();
}

// Derived subclass
public class HourlyEmployee : Employee
{
    // Get pay for the current pay period
    public override decimal Pay()
    {
        // Time worked is in hours
        return HoursWorked * PayRate;
    }
}

// Derived subclass
public class SalariedEmployee : Employee
{
    // Get pay for the current pay period
    public override decimal Pay()
    {
        // Pay rate is annual salary instead of hourly rate
        return HoursWorked * PayRate / 2087;
    }
}

Avoiding drawbacks
This drawback can be avoided by using traits, mixins, (type) embedding, or protocol extensions. Some languages provide specific means to mitigate this:
C# provides default interface methods since version 8.0, which allow a body to be defined for an interface member.
D provides an explicit "alias this" declaration, through which a type can forward into itself every method and member of another contained type.
Dart provides mixins with default implementations that can be shared.
Go type embedding avoids the need for forwarding methods.
Java provides default interface methods since version 8. Project Lombok supports delegation using the @Delegate annotation on the field, instead of copying and maintaining the names and types of all the methods from the delegated field.
Julia macros can be used to generate forwarding methods. Several implementations exist such as Lazy.jl and TypedDelegation.jl.
Kotlin includes the delegation pattern in the language syntax.
PHP supports traits, since PHP 5.4.
Raku provides a handles trait to facilitate method forwarding.
Rust provides traits with default implementations.
Scala (since version 3) provides an "export" clause to define aliases for selected members of an object.
Swift extensions can be used to define a default implementation of a protocol on the protocol itself, rather than within an individual type's implementation.
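C++ itself does not appear in the list above, but a broadly similar effect can be approximated with private inheritance combined with using-declarations, which republish selected members of the reused implementation without hand-written forwarders and without exposing a public is-a relationship. The following is only a sketch of that idea, not a technique cited by this article:

#include <iostream>
#include <vector>

// Reuse std::vector's implementation without publicly inheriting from it.
// The using-declarations expose selected members directly, so no
// forwarding methods have to be written for them by hand.
class IntStack : private std::vector<int> {
public:
    using std::vector<int>::empty;
    using std::vector<int>::size;

    void push(int value) { push_back(value); }

    int pop() {
        int value = back();
        pop_back();
        return value;
    }
};

int main() {
    IntStack s;
    s.push(1);
    s.push(2);
    std::cout << s.pop() << " " << s.size() << "\n";  // prints "2 1"
    return 0;
}

Because the inheritance is private, IntStack cannot be used as a std::vector<int> by client code, so the relationship stays an implementation detail rather than a subtype.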
Empirical studies A 2013 study of 93 open source Java programs (of varying size) found that: See also Delegation pattern Liskov substitution principle Object-oriented design Object composition Role-oriented programming State pattern Strategy pattern References Component-based software engineering Software architecture Programming principles Articles with example C Sharp code
Mitochondrion
A mitochondrion is an organelle found in the cells of most eukaryotes, such as animals, plants and fungi. Mitochondria have a double membrane structure and use aerobic respiration to generate adenosine triphosphate (ATP), which is used throughout the cell as a source of chemical energy. They were discovered by Albert von Kölliker in 1857 in the voluntary muscles of insects. Meaning a thread-like granule, the term mitochondrion was coined by Carl Benda in 1898. The mitochondrion is popularly nicknamed the "powerhouse of the cell", a phrase popularized by Philip Siekevitz in a 1957 Scientific American article of the same name. Some cells in some multicellular organisms lack mitochondria (for example, mature mammalian red blood cells). The multicellular animal Henneguya salminicola is known to have retained mitochondrion-related organelles despite a complete loss of their mitochondrial genome. A large number of unicellular organisms, such as microsporidia, parabasalids and diplomonads, have reduced or transformed their mitochondria into other structures, e.g. hydrogenosomes and mitosomes. The oxymonads Monocercomonoides, Streblomastix, and Blattamonas have completely lost their mitochondria. Mitochondria are commonly between 0.75 and 3 μm in cross section, but vary considerably in size and structure. Unless specifically stained, they are not visible. In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes. Mitochondria have been implicated in several human disorders and conditions, such as mitochondrial diseases, cardiac dysfunction, heart failure and autism. The number of mitochondria in a cell can vary widely by organism, tissue, and cell type. A mature red blood cell has no mitochondria, whereas a liver cell can have more than 2000. The mitochondrion is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, intermembrane space, inner membrane, cristae, and matrix. Although most of a eukaryotic cell's DNA is contained in the cell nucleus, the mitochondrion has its own genome ("mitogenome") that is substantially similar to bacterial genomes. This finding has led to general acceptance of the endosymbiotic hypothesis - that free-living prokaryotic ancestors of modern mitochondria permanently fused with eukaryotic cells in the distant past, evolving such that modern animals, plants, fungi, and other eukaryotes are able to respire to generate cellular energy. Structure Mitochondria may have a number of different shapes. A mitochondrion contains outer and inner membranes composed of phospholipid bilayers and proteins. The two membranes have different properties. Because of this double-membraned organization, there are five distinct parts to a mitochondrion: The outer mitochondrial membrane, The intermembrane space (the space between the outer and inner membranes), The inner mitochondrial membrane, The cristae space (formed by infoldings of the inner membrane), and The matrix (space within the inner membrane), which is a fluid. Mitochondria have folding to increase surface area, which in turn increases ATP (adenosine triphosphate) production. Mitochondria stripped of their outer membrane are called mitoplasts. 
Outer membrane The outer mitochondrial membrane, which encloses the entire organelle, is 60 to 75 angstroms (Å) thick. It has a protein-to-phospholipid ratio similar to that of the cell membrane (about 1:1 by weight). It contains large numbers of integral membrane proteins called porins. A major trafficking protein is the pore-forming voltage-dependent anion channel (VDAC). The VDAC is the primary transporter of nucleotides, ions and metabolites between the cytosol and the intermembrane space. It is formed as a beta barrel that spans the outer membrane, similar to that in the gram-negative bacterial outer membrane. Larger proteins can enter the mitochondrion if a signaling sequence at their N-terminus binds to a large multisubunit protein called translocase in the outer membrane, which then actively moves them across the membrane. Mitochondrial pro-proteins are imported through specialised translocation complexes. The outer membrane also contains enzymes involved in such diverse activities as the elongation of fatty acids, oxidation of epinephrine, and the degradation of tryptophan. These enzymes include monoamine oxidase, rotenone-insensitive NADH-cytochrome c-reductase, kynurenine hydroxylase and fatty acid Co-A ligase. Disruption of the outer membrane permits proteins in the intermembrane space to leak into the cytosol, leading to cell death. The outer mitochondrial membrane can associate with the endoplasmic reticulum (ER) membrane, in a structure called MAM (mitochondria-associated ER-membrane). This is important in the ER-mitochondria calcium signaling and is involved in the transfer of lipids between the ER and mitochondria. Outside the outer membrane are small (diameter: 60 Å) particles named sub-units of Parson. Intermembrane space The mitochondrial intermembrane space is the space between the outer membrane and the inner membrane. It is also known as perimitochondrial space. Because the outer membrane is freely permeable to small molecules, the concentrations of small molecules, such as ions and sugars, in the intermembrane space is the same as in the cytosol. However, large proteins must have a specific signaling sequence to be transported across the outer membrane, so the protein composition of this space is different from the protein composition of the cytosol. One protein that is localized to the intermembrane space in this way is cytochrome c. Inner membrane The inner mitochondrial membrane contains proteins with three types of functions: Those that perform the electron transport chain redox reactions ATP synthase, which generates ATP in the matrix Specific transport proteins that regulate metabolite passage into and out of the mitochondrial matrix It contains more than 151 different polypeptides, and has a very high protein-to-phospholipid ratio (more than 3:1 by weight, which is about 1 protein for 15 phospholipids). The inner membrane is home to around 1/5 of the total protein in a mitochondrion. Additionally, the inner membrane is rich in an unusual phospholipid, cardiolipin. This phospholipid was originally discovered in cow hearts in 1942, and is usually characteristic of mitochondrial and bacterial plasma membranes. Cardiolipin contains four fatty acids rather than two, and may help to make the inner membrane impermeable, and its disruption can lead to multiple clinical disorders including neurological disorders and cancer. Unlike the outer membrane, the inner membrane does not contain porins, and is highly impermeable to all molecules. 
Almost all ions and molecules require special membrane transporters to enter or exit the matrix. Proteins are ferried into the matrix via the translocase of the inner membrane (TIM) complex or via OXA1L. In addition, there is a membrane potential across the inner membrane, formed by the action of the enzymes of the electron transport chain. Inner membrane fusion is mediated by the inner membrane protein OPA1. Cristae The inner mitochondrial membrane is compartmentalized into numerous folds called cristae, which expand the surface area of the inner mitochondrial membrane, enhancing its ability to produce ATP. For typical liver mitochondria, the area of the inner membrane is about five times as large as that of the outer membrane. This ratio is variable and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Mitochondria within the same cell can have substantially different crista-density, with the ones that are required to produce more energy having much more crista-membrane surface. These folds are studded with small round bodies known as F particles or oxysomes. Matrix The matrix is the space enclosed by the inner membrane. It contains about 2/3 of the total proteins in a mitochondrion. The matrix is important in the production of ATP with the aid of the ATP synthase contained in the inner membrane. The matrix contains a highly concentrated mixture of hundreds of enzymes, special mitochondrial ribosomes, tRNA, and several copies of the mitochondrial DNA genome. Of the enzymes, the major functions include oxidation of pyruvate and fatty acids, and the citric acid cycle. The DNA molecules are packaged into nucleoids by proteins, one of which is TFAM. Function The most prominent roles of mitochondria are to produce the energy currency of the cell, ATP (i.e., phosphorylation of ADP), through respiration and to regulate cellular metabolism. The central set of reactions involved in ATP production are collectively known as the citric acid cycle, or the Krebs cycle, and oxidative phosphorylation. However, the mitochondrion has many other functions in addition to the production of ATP. Energy conversion A dominant role for the mitochondria is the production of ATP, as reflected by the large number of proteins in the inner membrane for this task. This is done by oxidizing the major products of glucose: pyruvate, and NADH, which are produced in the cytosol. This type of cellular respiration, known as aerobic respiration, is dependent on the presence of oxygen. When oxygen is limited, the glycolytic products will be metabolized by anaerobic fermentation, a process that is independent of the mitochondria. The production of ATP from glucose and oxygen has an approximately 13-times higher yield during aerobic respiration compared to fermentation. Plant mitochondria can also produce a limited amount of ATP either by breaking the sugar produced during photosynthesis or without oxygen by using the alternate substrate nitrite. ATP crosses out through the inner membrane with the help of a specific protein, and across the outer membrane via porins. After conversion of ATP to ADP by dephosphorylation that releases energy, ADP returns via the same route. 
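To put the yield comparison above into rough numbers, the following back-of-the-envelope sketch can help. The per-glucose cofactor counts (10 NADH, 2 FADH2, 4 ATP/GTP from substrate-level phosphorylation) and the P/O ratios of about 2.5 ATP per NADH and 1.5 ATP per FADH2 are common textbook values and are assumptions here, not figures taken from this article; depending on the shuttle used for cytosolic NADH and the exact ratios assumed, published estimates range from roughly 26 to 32 ATP per glucose, which is why this estimate and the "approximately 13-times" figure above differ somewhat while remaining of the same order.

#include <iostream>

int main() {
    // Assumed textbook values, per molecule of glucose fully oxidized:
    double nadh = 10.0;            // glycolysis + pyruvate dehydrogenase + citric acid cycle
    double fadh2 = 2.0;            // citric acid cycle
    double substrate_level = 4.0;  // 2 ATP (glycolysis) + 2 GTP (citric acid cycle)

    // Assumed P/O ratios (ATP per reduced cofactor oxidized).
    double atp_per_nadh = 2.5;
    double atp_per_fadh2 = 1.5;

    double aerobic = nadh * atp_per_nadh + fadh2 * atp_per_fadh2 + substrate_level;
    double fermentation = 2.0;     // glycolysis alone, no mitochondrion involved

    std::cout << "aerobic respiration: ~" << aerobic << " ATP per glucose\n";
    std::cout << "fermentation:        ~" << fermentation << " ATP per glucose\n";
    std::cout << "ratio:               ~" << aerobic / fermentation << "x\n";
    return 0;
}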
Pyruvate and the citric acid cycle Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g., in muscle) are suddenly increased by activity. In the citric acid cycle, all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence, the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway, which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here, the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted to cytosolic oxaloacetate, and ultimately to glucose, in a process that is almost the reverse of glycolysis. The enzymes of the citric acid cycle are located in the mitochondrial matrix, with the exception of succinate dehydrogenase, which is bound to the inner mitochondrial membrane as part of Complex II. The citric acid cycle oxidizes the acetyl-CoA to carbon dioxide, and, in the process, produces reduced cofactors (three molecules of NADH and one molecule of FADH2) that are a source of electrons for the electron transport chain, and a molecule of GTP (which is readily converted to an ATP). O2 and NADH: energy-releasing reactions The electrons from NADH and FADH2 are transferred to oxygen (O2) and hydrogen (protons) in several steps via an electron transport chain. NADH and FADH2 molecules are produced within the matrix via the citric acid cycle and in the cytoplasm by glycolysis. Reducing equivalents from the cytoplasm can be imported via the malate-aspartate shuttle system of antiporter proteins or fed into the electron transport chain using a glycerol phosphate shuttle.
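The per-turn stoichiometry described above can be collected into a single net equation (a standard textbook summary rather than something stated explicitly in this article; exact proton bookkeeping varies between sources):

\[
\text{acetyl-CoA} + 3\,\text{NAD}^{+} + \text{FAD} + \text{GDP} + \text{P}_{i} + 2\,\text{H}_2\text{O} \longrightarrow 2\,\text{CO}_2 + 3\,\text{NADH} + 3\,\text{H}^{+} + \text{FADH}_2 + \text{GTP} + \text{CoA-SH}
\]

The NADH and FADH2 on the right-hand side are the reduced cofactors that carry electrons into the transport chain discussed next.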
The major energy-releasing reactions that make the mitochondrion the "powerhouse of the cell" occur at protein complexes I, III and IV in the inner mitochondrial membrane (NADH dehydrogenase (ubiquinone), cytochrome c reductase, and cytochrome c oxidase). At complex IV, O2 reacts with the reduced form of iron in cytochrome c:
O2 + 4 H+(aq) + 4 Fe2+(cyt c) → 2 H2O + 4 Fe3+(cyt c)
releasing a lot of free energy from the reactants without breaking bonds of an organic fuel. The free energy put in to remove an electron from Fe2+ is released at complex III when Fe3+ of cytochrome c reacts to oxidize ubiquinol (QH2):
2 Fe3+(cyt c) + QH2 → 2 Fe2+(cyt c) + Q + 2 H+(aq)
The ubiquinone (Q) generated reacts, in complex I, with NADH:
Q + H+(aq) + NADH → QH2 + NAD+
While the reactions are controlled by an electron transport chain, free electrons are not amongst the reactants or products in the three reactions shown and therefore do not affect the free energy released, which is used to pump protons (H+) into the intermembrane space. This process is efficient, but a small percentage of electrons may prematurely reduce oxygen, forming reactive oxygen species such as superoxide. This can cause oxidative stress in the mitochondria and may contribute to the decline in mitochondrial function associated with aging. As the proton concentration increases in the intermembrane space, a strong electrochemical gradient is established across the inner membrane. The protons can return to the matrix through the ATP synthase complex, and their potential energy is used to synthesize ATP from ADP and inorganic phosphate (Pi). This process is called chemiosmosis, and was first described by Peter Mitchell, who was awarded the 1978 Nobel Prize in Chemistry for his work. Later, part of the 1997 Nobel Prize in Chemistry was awarded to Paul D. Boyer and John E. Walker for their clarification of the working mechanism of ATP synthase. Heat production Under certain conditions, protons can re-enter the mitochondrial matrix without contributing to ATP synthesis. This process is known as proton leak or mitochondrial uncoupling and is due to the facilitated diffusion of protons into the matrix. The process results in the unharnessed potential energy of the proton electrochemical gradient being released as heat. The process is mediated by a proton channel called thermogenin, or UCP1. Thermogenin is primarily found in brown adipose tissue, or brown fat, and is responsible for non-shivering thermogenesis. Brown adipose tissue is found in mammals, and is at its highest levels in early life and in hibernating animals. In humans, brown adipose tissue is present at birth and decreases with age. Mitochondrial fatty acid synthesis Mitochondrial fatty acid synthesis (mtFASII) is essential for cellular respiration and mitochondrial biogenesis. It is also thought to play a role as a mediator in intracellular signaling due to its influence on the levels of bioactive lipids, such as lysophospholipids and sphingolipids. Octanoyl-ACP (C8) is considered to be the most important end product of mtFASII, which also forms the starting substrate of lipoic acid biosynthesis. Since lipoic acid is the cofactor of important mitochondrial enzyme complexes, such as the pyruvate dehydrogenase complex (PDC), the α-ketoglutarate dehydrogenase complex (OGDC), the branched-chain α-ketoacid dehydrogenase complex (BCKDC), and the glycine cleavage system (GCS), mtFASII has an influence on energy metabolism.
Other products of mtFASII play a role in the regulation of mitochondrial translation, FeS cluster biogenesis and assembly of oxidative phosphorylation complexes. Furthermore, with the help of mtFASII and acylated ACP, acetyl-CoA regulates its consumption in mitochondria. Uptake, storage and release of calcium ions The concentration of free calcium in the cell can regulate an array of reactions and is important for signal transduction in the cell. Mitochondria can transiently store calcium, a contributing process for the cell's homeostasis of calcium. Their ability to rapidly take in calcium for later release makes them good "cytosolic buffers" for calcium. The endoplasmic reticulum (ER) is the most significant storage site of calcium, and there is a significant interplay between the mitochondrion and ER with regard to calcium. The calcium is taken up into the matrix by the mitochondrial calcium uniporter on the inner mitochondrial membrane. It is primarily driven by the mitochondrial membrane potential (see the short sketch below). Release of this calcium back into the cell's interior can occur via a sodium-calcium exchange protein or via "calcium-induced-calcium-release" pathways. This can initiate calcium spikes or calcium waves with large changes in the membrane potential. These can activate a series of second messenger system proteins that can coordinate processes such as neurotransmitter release in nerve cells and release of hormones in endocrine cells. Ca2+ influx to the mitochondrial matrix has recently been implicated as a mechanism to regulate respiratory bioenergetics by allowing the electrochemical potential across the membrane to transiently "pulse" from ΔΨ-dominated to pH-dominated, facilitating a reduction of oxidative stress. In neurons, concomitant increases in cytosolic and mitochondrial calcium act to synchronize neuronal activity with mitochondrial energy metabolism. Mitochondrial matrix calcium levels can reach the tens of micromolar levels, which is necessary for the activation of isocitrate dehydrogenase, one of the key regulatory enzymes of the Krebs cycle. Cellular proliferation regulation The relationship between cellular proliferation and mitochondria has been investigated. Tumor cells require ample ATP to synthesize bioactive compounds such as lipids, proteins, and nucleotides for rapid proliferation. The majority of ATP in tumor cells is generated via the oxidative phosphorylation pathway (OxPhos). Interference with OxPhos causes cell cycle arrest, suggesting that mitochondria play a role in cell proliferation. Mitochondrial ATP production is also vital for cell division and differentiation in infection, in addition to basic functions in the cell including the regulation of cell volume, solute concentration, and cellular architecture. ATP levels differ at various stages of the cell cycle, suggesting that there is a relationship between the abundance of ATP and the cell's ability to enter a new cell cycle. ATP's role in the basic functions of the cell makes the cell cycle sensitive to changes in the availability of mitochondria-derived ATP. The variation in ATP levels at different stages of the cell cycle supports the hypothesis that mitochondria play an important role in cell cycle regulation. Although the specific mechanisms linking mitochondria to cell cycle regulation are not well understood, studies have shown that low-energy cell cycle checkpoints monitor the energy capability before committing to another round of cell division.
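The short sketch referred to above, for the point that calcium uptake through the uniporter is driven primarily by the membrane potential, is a minimal equilibrium estimate from the Nernst relation; the numbers used (ΔΨ ≈ −180 mV, T ≈ 310 K) are typical textbook assumptions rather than values given in this article. For a divalent cation, the matrix-to-cytosol concentration ratio at electrochemical equilibrium would be

\[
\frac{[\mathrm{Ca}^{2+}]_{\text{matrix}}}{[\mathrm{Ca}^{2+}]_{\text{cytosol}}}
= \exp\!\left(\frac{-zF\,\Delta\Psi}{RT}\right)
= \exp\!\left(\frac{-2 \times 96485 \times (-0.18)}{8.314 \times 310}\right) \approx 7 \times 10^{5},
\]

i.e., the potential alone could in principle support a roughly million-fold accumulation. Actual matrix free calcium stays far below this equilibrium value (the tens of micromolar mentioned above) because uptake is kinetically limited and balanced by the efflux pathways described earlier.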
Programmed cell death and innate immunity Programmed cell death (PCD) is crucial for various physiological functions , including organ development and cellular homeostasis. It serves as an intrinsic mechanism to prevent malignant transformation and plays a fundamental role in immunity by aiding in antiviral defense, pathogen elimination, inflammation, and immune cell recruitment. Mitochondria have long been recognized for their central role in the intrinsic pathway of apoptosis, a form of PCD. In recent decades, they have also been identified as a signalling hub for much of the innate immune system. The endosymbiotic origin of mitochondria distinguishes them from other cellular components, and the exposure of mitochondrial elements to the cytosol can trigger the same pathways as infection markers. These pathways lead to apoptosis, autophagy, or the induction of proinflammatory genes. Mitochondria contribute to apoptosis by releasing cytochrome c, which directly induces the formation of apoptosomes. Additionally, they are a source of various damage-associated molecular patterns (DAMPs). These DAMPs are often recognised by the same pattern-recognition receptors (PRRs) that respond to pathogen-associated molecular patterns (PAMPs) during infections. For example, mitochondrial mtDNA resembles bacterial DNA due to its lack of CpG methylation and can be detected by Toll-like receptor 9 and cGAS. Double-stranded RNA (dsRNA), produced due to bidirectional mitochondrial transcription, can activate viral sensing pathways through RIG-I-like receptors. Additionally, the N-formylation of mitochondrial proteins, similar to that of bacterial proteins, can be recognized by formyl peptide receptors. Normally, these mitochondrial components are sequestered from the rest of the cell but are released following mitochondrial membrane permeabilization during apoptosis or passively after mitochondrial damage. However, mitochondria also play an active role in innate immunity, releasing mtDNA in response to metabolic cues. Mitochondria are also the localization site for immune and apoptosis regulatory proteins, such as BAX, MAVS (located on the outer membrane), and NLRX1 (found in the matrix). These proteins are modulated by the mitochondrial metabolic status and mitochondrial dynamics. Additional functions Mitochondria play a central role in many other metabolic tasks, such as: Signaling through mitochondrial reactive oxygen species Regulation of the membrane potential Calcium signaling (including calcium-evoked apoptosis) Regulation of cellular metabolism Certain heme synthesis reactions (see also: Porphyrin) Steroid synthesis Hormonal signaling – mitochondria are sensitive and responsive to hormones, in part by the action of mitochondrial estrogen receptors (mtERs). These receptors have been found in various tissues and cell types, including brain and heart Development and function of immune cells Neuronal mitochondria also contribute to cellular quality control by reporting neuronal status towards microglia through specialised somatic-junctions. Mitochondria of developing neurons contribute to intercellular signaling towards microglia, which communication is indispensable for proper regulation of brain development. Some mitochondrial functions are performed only in specific types of cells. For example, mitochondria in liver cells contain enzymes that allow them to detoxify ammonia, a waste product of protein metabolism. 
A mutation in the genes regulating any of these functions can result in mitochondrial diseases. Mitochondrial proteins (proteins transcribed from mitochondrial DNA) vary depending on the tissue and the species. In humans, 615 distinct types of proteins have been identified from cardiac mitochondria, whereas in rats, 940 proteins have been reported. The mitochondrial proteome is thought to be dynamically regulated. Organization and distribution Mitochondria (or related structures) are found in all eukaryotes (except the Oxymonad Monocercomonoides). Although commonly depicted as bean-like structures they form a highly dynamic network in the majority of cells where they constantly undergo fission and fusion. The population of all the mitochondria of a given cell constitutes the chondriome. Mitochondria vary in number and location according to cell type. A single mitochondrion is often found in unicellular organisms, while human liver cells have about 1000–2000 mitochondria per cell, making up 1/5 of the cell volume. The mitochondrial content of otherwise similar cells can vary substantially in size and membrane potential, with differences arising from sources including uneven partitioning at cell division, leading to extrinsic differences in ATP levels and downstream cellular processes. The mitochondria can be found nestled between myofibrils of muscle or wrapped around the sperm flagellum. Often, they form a complex 3D branching network inside the cell with the cytoskeleton. The association with the cytoskeleton determines mitochondrial shape, which can affect the function as well: different structures of the mitochondrial network may afford the population a variety of physical, chemical, and signalling advantages or disadvantages. Mitochondria in cells are always distributed along microtubules and the distribution of these organelles is also correlated with the endoplasmic reticulum. Recent evidence suggests that vimentin, one of the components of the cytoskeleton, is also critical to the association with the cytoskeleton. Mitochondria-associated ER membrane (MAM) The mitochondria-associated ER membrane (MAM) is another structural element that is increasingly recognized for its critical role in cellular physiology and homeostasis. Once considered a technical snag in cell fractionation techniques, the alleged ER vesicle contaminants that invariably appeared in the mitochondrial fraction have been re-identified as membranous structures derived from the MAM—the interface between mitochondria and the ER. Physical coupling between these two organelles had previously been observed in electron micrographs and has more recently been probed with fluorescence microscopy. Such studies estimate that at the MAM, which may comprise up to 20% of the mitochondrial outer membrane, the ER and mitochondria are separated by a mere 10–25 nm and held together by protein tethering complexes. Purified MAM from subcellular fractionation is enriched in enzymes involved in phospholipid exchange, in addition to channels associated with Ca signaling. These hints of a prominent role for the MAM in the regulation of cellular lipid stores and signal transduction have been borne out, with significant implications for mitochondrial-associated cellular phenomena, as discussed below. Not only has the MAM provided insight into the mechanistic basis underlying such physiological processes as intrinsic apoptosis and the propagation of calcium signaling, but it also favors a more refined view of the mitochondria. 
Though often seen as static, isolated 'powerhouses' hijacked for cellular metabolism through an ancient endosymbiotic event, the evolution of the MAM underscores the extent to which mitochondria have been integrated into overall cellular physiology, with intimate physical and functional coupling to the endomembrane system. Phospholipid transfer The MAM is enriched in enzymes involved in lipid biosynthesis, such as phosphatidylserine synthase on the ER face and phosphatidylserine decarboxylase on the mitochondrial face. Because mitochondria are dynamic organelles constantly undergoing fission and fusion events, they require a constant and well-regulated supply of phospholipids for membrane integrity. But mitochondria are not only a destination for the phospholipids they finish synthesis of; rather, this organelle also plays a role in inter-organelle trafficking of the intermediates and products of phospholipid biosynthetic pathways, ceramide and cholesterol metabolism, and glycosphingolipid anabolism. Such trafficking capacity depends on the MAM, which has been shown to facilitate transfer of lipid intermediates between organelles. In contrast to the standard vesicular mechanism of lipid transfer, evidence indicates that the physical proximity of the ER and mitochondrial membranes at the MAM allows for lipid flipping between opposed bilayers. Despite this unusual and seemingly energetically unfavorable mechanism, such transport does not require ATP. Instead, in yeast, it has been shown to be dependent on a multiprotein tethering structure termed the ER-mitochondria encounter structure, or ERMES, although it remains unclear whether this structure directly mediates lipid transfer or is required to keep the membranes in sufficiently close proximity to lower the energy barrier for lipid flipping. The MAM may also be part of the secretory pathway, in addition to its role in intracellular lipid trafficking. In particular, the MAM appears to be an intermediate destination between the rough ER and the Golgi in the pathway that leads to very-low-density lipoprotein, or VLDL, assembly and secretion. The MAM thus serves as a critical metabolic and trafficking hub in lipid metabolism. Calcium signaling A critical role for the ER in calcium signaling was acknowledged before such a role for the mitochondria was widely accepted, in part because the low affinity of Ca channels localized to the outer mitochondrial membrane seemed to contradict this organelle's purported responsiveness to changes in intracellular Ca flux. But the presence of the MAM resolves this apparent contradiction: the close physical association between the two organelles results in Ca microdomains at contact points that facilitate efficient Ca transmission from the ER to the mitochondria. Transmission occurs in response to so-called "Ca puffs" generated by spontaneous clustering and activation of IP3R, a canonical ER membrane Ca channel. The fate of these puffs—in particular, whether they remain restricted to isolated locales or integrated into Ca waves for propagation throughout the cell—is determined in large part by MAM dynamics. Although reuptake of Ca by the ER (concomitant with its release) modulates the intensity of the puffs, thus insulating mitochondria to a certain degree from high Ca exposure, the MAM often serves as a firewall that essentially buffers Ca puffs by acting as a sink into which free ions released into the cytosol can be funneled. 
This Ca tunneling occurs through the low-affinity Ca receptor VDAC1, which recently has been shown to be physically tethered to the IP3R clusters on the ER membrane and enriched at the MAM. The ability of mitochondria to serve as a Ca sink is a result of the electrochemical gradient generated during oxidative phosphorylation, which makes tunneling of the cation an exergonic process. Normal, mild calcium influx from cytosol into the mitochondrial matrix causes transient depolarization that is corrected by pumping out protons. But transmission of Ca is not unidirectional; rather, it is a two-way street. The properties of the Ca pump SERCA and the channel IP3R present on the ER membrane facilitate feedback regulation coordinated by MAM function. In particular, the clearance of Ca by the MAM allows for spatio-temporal patterning of Ca signaling because Ca alters IP3R activity in a biphasic manner. SERCA is likewise affected by mitochondrial feedback: uptake of Ca by the MAM stimulates ATP production, thus providing energy that enables SERCA to reload the ER with Ca for continued Ca efflux at the MAM. Thus, the MAM is not a passive buffer for Ca puffs; rather it helps modulate further Ca signaling through feedback loops that affect ER dynamics. Regulating ER release of Ca at the MAM is especially critical because only a certain window of Ca uptake sustains the mitochondria, and consequently the cell, at homeostasis. Sufficient intraorganelle Ca signaling is required to stimulate metabolism by activating dehydrogenase enzymes critical to flux through the citric acid cycle. However, once Ca signaling in the mitochondria passes a certain threshold, it stimulates the intrinsic pathway of apoptosis in part by collapsing the mitochondrial membrane potential required for metabolism. Studies examining the role of pro- and anti-apoptotic factors support this model; for example, the anti-apoptotic factor Bcl-2 has been shown to interact with IP3Rs to reduce Ca filling of the ER, leading to reduced efflux at the MAM and preventing collapse of the mitochondrial membrane potential post-apoptotic stimuli. Given the need for such fine regulation of Ca signaling, it is perhaps unsurprising that dysregulated mitochondrial Ca has been implicated in several neurodegenerative diseases, while the catalogue of tumor suppressors includes a few that are enriched at the MAM. Molecular basis for tethering Recent advances in the identification of the tethers between the mitochondrial and ER membranes suggest that the scaffolding function of the molecular elements involved is secondary to other, non-structural functions. In yeast, ERMES, a multiprotein complex of interacting ER- and mitochondrial-resident membrane proteins, is required for lipid transfer at the MAM and exemplifies this principle. One of its components, for example, is also a constituent of the protein complex required for insertion of transmembrane beta-barrel proteins into the lipid bilayer. However, a homologue of the ERMES complex has not yet been identified in mammalian cells. Other proteins implicated in scaffolding likewise have functions independent of structural tethering at the MAM; for example, ER-resident and mitochondrial-resident mitofusins form heterocomplexes that regulate the number of inter-organelle contact sites, although mitofusins were first identified for their role in fission and fusion events between individual mitochondria. Glucose-related protein 75 (grp75) is another dual-function protein. 
In addition to the matrix pool of grp75, a portion serves as a chaperone that physically links the mitochondrial and ER Ca2+ channels VDAC and IP3R for efficient Ca2+ transmission at the MAM. Another potential tether is Sigma-1R, a non-opioid receptor whose stabilization of ER-resident IP3R may preserve communication at the MAM during the metabolic stress response. Perspective The MAM is a critical signaling, metabolic, and trafficking hub in the cell that allows for the integration of ER and mitochondrial physiology. Coupling between these organelles is not simply structural but functional as well and critical for overall cellular physiology and homeostasis. The MAM thus offers a perspective on mitochondria that diverges from the traditional view of this organelle as a static, isolated unit appropriated for its metabolic capacity by the cell. Instead, this mitochondrial-ER interface emphasizes the integration of the mitochondria, the product of an endosymbiotic event, into diverse cellular processes. Recently it has also been shown that mitochondria and MAMs in neurons are anchored to specialised intercellular communication sites (so-called somatic junctions). Microglial processes monitor and protect neuronal functions at these sites, and MAMs are thought to have an important role in this type of cellular quality control. Origin and evolution There are two hypotheses about the origin of mitochondria: endosymbiotic and autogenous. The endosymbiotic hypothesis suggests that mitochondria were originally prokaryotic cells, capable of implementing oxidative mechanisms that were not possible for eukaryotic cells; they became endosymbionts living inside the eukaryote. In the autogenous hypothesis, mitochondria were born by splitting off a portion of DNA from the nucleus of the eukaryotic cell at the time of divergence with the prokaryotes; this DNA portion would have been enclosed by membranes, which could not be crossed by proteins. Since mitochondria have many features in common with bacteria, the endosymbiotic hypothesis is the more widely accepted of the two accounts. A mitochondrion contains DNA, which is organized as several copies of a single, usually circular chromosome. This mitochondrial chromosome contains genes for redox proteins, such as those of the respiratory chain. The CoRR hypothesis proposes that this co-location is required for redox regulation. The mitochondrial genome codes for some RNAs of ribosomes, and the 22 tRNAs necessary for the translation of mRNAs into protein. The circular structure is also found in prokaryotes. The proto-mitochondrion was probably closely related to Rickettsia. However, the exact relationship of the ancestor of mitochondria to the alphaproteobacteria, and whether the mitochondrion was formed at the same time as or after the nucleus, remains controversial. For example, it has been suggested that the SAR11 clade of bacteria shares a relatively recent common ancestor with the mitochondria, while phylogenomic analyses indicate that mitochondria evolved from a Pseudomonadota lineage that is closely related to or a member of alphaproteobacteria. Some papers describe mitochondria as sister to the Alphaproteobacteria, with the two together forming the sister of the Marineproteo1 group, and these in turn forming the sister of Magnetococcidae. The ribosomes coded for by the mitochondrial DNA are similar to those from bacteria in size and structure. They closely resemble the bacterial 70S ribosome and not the 80S cytoplasmic ribosomes, which are coded for by nuclear DNA.
The endosymbiotic relationship of mitochondria with their host cells was popularized by Lynn Margulis. The endosymbiotic hypothesis suggests that mitochondria descended from aerobic bacteria that somehow survived endocytosis by another cell, and became incorporated into the cytoplasm. The ability of these bacteria to conduct respiration in host cells that had relied on glycolysis and fermentation would have provided a considerable evolutionary advantage. This symbiotic relationship probably developed 1.7 to 2 billion years ago. A few groups of unicellular eukaryotes have only vestigial mitochondria or derived structures: The microsporidians, metamonads, and archamoebae. These groups appear as the most primitive eukaryotes on phylogenetic trees constructed using rRNA information, which once suggested that they appeared before the origin of mitochondria. However, this is now known to be an artifact of long-branch attraction: They are derived groups and retain genes or organelles derived from mitochondria (e. g., mitosomes and hydrogenosomes). Hydrogenosomes, mitosomes, and related organelles as found in some loricifera (e. g. Spinoloricus) and myxozoa (e. g. Henneguya zschokkei) are together classified as MROs, mitochondrion-related organelles. Monocercomonoides and other oxymonads appear to have lost their mitochondria completely and at least some of the mitochondrial functions seem to be carried out by cytoplasmic proteins now. Mitochondrial genetics Mitochondria contain their own genome. The human mitochondrial genome is a circular double-stranded DNA molecule of about 16 kilobases. It encodes 37 genes: 13 for subunits of respiratory complexes I, III, IV and V, 22 for mitochondrial tRNA (for the 20 standard amino acids, plus an extra gene for leucine and serine), and 2 for rRNA (12S and 16S rRNA). One mitochondrion can contain two to ten copies of its DNA. One of the two mitochondrial DNA (mtDNA) strands has a disproportionately higher ratio of the heavier nucleotides adenine and guanine, and this is termed the heavy strand (or H strand), whereas the other strand is termed the light strand (or L strand). The weight difference allows the two strands to be separated by centrifugation. mtDNA has one long non-coding stretch known as the non-coding region (NCR), which contains the heavy strand promoter (HSP) and light strand promoter (LSP) for RNA transcription, the origin of replication for the H strand (OriH) localized on the L strand, three conserved sequence boxes (CSBs 1–3), and a termination-associated sequence (TAS). The origin of replication for the L strand (OriL) is localized on the H strand 11,000 bp downstream of OriH, located within a cluster of genes coding for tRNA. As in prokaryotes, there is a very high proportion of coding DNA and an absence of repeats. Mitochondrial genes are transcribed as multigenic transcripts, which are cleaved and polyadenylated to yield mature mRNAs. Most proteins necessary for mitochondrial function are encoded by genes in the cell nucleus and the corresponding proteins are imported into the mitochondrion. The exact number of genes encoded by the nucleus and the mitochondrial genome differs between species. Most mitochondrial genomes are circular. In general, mitochondrial DNA lacks introns, as is the case in the human mitochondrial genome; however, introns have been observed in some eukaryotic mitochondrial DNA, such as that of yeast and protists, including Dictyostelium discoideum. Between protein-coding regions, tRNAs are present. 
Mitochondrial tRNA genes have different sequences from the nuclear tRNAs, but lookalikes of mitochondrial tRNAs have been found in the nuclear chromosomes with high sequence similarity. In animals, the mitochondrial genome is typically a single circular chromosome that is approximately 16 kb long and has 37 genes. The genes, while highly conserved, may vary in location. Curiously, this pattern is not found in the human body louse (Pediculus humanus). Instead, this mitochondrial genome is arranged in 18 minicircular chromosomes, each of which is 3–4 kb long and has one to three genes. This pattern is also found in other sucking lice, but not in chewing lice. Recombination has been shown to occur between the minichromosomes. Human population genetic studies The near-absence of genetic recombination in mitochondrial DNA makes it a useful source of information for studying population genetics and evolutionary biology. Because all the mitochondrial DNA is inherited as a single unit, or haplotype, the relationships between mitochondrial DNA from different individuals can be represented as a gene tree. Patterns in these gene trees can be used to infer the evolutionary history of populations. The classic example of this is in human evolutionary genetics, where the molecular clock can be used to provide a recent date for mitochondrial Eve. This is often interpreted as strong support for a recent modern human expansion out of Africa. Another human example is the sequencing of mitochondrial DNA from Neanderthal bones. The relatively large evolutionary distance between the mitochondrial DNA sequences of Neanderthals and living humans has been interpreted as evidence for the lack of interbreeding between Neanderthals and modern humans. However, mitochondrial DNA reflects only the history of the females in a population. This can be partially overcome by the use of paternal genetic sequences, such as the non-recombining region of the Y-chromosome. Recent measurements of the molecular clock for mitochondrial DNA reported a value of 1 mutation every 7884 years dating back to the most recent common ancestor of humans and apes, which is consistent with estimates of mutation rates of autosomal DNA (about 10⁻⁸ per base per generation). Alternative genetic code While slight variations on the standard genetic code had been predicted earlier, none was discovered until 1979, when researchers studying human mitochondrial genes determined that they used an alternative code. Nonetheless, the mitochondria of many other eukaryotes, including most plants, use the standard code. Many slight variants have been discovered since, including various alternative mitochondrial codes. Further, the AUA, AUC, and AUU codons are all allowable start codons. Some of these differences should be regarded as pseudo-changes in the genetic code due to the phenomenon of RNA editing, which is common in mitochondria. In higher plants, it was thought that CGG encoded for tryptophan and not arginine; however, the codon in the processed RNA was discovered to be the UGG codon, consistent with the standard genetic code for tryptophan. Of note, the arthropod mitochondrial genetic code has undergone parallel evolution within a phylum, with some organisms uniquely translating AGG to lysine.
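To make the "alternative code" idea concrete, the sketch below contrasts a few codons under the standard nuclear code and the vertebrate mitochondrial code. The codon reassignments shown (UGA, AUA, AGA, AGG) are the well-known vertebrate-mitochondrial differences; the small lookup tables and function name are illustrative simplifications, not a complete translation table.

```python
# Minimal illustration of how the same codon can mean different things
# under the standard nuclear code and the vertebrate mitochondrial code.
# Only a handful of codons are listed; this is a sketch, not a full table.

VERTEBRATE_MITO_EXCEPTIONS = {
    "UGA": "Trp",   # standard code: Stop
    "AUA": "Met",   # standard code: Ile
    "AGA": "Stop",  # standard code: Arg
    "AGG": "Stop",  # standard code: Arg
}

STANDARD_CODE_SUBSET = {
    "UGA": "Stop",
    "AUA": "Ile",
    "AGA": "Arg",
    "AGG": "Arg",
    "AUG": "Met",
    "UGG": "Trp",
}

def translate_codon(codon: str, mitochondrial: bool = False) -> str:
    """Return the amino acid (or 'Stop') for a codon under the chosen code."""
    codon = codon.upper().replace("T", "U")  # accept DNA or RNA spelling
    if mitochondrial and codon in VERTEBRATE_MITO_EXCEPTIONS:
        return VERTEBRATE_MITO_EXCEPTIONS[codon]
    return STANDARD_CODE_SUBSET.get(codon, "?")  # '?' = outside this sketch

if __name__ == "__main__":
    for codon in ("UGA", "AUA", "AGA", "AGG", "UGG"):
        print(codon,
              "standard:", translate_codon(codon),
              "| vertebrate mito:", translate_codon(codon, mitochondrial=True))
```

Run as-is, this prints only the handful of codons whose meaning differs between the two codes, which is essentially the kind of discrepancy the 1979 work on human mitochondrial genes uncovered.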
Replication and inheritance Mitochondria divide by mitochondrial fission, a form of binary fission that is also done by bacteria although the process is tightly regulated by the host eukaryotic cell and involves communication between and contact with several other organelles. The regulation of this division differs between eukaryotes. In many single-celled eukaryotes, their growth and division are linked to the cell cycle. For example, a single mitochondrion may divide synchronously with the nucleus. This division and segregation process must be tightly controlled so that each daughter cell receives at least one mitochondrion. In other eukaryotes (in mammals for example), mitochondria may replicate their DNA and divide mainly in response to the energy needs of the cell, rather than in phase with the cell cycle. When the energy needs of a cell are high, mitochondria grow and divide. When energy use is low, mitochondria are destroyed or become inactive. In such examples mitochondria are apparently randomly distributed to the daughter cells during the division of the cytoplasm. Mitochondrial dynamics, the balance between mitochondrial fusion and fission, is an important factor in pathologies associated with several disease conditions. The hypothesis of mitochondrial binary fission has relied on the visualization by fluorescence microscopy and conventional transmission electron microscopy (TEM). The resolution of fluorescence microscopy (≈200 nm) is insufficient to distinguish structural details, such as double mitochondrial membrane in mitochondrial division or even to distinguish individual mitochondria when several are close together. Conventional TEM has also some technical limitations in verifying mitochondrial division. Cryo-electron tomography was recently used to visualize mitochondrial division in frozen hydrated intact cells. It revealed that mitochondria divide by budding. An individual's mitochondrial genes are inherited only from the mother, with rare exceptions. In humans, when an egg cell is fertilized by a sperm, the mitochondria, and therefore the mitochondrial DNA, usually come from the egg only. The sperm's mitochondria enter the egg, but do not contribute genetic information to the embryo. Instead, paternal mitochondria are marked with ubiquitin to select them for later destruction inside the embryo. The egg cell contains relatively few mitochondria, but these mitochondria divide to populate the cells of the adult organism. This mode is seen in most organisms, including the majority of animals. However, mitochondria in some species can sometimes be inherited paternally. This is the norm among certain coniferous plants, although not in pine trees and yews. For Mytilids, paternal inheritance only occurs within males of the species. It has been suggested that it occurs at a very low level in humans. Uniparental inheritance leads to little opportunity for genetic recombination between different lineages of mitochondria, although a single mitochondrion can contain 2–10 copies of its DNA. What recombination does take place maintains genetic integrity rather than maintaining diversity. However, there are studies showing evidence of recombination in mitochondrial DNA. It is clear that the enzymes necessary for recombination are present in mammalian cells. Further, evidence suggests that animal mitochondria can undergo recombination. The data are more controversial in humans, although indirect evidence of recombination exists. 
Entities undergoing uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this buildup through a developmental process known as the mtDNA bottleneck. The bottleneck exploits stochastic processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo where different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilization or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of random partitioning of mtDNAs at cell divisions and random turnover of mtDNA molecules within the cell. DNA repair Mitochondria can repair oxidative DNA damage by mechanisms analogous to those occurring in the cell nucleus. The proteins employed in mtDNA repair are encoded by nuclear genes, and are translocated to the mitochondria. The DNA repair pathways in mammalian mitochondria include base excision repair, double-strand break repair, direct reversal and mismatch repair. Alternatively, DNA damage may be bypassed, rather than repaired, by translesion synthesis. Of the several DNA repair process in mitochondria, the base excision repair pathway has been most comprehensively studied. Base excision repair is carried out by a sequence of enzyme-catalyzed steps that include recognition and excision of a damaged DNA base, removal of the resulting abasic site, end processing, gap filling and ligation. A common damage in mtDNA that is repaired by base excision repair is 8-oxoguanine produced by oxidation of guanine. Double-strand breaks can be repaired by homologous recombinational repair in both mammalian mtDNA and plant mtDNA. Double-strand breaks in mtDNA can also be repaired by microhomology-mediated end joining. Although there is evidence for the repair processes of direct reversal and mismatch repair in mtDNA, these processes are not well characterized. Lack of mitochondrial DNA Some organisms have lost mitochondrial DNA altogether. In these cases, genes encoded by the mitochondrial DNA have been lost or transferred to the nucleus. Cryptosporidium have mitochondria that lack any DNA, presumably because all their genes have been lost or transferred. In Cryptosporidium, the mitochondria have an altered ATP generation system that renders the parasite resistant to many classical mitochondrial inhibitors such as cyanide, azide, and atovaquone. Mitochondria that lack their own DNA have been found in a marine parasitic dinoflagellate from the genus Amoebophyra. This microorganism, A. cerati, has functional mitochondria that lack a genome. In related species, the mitochondrial genome still has three genes, but in A. cerati only a single mitochondrial gene — the cytochrome c oxidase I gene (cox1) — is found, and it has migrated to the genome of the nucleus. Dysfunction and disease Mitochondrial diseases Damage and subsequent dysfunction in mitochondria is an important factor in a range of human diseases due to their influence in cell metabolism. Mitochondrial disorders often present as neurological disorders, including autism. 
They can also manifest as myopathy, diabetes, multiple endocrinopathy, and a variety of other systemic disorders. Diseases caused by mutation in the mtDNA include Kearns–Sayre syndrome, MELAS syndrome and Leber's hereditary optic neuropathy. In the vast majority of cases, these diseases are transmitted by a female to her children, as the zygote derives its mitochondria and hence its mtDNA from the ovum. Diseases such as Kearns-Sayre syndrome, Pearson syndrome, and progressive external ophthalmoplegia are thought to be due to large-scale mtDNA rearrangements, whereas other diseases such as MELAS syndrome, Leber's hereditary optic neuropathy, MERRF syndrome, and others are due to point mutations in mtDNA. It has also been reported that drug-tolerant cancer cells have an increased number and size of mitochondria, which suggests an increase in mitochondrial biogenesis. A 2022 study in Nature Nanotechnology has reported that cancer cells can hijack the mitochondria from immune cells via physical tunneling nanotubes. In other diseases, defects in nuclear genes lead to dysfunction of mitochondrial proteins. This is the case in Friedreich's ataxia, hereditary spastic paraplegia, and Wilson's disease. These diseases are inherited in a dominance relationship, as applies to most other genetic diseases. A variety of disorders can be caused by nuclear mutations of oxidative phosphorylation enzymes, such as coenzyme Q10 deficiency and Barth syndrome. Environmental influences may interact with hereditary predispositions and cause mitochondrial disease. For example, there may be a link between pesticide exposure and the later onset of Parkinson's disease. Other pathologies with etiology involving mitochondrial dysfunction include schizophrenia, bipolar disorder, dementia, Alzheimer's disease, Parkinson's disease, epilepsy, stroke, cardiovascular disease, myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), retinitis pigmentosa, and diabetes mellitus. Mitochondria-mediated oxidative stress plays a role in cardiomyopathy in type 2 diabetics. Increased fatty acid delivery to the heart increases fatty acid uptake by cardiomyocytes, resulting in increased fatty acid oxidation in these cells. This process increases the reducing equivalents available to the electron transport chain of the mitochondria, ultimately increasing reactive oxygen species (ROS) production. ROS increase uncoupling proteins (UCPs) and potentiate proton leakage through the adenine nucleotide translocator (ANT), the combination of which uncouples the mitochondria. Uncoupling then increases oxygen consumption by the mitochondria, compounding the increase in fatty acid oxidation. This creates a vicious cycle of uncoupling; furthermore, even though oxygen consumption increases, ATP synthesis does not increase proportionally because the mitochondria are uncoupled. Less ATP availability ultimately results in an energy deficit presenting as reduced cardiac efficiency and contractile dysfunction. To compound the problem, impaired sarcoplasmic reticulum calcium release and reduced mitochondrial reuptake limit peak cytosolic levels of the important signaling ion during muscle contraction. Decreased intra-mitochondrial calcium concentration decreases dehydrogenase activation and ATP synthesis. So in addition to lower ATP synthesis due to fatty acid oxidation, ATP synthesis is impaired by poor calcium signaling as well, causing cardiac problems for diabetics.
Mitochondria also modulate processes such as testicular somatic cell development, spermatogonial stem cell differentiation, luminal acidification, testosterone production in testes, and more. Thus, dysfunction of mitochondria in spermatozoa can be a cause for infertility. In efforts to combat mitochondrial disease, mitochondrial replacement therapy (MRT) has been developed. This form of in vitro fertilization uses donor mitochondria, which avoids the transmission of diseases caused by mutations of mitochondrial DNA. However, this therapy is still being researched and can introduce genetic modification, as well as safety concerns. These diseases are rare but can be extremely debilitating and progressive, thus posing complex ethical questions for public policy. Relationships to aging There may be some leakage of the electrons transferred in the respiratory chain to form reactive oxygen species. This was thought to result in significant oxidative stress in the mitochondria with high mutation rates of mitochondrial DNA. Hypothesized links between aging and oxidative stress are not new and were proposed in 1956; this idea was later refined into the mitochondrial free radical theory of aging. A vicious cycle was thought to occur, as oxidative stress leads to mitochondrial DNA mutations, which can lead to enzymatic abnormalities and further oxidative stress. A number of changes can occur to mitochondria during the aging process. Tissues from elderly humans show a decrease in enzymatic activity of the proteins of the respiratory chain. However, mutated mtDNA can only be found in about 0.2% of very old cells. Large deletions in the mitochondrial genome have been hypothesized to lead to high levels of oxidative stress and neuronal death in Parkinson's disease. Mitochondrial dysfunction has also been shown to occur in amyotrophic lateral sclerosis. Since mitochondria play a pivotal role in ovarian function, by providing the ATP necessary for the development from germinal vesicle to mature oocyte, decreased mitochondrial function can lead to inflammation, resulting in premature ovarian failure and accelerated ovarian aging. The resulting dysfunction is then reflected in quantitative damage (such as mtDNA copy number changes and mtDNA deletions), qualitative damage (such as mutations and strand breaks) and oxidative damage (such as dysfunctional mitochondria due to ROS); these changes are not only relevant in ovarian aging, but also perturb oocyte-cumulus crosstalk in the ovary, are linked to genetic disorders (such as Fragile X) and can interfere with embryo selection. History The first observations of intracellular structures that probably represented mitochondria were published in 1857, by the physiologist Albert von Kölliker. Richard Altmann, in 1890, established them as cell organelles and called them "bioblasts." In 1898, Carl Benda coined the term "mitochondria" from the Greek μίτος (mitos), "thread", and χονδρίον (chondrion), "granule." Leonor Michaelis discovered that Janus green can be used as a supravital stain for mitochondria in 1900. In 1904, Friedrich Meves made the first recorded observation of mitochondria in plants in cells of the white waterlily, Nymphaea alba, and in 1908, along with Claudius Regaud, suggested that they contain proteins and lipids. Benjamin F. Kingsbury, in 1912, first related them with cell respiration, but almost exclusively based on morphological observations. In 1913, Otto Heinrich Warburg linked respiration to particles which he had obtained from extracts of guinea-pig liver and which he called "grana".
Warburg and Heinrich Otto Wieland, who had also postulated a similar particle mechanism, disagreed on the chemical nature of the respiration. It was not until 1925, when David Keilin discovered cytochromes, that the respiratory chain was described. In 1939, experiments using minced muscle cells demonstrated that cellular respiration using one oxygen molecule can form four adenosine triphosphate (ATP) molecules, and in 1941, the concept of the phosphate bonds of ATP being a form of energy in cellular metabolism was developed by Fritz Albert Lipmann. In the following years, the mechanism behind cellular respiration was further elaborated, although its link to the mitochondria was not known. The introduction of tissue fractionation by Albert Claude allowed mitochondria to be isolated from other cell fractions and biochemical analysis to be conducted on them alone. In 1946, he concluded that cytochrome oxidase and other enzymes responsible for the respiratory chain were isolated to the mitochondria. Eugene Kennedy and Albert Lehninger discovered in 1948 that mitochondria are the site of oxidative phosphorylation in eukaryotes. Over time, the fractionation method was further developed, improving the quality of the mitochondria isolated, and other elements of cell respiration were determined to occur in the mitochondria. The first high-resolution electron micrographs appeared in 1952, replacing the Janus Green stains as the preferred way to visualize mitochondria. This led to a more detailed analysis of the structure of the mitochondria, including confirmation that they were surrounded by a membrane. It also showed a second membrane inside the mitochondria that folded up in ridges dividing up the inner chamber and that the size and shape of the mitochondria varied from cell to cell. The popular term "powerhouse of the cell" was coined by Philip Siekevitz in 1957. In 1967, it was discovered that mitochondria contained ribosomes. In 1968, methods were developed for mapping the mitochondrial genes, with the genetic and physical map of yeast mitochondrial DNA completed in 1976.
See also
Anti-mitochondrial antibodies
Mitochondrial metabolic rates
Mitochondrial permeability transition pore
Mitophagy
Nebenkern
Oncocyte
Oncocytoma
Paternal mtDNA transmission
Plastid
Submitochondrial particle
External links
Powering the Cell: Mitochondria – XVIVO Scientific Animation
Mitodb.com – The mitochondrial disease database
Mitochondria Atlas at University of Mainz
Mitochondria Research Portal at mitochondrial.net
Mitochondria: Architecture dictates function at cytochemistry.net
Mitochondria links at University of Alabama
MIP Mitochondrial Physiology Society
3D structures of proteins from inner mitochondrial membrane at University of Michigan
3D structures of proteins associated with outer mitochondrial membrane at University of Michigan
Mitochondrial Protein Partnership at University of Wisconsin
MitoMiner – A mitochondrial proteomics database at MRC Mitochondrial Biology Unit
Mitochondrion – Cell Centered Database
Mitochondrion Reconstructed by Electron Tomography at San Diego State University
Video Clip of Rat-liver Mitochondrion from Cryo-electron Tomography
Identity formation
Identity formation, also called identity development or identity construction, is a complex process in which humans develop a clear and unique view of themselves and of their identity. Self-concept, personality development, and values are all closely related to identity formation. Individuation is also a critical part of identity formation. Continuity and inner unity are hallmarks of healthy identity formation, while a disruption in either could be viewed and labeled as abnormal development; certain situations, like childhood trauma, can contribute to abnormal development. Specific factors also play a role in identity formation, such as race, ethnicity, and spirituality. The concept of personal continuity, or personal identity, refers to an individual posing questions about themselves that challenge their original perception, like "Who am I?" The process defines individuals to others and themselves. Various factors make up a person's actual identity, including a sense of continuity, a sense of uniqueness from others, and a sense of affiliation based on their membership in various groups like family, ethnicity, and occupation. These group identities demonstrate the human need for affiliation or for people to define themselves in the eyes of others and themselves. Identities are formed on many levels. The micro-level is self-definition, relations with people, and issues as seen from a personal or an individual perspective. The meso-level pertains to how identities are viewed, formed, and questioned by immediate communities and/or families. The macro-level concerns the connections among individuals and issues from a national perspective. The global level connects individuals, issues, and groups at a worldwide level. Identity is often described as finite and consisting of separate and distinct parts (e.g., family, cultural, personal, professional). Theories Many theories of development have aspects of identity formation included in them. Three theories directly address the process of identity formation: Erik Erikson's stages of psychosocial development (specifically the Identity versus Role Confusion stage), James Marcia's identity status theory, and Jeffrey Arnett's theories of identity formation in emerging adulthood. Erikson's theory of identity vs. role confusion Erikson's theory is that people experience different crises or conflicts throughout their lives in eight stages. Each stage occurs at a certain point in life and must be successfully resolved to progress to the next stage. The particular stage relevant to identity formation takes place during adolescence: Identity versus Role Confusion. The Identity versus Role Confusion stage involves adolescents trying to figure out who they are in order to form a basic identity that they will build on throughout their life, especially concerning social and occupational identities. They ask themselves the existential questions: "Who am I?" and "What can I be?" They face the complexities of determining their own identity. Erikson stated that this crisis is resolved with identity achievement, the point at which an individual has extensively considered various goals and values, accepting some and rejecting others, and understands who they are as a unique person. When an adolescent attains identity achievement, they are ready to enter the next stage of Erikson's theory, Intimacy versus Isolation, where they will form strong friendships and a sense of companionship with others.
If the Identity versus Role Confusion crisis is not positively resolved, an adolescent will face confusion about future plans, particularly their roles in adulthood. Failure to form one's own identity leads to failure to form a shared identity with others, which can lead to instability in many areas as an adult. The identity formation stage of Erik Erikson's theory of psychosocial development is a crucial stage in life. Marcia's identity status theory Marcia created a structural interview designed to classify adolescents into one of four statuses of identity. The statuses are used to describe and pinpoint the progression of an adolescent's identity formation process. In Marcia's theory, identity is operationally defined as whether an individual has explored various alternatives and made firm commitments to an occupation, religion, sexual orientation, and a set of political values. The four identity statuses in James Marcia's theory are: Identity Diffusion (also known as Role Confusion): The opposite of identity achievement. The individual has not resolved their identity crisis yet by failing to commit to any goals or values and establish a future life direction. In adolescents, this stage is characterized by disorganized thinking, procrastination, and avoidance of issues and actions. Identity Foreclosure: This occurs when teenagers conform to an identity without exploring what suits them best. For instance, teenagers might follow the values and roles of their parents or cultural norms. They might also foreclose on a negative identity, or the direct opposite of their parents' values or cultural norms. Identity Moratorium: This postpones identity achievement by providing temporary shelter. This status provides opportunities for exploration, either in breadth or in-depth. Examples of moratoria common in American society include college or the military. Identity Achievement: This status is attained when the person has solved the identity issues by making commitments to goals, beliefs, and values after an extensive exploration of different areas. Jeffrey Arnett's Theories on Identity Formation in Emerging Adulthood Jeffrey Arnett's theory states that identity formation is most prominent in emerging adulthood, consisting of ages 18–25. Arnett holds that identity formation consists of indulging in different life opportunities and possibilities to eventually make important life decisions. He believes this phase of life includes a broad range of opportunities for identity formation, specifically in three different realms. These three realms of identity exploration are: Love: In emerging adulthood, individuals explore love to find a profound sense of intimacy. While trying to find love, individuals often explore their identity by focusing on questions such as: "Given the kind of person I am, what kind of person do I wish to have as a partner through life?" Work: Work opportunities that people get involved in are now centered around the idea that they are preparing for careers that they might have throughout adulthood. Individuals explore their identity by asking themselves questions such as: "What kind of work am I good at?", "What kind of work would I find satisfying for the long term", or "What are my chance of getting a job in the field that seems to suit me best?" Worldviews: It is common for those in the stage of emerging adulthood to attend college. There they may be exposed to different worldviews, compared to those they were raised in, and become open to altering their previous worldviews. 
Individuals who do not attend college also tend to believe that, as adults, they should decide what their beliefs and values are. Self-concept Self-concept, or self-identity, is the set of beliefs and ideas an individual has about themselves. Self-concept is different from self-consciousness, which is an awareness of one's self. Components of the self-concept include physical, psychological, and social attributes, which can be influenced by the individual's attitudes, habits, beliefs, and ideas; they cannot be condensed into the general concepts of self-image or self-esteem. Multiple types of identity come together within an individual and can be broken down into the following: cultural identity, professional identity, ethnic and national identity, religious identity, gender identity, and disability identity. Cultural identity Cultural identity is the set of ideas an individual adopts from the culture they belong to. Cultural identity relates to but is not synonymous with identity politics. There are modern questions of culture that are transferred into questions of identity. Historical culture also influences individual identity, and as with modern cultural identity, individuals may pick and choose aspects of cultural identity, while rejecting or disowning other associated ideas. Professional identity Professional identity is the identification with a profession, exhibited by an aligning of roles, responsibilities, values, and ethical standards as accepted by the profession. In business, professional identity is the professional self-concept that is founded upon attributes, values, and experiences. A professional identity is developed when there is a philosophy that is manifested in a distinct corporate culture – the corporate personality. A business professional is a person in a profession with certain types of skills that sometimes require formal training or education. Career development encompasses the psychological, sociological, educational, physical, economic, and chance factors that alter a person's career practice across the lifespan. Career development also refers to the practices of a company or organization that enhance someone's career or encourage them to make practical career choices. Training is a form of identity setting, since it not only affects knowledge but also affects a team member's self-concept. On the other hand, knowledge of the position introduces a new path of less effort to the trainee, which prolongs the effects of training and promotes a stronger self-concept. Other forms of identity setting in an organization include business cards, role-specific benefits, and task forwarding. Ethnic and national identity An ethnic identity is an identification with a certain ethnicity, usually on the basis of a presumed common genealogy or ancestry. Recognition by others as a distinct ethnic group is often a contributing factor to developing this identity. Ethnic groups are also often united by common cultural, behavioral, linguistic, ritualistic, or religious traits. Processes that result in the emergence of such identification are summarized as ethnogenesis. Various strands of cultural studies and social theory investigate the question of cultural and ethnic identities. Cultural identity is tied to location, gender, race, history, nationality, sexual orientation, religious beliefs, and ethnicity. National identity is an ethical and philosophical concept where all humans are divided into groups called nations. 
Members of a "nation" share a common identity and usually a common origin, in the sense of ancestry, parentage, or descent. Religious identity A religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, mythology, and faith and mystical experience. Religious identity refers to the personal practices related to communal faith along with rituals and communication stemming from such conviction. This identity formation begins with association with the parents' religious contacts, and individuation requires that the person choose either the same religious identity as their parents or a different one. Gender identity In sociology, gender identity describes the gender with which a person identifies (i.e., whether one perceives oneself to be a man, a woman, or outside of the gender binary), but can also be used to refer to the gender that other people attribute to the individual on the basis of what they know from gender role indications (social behavior, clothing, hairstyle, etc.). Gender identity may be affected by a variety of social structures, including the person's ethnic group, employment status, religion or irreligion, and family. It can also be shaped by biological changes such as puberty. Disability identity Disability identity refers to the particular disabilities that an individual identifies with. This may be something as obvious as a paraplegic person identifying as such, or something less prominent such as a deaf person regarding themselves as part of a local, national, or global Deaf culture community. Disability identity is almost always determined by the particular disabilities that an individual is born with, though it may change later in life if an individual becomes disabled or discovers a previously overlooked disability (particularly applicable to mental disorders). In some rare cases, it may be influenced by exposure to disabled people, as with body integrity dysphoria. Political identity Political identities often form the basis of public claims and mobilization of material and other resources for collective action. One theory that explores how this occurs is social movement theory. According to Charles Tilly, the interpretation of our relationships to others ("stories") creates the rationale and construct of political identity. The capacity for action is constrained by material resources and sometimes by perceptions, which can be manipulated by using communication strategies that support the creation of illusory ties. Interpersonal identity development Interpersonal identity development comes from Marcia's Identity Status Theory, and refers to friendship, dating, gender roles, and recreation as tools of psychosocial maturation for the individual. Social relation can refer to a multitude of social interactions regulated by social norms between two or more people, with each having a social position and performing a social role. In a sociological hierarchy, social relation is more advanced than behavior, action, social behavior, social action, social contact, and social interaction. It forms the basis of concepts like social organization, social structure, social movement, and social system. Interpersonal identity development is composed of three elements: Categorization: Assigning people to categories. Identification: Associating others with certain groups. Comparison: Comparing groups. 
Interpersonal identity development allows an individual to question and examine various personality elements, such as ideas, beliefs, and behaviors. The actions or thoughts of others create social influences that change an individual. Examples of social influence can be seen in socialization and peer pressure, which can affect a person's behavior, thinking about one's self, and subsequent acceptance or rejection of how other people attempt to influence the individual. Interpersonal identity development occurs during exploratory self-analysis and self-evaluation, and ends at various times to establish an easy-to-understand and consolidative sense of self or identity. Interaction During interpersonal identity development, an exchange of propositions and counter-propositions occurs, resulting in a qualitative transformation of the individual. The aim of interpersonal identity development is to resolve the undifferentiated facets of an individual, which are found to be indistinguishable from others. Given this, and with other admissions, the individual is led to a contradiction between the self and others, and forces the withdrawal of the undifferentiated self as truth. To resolve the incongruence, the person integrates or rejects the encountered elements, which results in a new identity. During each of these exchanges, the individual must resolve the exchange before facing future ones. The exchanges are endless since the changing world constantly presents exchanges between individuals and thus allows individuals to redefine themselves constantly. Collective identity Collective identity is a sense of belonging to a group (the collective). If it is strong, an individual who identifies with the group will dedicate their lives to the group over individual identity: they will defend the views of the group and take risks for the group, often with little to no incentive or coercion. Collective identity often forms through a shared sense of interest, affiliation, or adversity. The cohesiveness of the collective identity goes beyond the community, as the collective experiences grief from the loss of a member. Social support Individuals gain a social identity and group identity from their affiliations in various groups, which include: family, ethnicity, education and occupational status, friendship, dating, and religion. Family One of the most important affiliations is that of the family, whether they be biological, extended, or even adoptive families. Each has its own influence on identity through the interaction that takes place between the family members and with the individual. Researchers and theorists state that an individual's identity (more specifically an adolescent's identity) is influenced by the people around them and the environment in which they live. If a family does not have integration, it is likely to cause identity diffusion (one of James Marcia's four identity statuses, where an individual has not made commitments and does not try to make them), and applies to both males and females. Peer relationships Morgan and Korobov performed a study in order to analyze the influence of same-sex friendships in the development of one's identity. This study involved the use of 24 same-sex college student friendship triads, consisting of 12 males and 12 females, with a total of 72 participants. Each triad was required to have known each other for a minimum of six months. A qualitative method was chosen, as it is the most appropriate in assessing the development of identity. 
Semi-structured group interviews took place, where the students were asked to reflect on stories and experiences concerning relationship problems. The results showed five common responses when assessing these relationship problems: joking about the relationship's problems, providing support, offering advice, relating others' experiences to their own similar experiences, and providing encouragement. The study concluded that adolescents actively construct their identities through common themes of conversation between same-sex friendships; in this case, involving relationship issues. The common themes of conversation that close peers engage in appear to help further their identity formation in life. Influences on identity Cognitive influences Cognitive development influences identity formation. When adolescents are able to think abstractly and reason logically, they have an easier time exploring and contemplating possible identities. When an adolescent has advanced cognitive development and maturity, they tend to resolve identity issues more readily than age mates who are less cognitively developed. When identity issues are resolved sooner and more fully, more time and effort can be put into developing that identity. Scholastic influences Adolescents who have a post-secondary education tend to set more concrete goals and form more stable occupational commitments. Going to college or university can influence identity formation in a productive way. The opposite can also be true, where identity influences education and academics. Education's effect on identity can be beneficial; the individual becomes educated on different approaches and paths to take in the process of identity formation. Sociocultural influences Sociocultural influences are those of a broader social and historical context. For example, in the past, adolescents would likely just adopt the job or religious beliefs that were expected of them or that were akin to their parents'. Today, adolescents have more resources to explore identity choices and more options for commitments. This influence is becoming less significant due to the growing acceptance of identity options that were once less accepted. Many identity options from the past are becoming less recognized and less popular today. The changing sociocultural situation is forcing individuals to develop a unique identity based on their own aspirations. Sociocultural influences play a different role in identity formation now than they have in the past. Parenting influences The type of relationship that adolescents have with their parents has a significant role in identity formation. For example, when there is a solid and positive relationship between parents and adolescents, the adolescents are more likely to feel free to explore identity options for themselves. A study found that for boys and girls, identity formation is positively influenced by parental involvement, specifically in the areas of support, social monitoring, and school monitoring. In contrast, when the relationship is not as close and adolescents fear rejection or discontentment from the parent or other guardians, they are more likely to feel less confident in forming a separate identity from their parents. Cyber-socializing and the Internet The Internet is becoming an extension of the expressive dimension of adolescence. 
On the Internet, youth talk about their lives and concerns, design the content that they make available to others, and assess the reactions of others to it in the form of optimized and electronically mediated social approval. When connected, youth speak of their daily routines and lives. With each post, image or video they upload, they can ask themselves who they are and try out profiles that differ from the ones they practice in the "real" world. See also Otium Poverty Workism Self-Schema Social theory Social defeat Lev Vygotsky Social stigma Social identity Self-discovery Peer pressure Cultural identity Erving Goffman Religious Values Consumer culture Moral development Identity performance Wishful Identification George Herbert Mead In-group and out-group Symbolic interactionism Social comparison theory Identification (psychology) Identity crisis (psychology) Genealogical bewilderment Values (Western philosophy) Georg Wilhelm Friedrich Hegel References Sources Further reading A Erdman, A Study of Bisexual Identity Formation. 2006. A Portes, D MacLeod, What Shall I Call Myself? Hispanic Identity Formation in the Second Generation. Ethnic and Racial Studies, 1996. AS Waterman, Identity Formation: Discovery or Creation? The Journal of Early Adolescence, 1984. AS Waterman, Finding Someone to be: Studies on the Role of Intrinsic Motivation in Identity Formation. Identity: An International Journal of Theory and Research, 2004. A Warde, Consumption, Identity-Formation and Uncertainty. Sociology, 1994. A Wendt, Collective Identity Formation and the International State. The American Political Science Review, 1994. CA Willard, 1996 — Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy, Chicago: University of Chicago Press. ; OCLC 260223405 CG Levine, JE Côté, JE Cãotâ, Identity Formation, Agency, and Culture: a social psychological synthesis. 2002. G Robert, C Bate, C Pope, J Gabbay, A le May, Processes and dynamics of identity formation in professional organizations. 2007. HL Minton, GJ McDonald, Homosexual identity formation as a developmental process. MD Berzonsky, Self-construction over the life-span: A process perspective on identity formation. Advances in personal construct theory, 1990. RB Hall, (Reviewer) Uses of the Other: 'The East' in European Identity Formation (by IB Neumann) University of Minnesota Press, Minneapolis, 1999. 248 pages. International Studies Review Vol.3, Issue 1, Pages 101-111 VC Cass, Sexual orientation identity formation: A Western phenomenon. Textbook of homosexuality and mental health, 1996. External links A positive approach to the identity formation of biracial children ". ematusov.soe.udel.edu Identity: An International Journal of Theory and Research. "Identity" is the official journal of the Society for Research on Identity Formation. Social philosophy Conceptions of self Career development Identity (social science)
Ergonomics
Ergonomics, also known as human factors or human factors engineering (HFE), is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Primary goals of human factors engineering are to reduce human error, increase productivity and system availability, and enhance safety, health and comfort with a specific focus on the interaction between the human and equipment. The field is a combination of numerous disciplines, such as psychology, sociology, engineering, biomechanics, industrial design, physiology, anthropometry, interaction design, visual design, user experience, and user interface design. Human factors research employs methods and approaches from these and other knowledge disciplines to study human behavior and generate data relevant to previously stated goals. In studying and sharing learning on the design of equipment, devices, and processes that fit the human body and its cognitive abilities, the two terms, "human factors" and "ergonomics", are essentially synonymous as to their referent and meaning in current literature. The International Ergonomics Association defines ergonomics or human factors as follows: Human factors engineering is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability. Human factors and ergonomics are concerned with the "fit" between the user, equipment, and environment or "fitting a job to a person" or "fitting the task to the man". It accounts for the user's capabilities and limitations in seeking to ensure that tasks, functions, information, and the environment suit that user. To assess the fit between a person and the used technology, human factors specialists or ergonomists consider the job (activity) being done and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task), and the information used (how it is presented, accessed, and changed). Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology. Etymology The term ergonomics (from the Greek ἔργον, meaning "work", and νόμος, meaning "natural law") first entered the modern lexicon when Polish scientist Wojciech Jastrzębowski used the word in his 1857 article (The Outline of Ergonomics; i.e. Science of Work, Based on the Truths Taken from the Natural Science). The French scholar Jean-Gustave Courcelle-Seneuil, apparently without knowledge of Jastrzębowski's article, used the word with a slightly different meaning in 1858. The introduction of the term to the English lexicon is widely attributed to British psychologist Hywel Murrell, at the 1949 meeting at the UK's Admiralty, which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II. The expression human factors is a predominantly North American term which has been adopted to emphasize the application of the same methods to non-work-related situations. 
A "human factor" is a physical or cognitive property of an individual or social behavior specific to humans that may influence the functioning of technological systems. The terms "human factors" and "ergonomics" are essentially synonymous. Domains of specialization According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization. These comprise three main fields of research: physical, cognitive, and organizational ergonomics. There are many specializations within these broad categories. Specializations in the field of physical ergonomics may include visual ergonomics. Specializations within the field of cognitive ergonomics may include usability, human–computer interaction, and user experience engineering. Some specializations may cut across these domains: Environmental ergonomics is concerned with human interaction with the environment as characterized by climate, temperature, pressure, vibration, light. The emerging field of human factors in highway safety uses human factor principles to understand the actions and capabilities of road users – car and truck drivers, pedestrians, cyclists, etc. – and use this knowledge to design roads and streets to reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them to make the appropriate decision. New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors engineering professional who specializes in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity. Physical ergonomics Physical ergonomics is concerned with human anatomy, and some of the anthropometric, physiological, and biomechanical characteristics as they relate to physical activity. Physical ergonomic principles have been widely used in the design of both consumer and industrial products for optimizing performance and to preventing / treating work-related disorders by reducing the mechanisms behind mechanically induced acute and chronic musculoskeletal injuries / disorders. Risk factors such as localized mechanical pressures, force and posture in a sedentary office environment lead to injuries attributed to an occupational environment. Physical ergonomics is important to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain. One of the most prevalent types of work-related injuries is musculoskeletal disorder. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity and work disability, but their initial diagnosis is difficult because they are mainly based on complaints of pain and other symptoms. Every year, 1.8 million U.S. workers experience WRMDs and nearly 600,000 of the injuries are serious enough to cause workers to miss work. 
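Physical ergonomics assessments of manual lifting are often made quantitative with tools such as the revised NIOSH lifting equation, which this article does not describe in detail; the sketch below is a simplified, illustrative version of that equation, with the frequency and coupling multipliers fixed at 1.0 rather than looked up from the published tables, and with all function and variable names chosen only for this example.

```python
# Simplified sketch of the revised NIOSH lifting equation (metric form).
# RWL = LC * HM * VM * DM * AM * FM * CM, with distances in cm and angles
# in degrees. FM and CM are set to 1.0 here for brevity; real assessments
# take them from the published frequency and coupling tables.

def recommended_weight_limit(h_cm: float, v_cm: float, d_cm: float,
                             a_deg: float, fm: float = 1.0,
                             cm: float = 1.0) -> float:
    lc = 23.0                                    # load constant, kg
    hm = min(1.0, 25.0 / max(h_cm, 25.0))        # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
    dm = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier
    am = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

if __name__ == "__main__":
    # Example: load held 30 cm out, starting 70 cm high, lifted 40 cm,
    # no torso twisting.
    rwl = recommended_weight_limit(h_cm=30, v_cm=70, d_cm=40, a_deg=0)
    print(f"Recommended weight limit ≈ {rwl:.1f} kg")
    # A lifting index (actual load / RWL) above 1 is commonly read as a
    # flag for elevated musculoskeletal risk.
```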
Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These types of jobs are often those involving activities such as repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or use of vibrating equipment. The Occupational Safety and Health Administration (OSHA) has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity and decrease employee turnover. Mitigation solutions can include both short term and long-term solutions. Short and long-term solutions involve awareness training, positioning of the body, furniture and equipment and ergonomic exercises. Sit-stand stations and computer accessories that provide soft surfaces for resting the palm as well as split keyboards are recommended. Additionally, resources within the HR department can be allocated to provide assessments to employees to ensure the above criteria are met. Therefore, it is important to gather data to identify jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses. Innovative workstations that are being tested include sit-stand desks, height adjustable desk, treadmill desks, pedal devices and cycle ergometers. In multiple studies these new workstations resulted in decreased waist circumference and improved psychological well-being. However a significant number of additional studies have seen no marked improvement in health outcomes. With the emergence of collaborative robots and smart systems in manufacturing environments, the artificial agents can be used to improve physical ergonomics of human co-workers. For example, during human–robot collaboration the robot can use biomechanical models of the human co-worker in order to adjust the working configuration and account for various ergonomic metrics, such as human posture, joint torques, arm manipulability and muscle fatigue. The ergonomic suitability of the shared workspace with respect to these metrics can also be displayed to the human with workspace maps through visual interfaces. Cognitive ergonomics Cognitive ergonomics is concerned with mental processes, such as perception, emotion, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. (Relevant topics include mental workload, decision-making, skilled performance, human reliability, work stress and training as these may relate to human–system and human–computer interaction design.) Epidemiological studies show a correlation between the time one spends sedentary and their cognitive function such as lowered mood and depression. Organizational ergonomics and safety culture Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes. Relevant topics include human communication successes or failures in adaptation to other system elements, crew resource management, work design, work systems, design of working times, teamwork, participatory ergonomics, community ergonomics, cooperative work, new work programs, virtual organizations, remote work, and quality management. Safety culture within an organization of engineers and technicians has been linked to engineering safety with cultural dimensions including power distance and ambiguity tolerance. 
Low power distance has been shown to be more conducive to a safety culture. Organizations with cultures of concealment or lack of empathy have been shown to have poor safety culture. History Ancient societies Some have stated that human ergonomics began with Australopithecus prometheus (also known as "little foot"), a primate who created handheld tools out of different types of stone, clearly distinguishing between tools based on their ability to perform designated tasks. The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged. The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles. Industrial societies Bernardino Ramazzini was one of the first people to systematically study the illness that resulted from work earning himself the nickname "father of occupational medicine". In the late 1600s and early 1700s Ramazzini visited many worksites where he documented the movements of laborers and spoke to them about their ailments. He then published "De Morbis Artificum Diatriba" (Latin for Diseases of Workers) which detailed occupations, common illnesses, remedies. In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour. However, this approach was rejected by Russian researchers who focused on the well-being of the worker. At the First Conference on Scientific Organization of Labour (1921) Vladimir Bekhterev and Vladimir Nikolayevich Myasishchev criticised Taylorism. Bekhterev argued that "The ultimate ideal of the labour problem is not in it [Taylorism], but is in such organisation of the labour process that would yield a maximum of efficiency coupled with a minimum of health hazards, absence of fatigue and a guarantee of the sound health and all round personal development of the working people." Myasishchev rejected Frederick Taylor's proposal to turn man into a machine. Dull monotonous work was a temporary necessity until a corresponding machine can be developed. He also went on to suggest a new discipline of "ergology" to study work as an integral part of the re-organisation of work. 
The concept was taken up by Myasishchev's mentor, Bekhterev, in his final report on the conference, merely changing the name to "ergonology". Aviation Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular, the design of controls and displays, and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods. Studies on driver behavior started gaining momentum during this period, as Henry Ford started providing millions of Americans with automobiles. Another major development during this period was the performance of aeromedical research. By the end of World War I, two aeronautical labs were established, one at Brooks Air Force Base, Texas and the other at Wright-Patterson Air Force Base outside of Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated successful pilots from unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator. The trend continued and more sophisticated simulators and test equipment were developed. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne Effect, which suggested that motivational factors could significantly influence human performance. World War II marked the development of new and complex machines and weaponry, and these made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs. Now the design of equipment had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness and hand-eye coordination of the machine's operator became key in the success or failure of a task. There was substantial research conducted to determine the human capabilities and limitations that had to be taken into account. A lot of this research took off where the aeromedical research between the wars had left off. An example of this is the study done by Fitts and Jones (1947), who studied the most effective configuration of control knobs to be used in aircraft cockpits. Much of this research carried over to other equipment with the aim of making the controls and displays easier for the operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft flown by the best-trained pilots still crashed. In 1943 Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits. After the war, the Army Air Force published 19 volumes summarizing what had been established from research during the war. In the decades since World War II, human factors has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of human factors. "As the thinking progressed, a new concept developed—that it was possible to view an organization such as an air-defense, man-machine system as a single organism and that it was possible to study the behavior of such an organism. It was the climate for a breakthrough." 
In the initial 20 years after World War II, most activities were done by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small. Cold War The beginning of the Cold War led to a major expansion of defense-supported research laboratories. Also, many labs established during WWII started expanding. Most of the research following the war was military-sponsored. Large sums of money were granted to universities to conduct research. The scope of the research also broadened from small equipment to entire workstations and systems. Concurrently, many opportunities started opening up in civilian industry. The focus shifted from research to participation through advice to engineers in the design of equipment. After 1965, the period saw a maturation of the discipline. The field has expanded with the development of the computer and computer applications. The Space Age created new human factors issues such as weightlessness and extreme g-forces. Tolerance of the harsh environment of space and its effects on the mind and body were widely studied. Information age The dawn of the Information Age has resulted in the related field of human–computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies and industries including human factors in their product design. Using advanced technologies in human kinetics, body-mapping, movement patterns and heat zones, companies are able to manufacture purpose-specific garments, including full body suits, jerseys, shorts, shoes, and even underwear. Organizations Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Chartered Institute of Ergonomics and Human Factors, formerly known as the Institute of Ergonomics and Human Factors and before that, The Ergonomics Society. The Human Factors and Ergonomics Society (HFES) was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds. The Association of Canadian Ergonomists - l'Association canadienne d'ergonomie (ACE) was founded in 1968. It was originally named the Human Factors Association of Canada (HFAC), with ACE (in French) added in 1984, and the consistent, bilingual title adopted in 1999. According to its 2017 mission statement, ACE unites and advances the knowledge and skills of ergonomics and human factors practitioners to optimise human and organisational well-being. The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association has 46 federated societies and 2 affiliated societies. Human Factors Transforming Healthcare (HFTH) is an international network of HF practitioners who are embedded within hospitals and health systems. The goal of the network is to provide resources for human factors practitioners and healthcare organizations looking to successfully apply HF principles to improve patient care and provider performance. The network also serves as a collaborative platform for human factors practitioners, students, faculty, industry partners, and those curious about human factors in healthcare. 
Related organizations The Institute of Occupational Medicine (IOM) was founded by the coal industry in 1969. From the outset the IOM employed an ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress, and the ergonomics of personal protective equipment (PPE). As for many in occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest for IOM ergonomists. The International Society of Automotive Engineers (SAE) is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established human factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design. This society regularly holds conferences which address topics spanning all aspects of human factors and ergonomics. Practitioners Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists (from the various subfields of industrial and organizational psychology, engineering psychology, cognitive psychology, perceptual psychology, applied psychology, and experimental psychology) and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design or health sciences, and usually a master's degree or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and PhD degrees in Human Factors Engineering are available from several universities worldwide. Sedentary workplace Contemporary offices did not exist until the 1830s, with Wojciech Jastrzębowski's seminal 1857 work on ergonomics following and the first published study of posture appearing in 1955. As the American workforce began to shift towards sedentary employment, the prevalence of work-related musculoskeletal disorders and other health issues associated with desk work began to rise. In 1900, 41% of the US workforce was employed in agriculture, but by 2000 that had dropped to 1.9%. This coincides with growth in desk-based employment (25% of all employment in 2000) and the surveillance of non-fatal workplace injuries by OSHA and the Bureau of Labor Statistics beginning in 1971. Sedentary behavior is typically defined as waking activity with an energy expenditure of roughly 1.5 metabolic equivalents (METs) or less that occurs in a sitting or reclining position. Adults older than 50 years report spending more time sedentary, and for adults older than 65 years this is often 80% of their awake time. Multiple studies show a dose-response relationship between sedentary time and all-cause mortality, with an increase of 3% mortality per additional sedentary hour each day. High quantities of sedentary time without breaks are correlated with higher risk of chronic disease, obesity, cardiovascular disease, type 2 diabetes, and cancer. Currently, a large proportion of the overall workforce is employed in low-physical-activity occupations. 
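To give a feel for the dose-response figure quoted above, the minimal sketch below treats the reported 3% increase in all-cause mortality per additional daily sedentary hour as a multiplicative relative-risk factor; the underlying studies may model the relationship differently, and the 4-hour baseline and function name are assumptions made only for this illustration.

```python
# Rough illustration only: treats the reported "3% per additional sedentary
# hour per day" as a multiplicative relative-risk factor. The underlying
# epidemiological models may differ; the 4 h/day baseline is an assumption.

def relative_mortality_risk(sedentary_hours_per_day: float,
                            baseline_hours: float = 4.0,
                            increase_per_hour: float = 0.03) -> float:
    """Relative risk versus someone sedentary for `baseline_hours` per day."""
    extra_hours = max(0.0, sedentary_hours_per_day - baseline_hours)
    return (1.0 + increase_per_hour) ** extra_hours

if __name__ == "__main__":
    for hours in (4, 6, 8, 10, 12):
        rr = relative_mortality_risk(hours)
        print(f"{hours:>2} h/day sedentary -> relative risk ≈ {rr:.2f}")
    # e.g. 8 h/day versus the 4 h/day baseline: 1.03**4 ≈ 1.13 (~13% higher)
```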
Sedentary behavior, such as spending long periods of time in seated positions, poses a serious threat of injury and additional health risks. Unfortunately, even though some workplaces make an effort to provide a well-designed environment for sedentary employees, any employee who sits for large amounts of time will likely experience discomfort. There are existing conditions that predispose both individuals and populations to sedentary lifestyles, including socioeconomic determinants, education levels, occupation, living environment, age (as mentioned above), and more. A study published in the Iranian Journal of Public Health examined socioeconomic factors and sedentary lifestyle effects for individuals in a working community. The study concluded that individuals who reported living in low-income environments were more inclined toward sedentary behavior than those who reported being of high socioeconomic status. Individuals who achieve less education are also considered a high-risk group for sedentary lifestyles; however, each community is different and has different resources available that may vary this risk. Oftentimes, larger worksites are associated with increased occupational sitting. Those who work in environments that are classified as business and office jobs are typically more exposed to sitting and sedentary behavior while in the workplace. Additionally, workers in occupations that are full-time and have schedule flexibility are also included in that demographic and are more likely to sit often throughout their workday. Policy implementation Obstacles to providing better ergonomic features for sedentary employees include cost, time, and effort for both companies and employees. The evidence above helps establish the importance of ergonomics in a sedentary workplace, yet what is missing from this problem is enforcement and policy implementation. As the modern workplace becomes more and more technology-based, more jobs are becoming primarily seated, leading to a need to prevent chronic injuries and pain. This is becoming easier with the growing body of research showing that ergonomic tools save companies money by limiting the number of days missed from work and workers' compensation cases. The way to ensure that corporations prioritize these health outcomes for their employees is through policy and implementation. In the United States, there are no nationwide policies currently in place; however, a handful of big companies and states have taken on cultural policies to ensure the safety of all workers. For example, the State of Nevada's risk management department has established a set of ground rules covering both agency responsibilities and employee responsibilities. The agency responsibilities include evaluating workstations, using risk management resources when necessary, and keeping OSHA records. Methods Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs. Some of the more common human factors methods are listed below: Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace. 
The process is best used early in the design process. Focus groups are another form of qualitative research in which one individual will facilitate discussion and elicit opinions about the technology or process under investigation. This can be on a one-to-one interview basis, or in a group session. Can be used to gain a large quantity of deep qualitative data, though due to the small sample size, can be subject to a higher degree of individual bias. Can be used at any point in the design process, as it is largely dependent on the exact questions to be pursued, and the structure of the group. Can be extremely costly. Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design, to correct problems as they emerge. As prototypes emerge from the design process, these are subjected to other forms of analysis as outlined in this article, and the results are then taken and incorporated into the new design. Trends among users are analyzed, and products redesigned. This can become a costly process, and needs to be done as soon as possible in the design process before designs become too concrete. Meta-analysis: A supplementary technique used to examine a wide body of already existing data or literature to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed to discern a collective trend from individual variables. Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. The technique is also known as "Co-Discovery" as participants tend to feed off of each other's comments to generate a richer set of observations than is often possible with the participants separately. This is observed by the researcher, and can be used to discover usability difficulties. This process is usually recorded. Surveys and questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have an advantage in that they can be administered to a large group of people for relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly, and are, by definition, subjective. Those who actually respond are in effect self-selecting as well, widening the gap between the sample and the population further. Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so can vary in cost and time involvement. It is a qualitative and observational process. Best used early in the design process. Human performance modeling: A method of quantifying human behavior, cognition, and processes; a tool used by human factors researchers and practitioners for both the analysis of human function and for the development of systems designed for optimal user experience and interaction. Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology, while continuously verbalizing their thoughts so that a researcher can gain insights as to the users' analytical process. 
Can be useful for finding design flaws that do not affect task performance, but may have a negative cognitive effect on the user. Also useful for utilizing experts to better understand procedural knowledge of the task in question. Less expensive than focus groups, but tends to be more specific and subjective. User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them, creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users, and the characteristics that they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to make clear design decisions from. This process is, however, usually quite inexpensive, and commonly used. "Wizard of Oz": This is a comparatively uncommon technique but has seen some use in mobile devices. Based upon the Wizard of Oz experiment, this technique involves an operator who remotely controls the operation of a device to imitate the response of an actual computer program. It has the advantage of producing a highly changeable set of reactions, but can be quite costly and difficult to undertake. Methods analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so makes it possible to see exactly where repetitive or straining tasks occur. Time studies determine the time required for a worker to complete each task. Time studies are often used to analyze cyclical jobs. They are considered "event based" studies because time measurements are triggered by the occurrence of predetermined events. Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task; a brief numerical sketch of this estimator appears below. It provides insight into how often workers are performing tasks which might cause strain on their bodies. Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is Methods-Time Measurement (MTM). Other common work measurement systems include MODAPTS and MOST. Industry-specific applications based on PTS include Seweasy, MODAPTS, and GSD. Cognitive walkthrough: This is a usability inspection method in which evaluators apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated. Kansei method: This is a method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, this method can translate employees' responses to changes to a work system into design specifications. High Integration of Technology, Organization, and People: This is a manual procedure done step-by-step to apply technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts. Top modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process. 
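As a brief illustration of the work sampling method described above (not taken from the source; the observation counts and function name are invented for the example), the share of time spent on a task can be estimated from random-moment observations together with a simple normal-approximation confidence interval:

```python
import math

# Illustrative work-sampling estimate: n random-moment observations of a
# worker, k of which found the worker performing the task of interest.
# The numbers below are made up for the example.

def work_sampling_estimate(k: int, n: int, z: float = 1.96):
    """Return (estimated proportion, half-width of an approx. 95% CI)."""
    p = k / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p, half_width

if __name__ == "__main__":
    observed_on_task, total_observations = 46, 200
    p, hw = work_sampling_estimate(observed_on_task, total_observations)
    print(f"Estimated share of time on task: {p:.1%} ± {hw:.1%}")
    # Over an 8-hour shift, that corresponds to roughly p * 8 hours per day.
```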
Computer-integrated Manufacturing, Organization, and People System Design: This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system. Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another. Systems analysis tool: This is a method to conduct systematic trade-off evaluations of work-system intervention alternatives. Macroergonomic analysis of structure: This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects. Macroergonomic analysis and design: This method assesses work-system processes by using a ten-step process. Virtual manufacturing and response surface methodology: This method uses computerized tools and statistical analysis for workstation design. Weaknesses Problems related to measures of usability include the fact that measures of learning and retention of how to use an interface are rarely employed and some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation. Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider. The limitations include: Usually take more time and resources than other methods Very high effort in planning, recruiting, and executing compared with other methods Much longer study periods and therefore requires much goodwill among the participants Studies are longitudinal in nature, therefore, attrition can become a problem. See also ISO 9241 Occupational Health Science (journal) Wojciech Jastrzębowski (1799–1882), a Polish pioneer of ergonomics References Further reading Books Thomas J. Armstrong (2008), Chapter 10: Allowances, Localized Fatigue, Musculoskeletal Disorders, and Biomechanics (not yet published) Berlin C. & Adams C. & 2017. Production Ergonomics: Designing Work Systems to Support Optimal Human Performance. London: Ubiquity Press. . Jan Dul and Bernard Weedmaster, Ergonomics for Beginners. A classic introduction on ergonomics—Original title: Vademecum Ergonomie (Dutch)—published and updated since the 1960s. Valerie J Gawron (2000), Human Performance Measures Handbook Lawrence Erlbaum Associates—A useful summary of human performance measures. Liu, Y (2007). IOE 333. Course pack. Industrial and Operations Engineering 333 (Introduction to Ergonomics), University of Michigan, Ann Arbor, MI. Winter 2007 Donald Norman, The Design of Everyday Things—An entertaining user-centered critique of nearly every gadget out there (at the time it was published) Peter Opsvik (2009), "Re-Thinking Sitting". Interesting insights on the history of the chair and how we sit from an ergonomic pioneer Computer Ergonomics & Work Related Upper Limb Disorder Prevention- Making The Business Case For Pro-active Ergonomics (Rooney et al., 2008) Stephen Pheasant, Bodyspace—A classic exploration of ergonomics Alvin R. Tilley & Henry Dreyfuss Associates (1993, 2002), The Measure of Man & Woman: Human Factors in Design A human factors design manual. Kim Vicente, The Human Factor Full of examples and statistics illustrating the gap between existing technology and the human mind, with suggestions to narrow it Wickens and Hollands (2000). Engineering Psychology and Human Performance. 
Discusses memory, attention, decision making, stress and human error, among other topics Wilson & Corlett, Evaluation of Human Work A practical ergonomics methodology. Warning: very technical and not a suitable 'intro' to ergonomics Zamprotta, Luigi, La qualité comme philosophie de la production.Interaction avec l'ergonomie et perspectives futures, thèse de Maîtrise ès Sciences Appliquées – Informatique, Institut d'Etudes Supérieures L'Avenir, Brussels, année universitaire 1992–93, TIU Press, Independence, Missouri (USA), 1994, Peer-reviewed Journals (Numbers between brackets are the ISI impact factor, followed by the date) Behavior & Information Technology (0.915, 2008) Ergonomics (0.747, 2001–2003) Ergonomics in Design (-) Applied Ergonomics (1.713, 2015) Human Factors (1.37, 2015) International Journal of Industrial Ergonomics (0.395, 2001–2003) Human Factors and Ergonomics in Manufacturing (0.311, 2001–2003) Travail Humain (0.260, 2001–2003) Theoretical Issues in Ergonomics Science (-) International Journal of Human Factors and Ergonomics (-) International Journal of Occupational Safety and Ergonomics (-) External links Directory of Design Support Methods Directory of Design Support Methods Engineering Data Compendium of Human Perception and Performance Index of Non-Government Standards on Human Engineering... Index of Government Standards on Human Engineering... NIOSH Topic Page on Ergonomics and Musculoskeletal Disorders Office Ergonomics Information from European Agency for Safety and Health at Work Human Factors Standards & Handbooks from the University of Maryland Department of Mechanical Engineering Human Factors and Ergonomics Resources Human Factors Engineering Collection, The University of Alabama in Huntsville Archives and Special Collections Industrial engineering Occupational safety and health Posture
Habitat destruction
Habitat destruction (also termed habitat loss and habitat reduction) occurs when a natural habitat is no longer able to support its native species. The organisms once living there have either moved elsewhere or died, leading to a decrease in biodiversity and species numbers. Habitat destruction is in fact the leading cause of biodiversity loss and species extinction worldwide. Humans contribute to habitat destruction through the use of natural resources, agriculture, industrial production and urbanization (urban sprawl). Other activities include mining, logging and trawling. Environmental factors can contribute to habitat destruction more indirectly. Geological processes, climate change, introduction of invasive species, ecosystem nutrient depletion, and water and noise pollution are some examples. Loss of habitat can be preceded by an initial habitat fragmentation. Fragmentation and loss of habitat have become one of the most important topics of research in ecology as they are major threats to the survival of endangered species. Observations By region Biodiversity hotspots are chiefly tropical regions that feature high concentrations of endemic species and, when all hotspots are combined, may contain over half of the world's terrestrial species. These hotspots are suffering from habitat loss and destruction. Most of the natural habitat on islands and in areas of high human population density has already been destroyed (WRI, 2003). Islands suffering extreme habitat destruction include New Zealand, Madagascar, the Philippines, and Japan. South and East Asia—especially China, India, Malaysia, Indonesia, and Japan—and many areas in West Africa have extremely dense human populations that allow little room for natural habitat. Marine areas close to highly populated coastal cities also face degradation of their coral reefs or other marine habitat. Forest City, a township in southern Malaysia built on Environmentally Sensitive Area (ESA) Rank 1 wetland, is one such example, with irreversible reclamation proceeding prior to environmental impact assessments and approvals. Other such areas include the eastern coasts of Asia and Africa, northern coasts of South America, and the Caribbean Sea and its associated islands. Regions of unsustainable agriculture or unstable governments, which may go hand-in-hand, typically experience high rates of habitat destruction. South Asia, Central America, Sub-Saharan Africa, and the Amazonian tropical rainforest areas of South America are the main regions with unsustainable agricultural practices and/or government mismanagement. Areas of high agricultural output tend to have the highest extent of habitat destruction. In the U.S., less than 25% of native vegetation remains in many parts of the East and Midwest. Only 15% of land area remains unmodified by human activities in all of Europe. Currently, environmental changes around the world are shifting which geographical areas are suitable for plant growth. The ability of plants to migrate to newly suitable areas will therefore strongly influence the distribution of plant diversity. However, how habitat loss and fragmentation affect rates of plant migration is still not well understood. By type of ecosystem Tropical rainforests have received most of the attention concerning the destruction of habitat.
From the approximately 16 million square kilometers of tropical rainforest habitat that originally existed worldwide, less than 9 million square kilometers remain today. The current rate of deforestation is 160,000 square kilometers per year, which equates to a loss of approximately 1% of the original forest habitat each year. Other forest ecosystems have suffered as much destruction as tropical rainforests, or more. Deforestation for farming and logging has severely disturbed at least 94% of temperate broadleaf forests; many old growth forest stands have lost more than 98% of their previous area because of human activities. Tropical deciduous dry forests are easier to clear and burn and are more suitable for agriculture and cattle ranching than tropical rainforests; consequently, less than 0.1% of dry forests in Central America's Pacific Coast and less than 8% in Madagascar remain from their original extents. Plains and desert areas have been degraded to a lesser extent. Only 10–20% of the world's drylands, which include temperate grasslands, savannas, shrublands, scrub, and deciduous forests, have been somewhat degraded. But included in that 10–20% of land are the approximately 9 million square kilometers of seasonally dry lands that humans have converted to deserts through the process of desertification. The tallgrass prairies of North America, on the other hand, have less than 3% of natural habitat remaining that has not been converted to farmland. Wetlands and marine areas have endured high levels of habitat destruction. More than 50% of wetlands in the U.S. have been destroyed in just the last 200 years. Between 60% and 70% of European wetlands have been completely destroyed. In the United Kingdom, there has been an increase in demand for coastal housing and tourism which has caused a decline in marine habitats over the last 60 years. Rising sea levels and temperatures have caused soil erosion, coastal flooding, and loss of quality in the UK marine ecosystem. About one-fifth (20%) of marine coastal areas have been highly modified by humans. One-fifth of coral reefs have also been destroyed, and another fifth has been severely degraded by overfishing, pollution, and invasive species; 90% of the Philippines' coral reefs alone have been destroyed. Finally, over 35% of the mangrove ecosystems worldwide have been destroyed. Natural causes Habitat destruction through natural processes such as volcanism, fire, and climate change is well documented in the fossil record. One study shows that habitat fragmentation of tropical rainforests in Euramerica 300 million years ago led to a great loss of amphibian diversity, but simultaneously the drier climate spurred on a burst of diversity among reptiles. Causes due to human activities Habitat destruction caused by humans includes the conversion of land from forests and other natural cover to arable land, urban sprawl, infrastructure development, and other anthropogenic changes to the characteristics of land. Habitat degradation, fragmentation, and pollution are aspects of habitat destruction caused by humans that do not necessarily involve the overt destruction of habitat, yet result in habitat collapse. Desertification, deforestation, and coral reef degradation are specific types of habitat destruction for those areas (deserts, forests, coral reefs). Overarching drivers The forces that cause humans to destroy habitat are known as drivers of habitat destruction.
Demographic, economic, sociopolitical, scientific and technological, and cultural drivers all contribute to habitat destruction. Demographic drivers include the expanding human population; rate of population increase over time; spatial distribution of people in a given area (urban versus rural), ecosystem type, and country; and the combined effects of poverty, age, family planning, gender, and education status of people in certain areas. Most of the exponential human population growth worldwide is occurring in or close to biodiversity hotspots. This may explain why human population density accounts for 87.9% of the variation in numbers of threatened species across 114 countries, providing strong evidence that people play the largest role in decreasing biodiversity. The boom in human population and migration of people into such species-rich regions are making conservation efforts not only more urgent but also more likely to conflict with local human interests. The high local population density in such areas is directly correlated with the poverty status of the local people, most of whom lack education and access to family planning. According to the Geist and Lambin (2002) study, the underlying driving forces were prioritized as follows (with the percentage of the 152 cases in which each factor played a significant role): economic factors (81%), institutional or policy factors (78%), technological factors (70%), cultural or socio-political factors (66%), and demographic factors (61%). The main economic factors included commercialization and growth of timber markets (68%), which are driven by national and international demands; urban industrial growth (38%); low domestic costs for land, labor, fuel, and timber (32%); and increases in product prices mainly for cash crops (25%). Institutional and policy factors included formal pro-deforestation policies on land development (40%), economic growth including colonization and infrastructure improvement (34%), and subsidies for land-based activities (26%); property rights and land-tenure insecurity (44%); and policy failures such as corruption, lawlessness, or mismanagement (42%). The main technological factor was the poor application of technology in the wood industry (45%), which leads to wasteful logging practices. Within the broad category of cultural and sociopolitical factors are public attitudes and values (63%), individual/household behavior (53%), public unconcern toward forest environments (43%), missing basic values (36%), and unconcern by individuals (32%). Demographic factors were the in-migration of colonizing settlers into sparsely populated forest areas (38%) and growing population density—a result of the first factor—in those areas (25%). Forest conversion to agriculture Geist and Lambin (2002) assessed 152 case studies of net losses of tropical forest cover to determine any patterns in the proximate and underlying causes of tropical deforestation. Their results, yielded as percentages of the case studies in which each parameter was a significant factor, provide a quantitative prioritization of which proximate and underlying causes were the most significant. The proximate causes were clustered into broad categories of agricultural expansion (96%), infrastructure expansion (72%), and wood extraction (67%). Therefore, according to this study, forest conversion to agriculture is the main land use change responsible for tropical deforestation.
The specific categories reveal further insight into the specific causes of tropical deforestation: transport extension (64%), commercial wood extraction (52%), permanent cultivation (48%), cattle ranching (46%), shifting (slash and burn) cultivation (41%), subsistence agriculture (40%), and fuel wood extraction for domestic use (28%). One result is that shifting cultivation is not the primary cause of deforestation in all world regions, while transport extension (including the construction of new roads) is the largest single proximate factor responsible for deforestation. Habitat size and numbers of species are systematically related. Physically larger species and those living at lower latitudes or in forests or oceans are more sensitive to reduction in habitat area. Conversion to "trivial" standardized ecosystems (e.g., monoculture following deforestation) effectively destroys habitat for the more diverse species. Even the simplest forms of agriculture affect diversity – through clearing or draining the land, discouraging weeds and pests, and encouraging just a limited set of domesticated plant and animal species. There are also feedbacks and interactions among the proximate and underlying causes of deforestation that can amplify the process. Road construction has the largest feedback effect, because it interacts with—and leads to—the establishment of new settlements and more people, which causes a growth in wood (logging) and food markets. Growth in these markets, in turn, drives the commercialization of agriculture and logging industries. When these industries become commercialized, they must become more efficient by utilizing larger or more modern machinery that often has a worse effect on the habitat than traditional farming and logging methods. Either way, more land is cleared more rapidly for commercial markets. This common feedback example demonstrates just how closely related the proximate and underlying causes are to each other. Climate change Climate change contributes to destruction of some habitats, endangering various species. For example: Climate change causes rising sea levels, which will threaten natural habitats and species globally. Melting sea ice destroys habitat for some species. For example, the decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). One well-known example of a species affected is the polar bear, whose habitat in the Arctic is threatened. Algae that grow on the underside of sea ice can also be affected. Warm-water coral reefs are very sensitive to global warming and ocean acidification. Coral reefs provide a habitat for thousands of species. They provide ecosystem services such as coastal protection and food. But 70–90% of today's warm-water coral reefs will disappear even if warming is kept to 1.5 °C. For example, Caribbean coral reefs, which are biodiversity hotspots, will be lost within the century if global warming continues at the current rate. Habitat fragmentation Impacts On animals and plants When a habitat is destroyed, the carrying capacity for indigenous plants, animals, and other organisms is reduced so that populations decline, sometimes up to the level of extinction. Habitat loss is perhaps the greatest threat to organisms and biodiversity. Temple (1986) found that 82% of endangered bird species were significantly threatened by habitat loss.
Most amphibian species are also threatened by native habitat loss, and some species are now only breeding in modified habitat. Endemic organisms with limited ranges are most affected by habitat destruction, mainly because these organisms are not found anywhere else in the world, and thus have less chance of recovering. Many endemic organisms have very specific requirements for their survival that can be met only within a certain ecosystem, so the destruction of that ecosystem can result in their extinction. Extinction may also take place very long after the destruction of habitat, a phenomenon known as extinction debt. Habitat destruction can also decrease the range of certain organism populations. This can result in the reduction of genetic diversity and perhaps the production of infertile offspring, as these organisms would have a higher chance of mating with related individuals within their own population, or with different species. One of the most famous examples is the impact upon China's giant panda, once found in many areas of Sichuan. Now it is only found in fragmented and isolated regions in the southwest of the country, as a result of widespread deforestation in the 20th century. As habitat destruction of an area occurs, the species diversity shifts from a combination of habitat generalists and specialists to a population consisting primarily of generalist species. Invasive species are frequently generalists that are able to survive in much more diverse habitats. Habitat destruction that contributes to climate change also upsets the balance of species keeping pace with the extinction threshold, leading to a higher likelihood of extinction. Habitat loss is one of the main environmental causes of the decline of biodiversity on local, regional, and global scales. Many believe that habitat fragmentation is also a threat to biodiversity; however, some consider it secondary to habitat loss. The reduction of the amount of habitat available results in specific landscapes that are made of isolated patches of suitable habitat throughout a hostile environment/matrix. This process is generally due to pure habitat loss as well as fragmentation effects. Pure habitat loss refers to changes occurring in the composition of the landscape that cause a decrease in individuals. Fragmentation effects refer to the additional effects that occur due to those habitat changes. Habitat loss can result in negative effects on the dynamics of species richness. The order Hymenoptera is a diverse group of plant pollinators that is highly susceptible to the negative effects of habitat loss; this could result in a domino effect in plant-pollinator interactions, with major conservation implications for the group. The world's longest-running fragmentation experiment, spanning more than 35 years, has shown that habitat fragmentation causes decreases in biodiversity of 13% to 75%. On human population Habitat destruction can vastly increase an area's vulnerability to natural disasters like flood and drought, crop failure, spread of disease, and water contamination. On the other hand, a healthy ecosystem with good management practices can reduce the chance of these events happening, or will at least mitigate adverse impacts. Eliminating swamps—the habitat of pests such as mosquitoes—has contributed to the prevention of diseases such as malaria. Completely depriving an infectious agent (such as a virus) of its habitat—by vaccination, for example—can result in eradicating that infectious agent.
Agricultural land can suffer from the destruction of the surrounding landscape. Over the past 50 years, the destruction of habitat surrounding agricultural land has degraded approximately 40% of agricultural land worldwide via erosion, salinization, compaction, nutrient depletion, pollution, and urbanization. Humans also lose direct uses of natural habitat when habitat is destroyed. Aesthetic uses such as birdwatching, recreational uses like hunting and fishing, and ecotourism usually rely upon relatively undisturbed habitat. Many people value the complexity of the natural world and express concern at the loss of natural habitats and of animal or plant species worldwide. Probably the most profound impact that habitat destruction has on people is the loss of many valuable ecosystem services. Habitat destruction has altered nitrogen, phosphorus, sulfur, and carbon cycles, which has increased the frequency and severity of acid rain, algal blooms, and fish kills in rivers and oceans and contributed tremendously to global climate change. One ecosystem service whose significance is becoming better understood is climate regulation. On a local scale, trees provide windbreaks and shade; on a regional scale, plant transpiration recycles rainwater and maintains constant annual rainfall; on a global scale, plants (especially trees in tropical rainforests) around the world counter the accumulation of greenhouse gases in the atmosphere by sequestering carbon dioxide through photosynthesis. Other ecosystem services that are diminished or lost altogether as a result of habitat destruction include watershed management, nitrogen fixation, oxygen production, pollination (see pollinator decline), waste treatment (i.e., the breaking down and immobilization of toxic pollutants), and nutrient recycling of sewage or agricultural runoff. The loss of trees from tropical rainforests alone represents a substantial diminishing of Earth's ability to produce oxygen and to use up carbon dioxide. These services are becoming even more important as increasing carbon dioxide levels is one of the main contributors to global climate change. The loss of biodiversity may not directly affect humans, but the indirect effects of losing many species as well as the diversity of ecosystems in general are enormous. When biodiversity is lost, the environment loses many species that perform valuable and unique roles in the ecosystem. The environment and all its inhabitants rely on biodiversity to recover from extreme environmental conditions. When too much biodiversity is lost, a catastrophic event such as an earthquake, flood, or volcanic eruption could cause an ecosystem to crash, and humans would obviously suffer from that. Loss of biodiversity also means that humans are losing animals that could have served as biological-control agents and plants that could potentially provide higher-yielding crop varieties, pharmaceutical drugs to cure existing or future diseases (such as cancer), and new resistant crop-varieties for agricultural species susceptible to pesticide-resistant insects or virulent strains of fungi, viruses, and bacteria. The negative effects of habitat destruction usually impact rural populations more directly than urban populations. Across the globe, poor people suffer the most when natural habitat is destroyed, because less natural habitat means fewer natural resources per capita, yet wealthier people and countries can simply pay more to continue to receive more than their per capita share of natural resources. 
Another way to view the negative effects of habitat destruction is to look at the opportunity cost of destroying a given habitat. In other words, what do people lose out on with the removal of a given habitat? A country may increase its food supply by converting forest land to row-crop agriculture, but the value of the same land may be much larger when it can supply natural resources or services such as clean water, timber, ecotourism, or flood regulation and drought control. Outlook The rapid expansion of the global human population is increasing the world's food requirement substantially. Simple logic dictates that more people will require more food. In fact, as the world's population increases dramatically, agricultural output will need to increase by at least 50%, over the next 30 years. In the past, continually moving to new land and soils provided a boost in food production to meet the global food demand. That easy fix will no longer be available, however, as more than 98% of all land suitable for agriculture is already in use or degraded beyond repair. The impending global food crisis will be a major source of habitat destruction. Commercial farmers are going to become desperate to produce more food from the same amount of land, so they will use more fertilizers and show less concern for the environment to meet the market demand. Others will seek out new land or will convert other land-uses to agriculture. Agricultural intensification will become widespread at the cost of the environment and its inhabitants. Species will be pushed out of their habitat either directly by habitat destruction or indirectly by fragmentation, degradation, or pollution. Any efforts to protect the world's remaining natural habitat and biodiversity will compete directly with humans' growing demand for natural resources, especially new agricultural lands. Solutions Attempts to address habitat destruction are in international policy commitments embodied by Sustainable Development Goal 15 "Life on Land" and Sustainable Development Goal 14 "Life Below Water". However, the United Nations Environment Programme report on "Making Peace with Nature" released in 2021 found that most of these efforts had failed to meet their internationally agreed upon goals. Tropical deforestation: In most cases of tropical deforestation, three to four underlying causes are driving two to three proximate causes. This means that a universal policy for controlling tropical deforestation would not be able to address the unique combination of proximate and underlying causes of deforestation in each country. Before any local, national, or international deforestation policies are written and enforced, governmental leaders must acquire a detailed understanding of the complex combination of proximate causes and underlying driving forces of deforestation in a given area or country. This concept, along with many other results of tropical deforestation from the Geist and Lambin study, can easily be applied to habitat destruction in general. Shoreline erosion: Coastal erosion is a natural process as storms, waves, tides and other water level changes occur. Shoreline stabilization can be done by barriers between land and water such as seawalls and bulkheads. Living shorelines are gaining attention as a new stabilization method. 
These can reduce damage and erosion while simultaneously providing ecosystem services such as food production, nutrient and sediment removal, and water quality improvement to society. Preventing an area from losing its specialist species to generalist invasive species depends on the extent of the habitat destruction that has already taken place. In areas where the habitat is relatively undisturbed, halting further habitat destruction may be enough. In areas where habitat destruction is more extreme (fragmentation or patch loss), restoration ecology may be needed. Education of the general public is possibly the best way to prevent further human habitat destruction, shifting the gradual creep of environmental impacts from being viewed as acceptable to being seen as a reason to change to more sustainable practices. Education about the necessity of family planning to slow population growth is important, as a greater population leads to greater human-caused habitat destruction. Habitat restoration can also take place through the following processes: extending habitats or repairing habitats. Extending habitats aims to counteract habitat loss and fragmentation whereas repairing habitats counteracts degradation. The preservation and creation of habitat corridors can link isolated populations and increase pollination. Corridors are also known to reduce the negative impacts of habitat destruction. The greatest potential for solving the issue of habitat destruction comes from addressing the political, economic and social problems that go along with it, such as individual and commercial material consumption, sustainable extraction of resources, conservation areas, restoration of degraded land and addressing climate change. Governmental leaders need to take action by addressing the underlying driving forces, rather than merely regulating the proximate causes. In a broader sense, governmental bodies at a local, national, and international scale need to emphasize: Considering the irreplaceable ecosystem services provided by natural habitats. Protecting remaining intact sections of natural habitat. Finding ecological ways to increase agricultural output without increasing the total land in production. Reducing human population and expansion. Apart from improving access to contraception globally, furthering gender equality also has a great benefit. When women have the same education (decision-making power), this generally leads to smaller families. It is argued that the effects of habitat loss and fragmentation can be counteracted by including spatial processes in potential restoration management plans. However, even though spatial dynamics are incredibly important in the conservation and recovery of species, few management plans take the spatial effects of habitat restoration and conservation into consideration. See also Impacts of shipping on marine wildlife and habitats in Southeast Asia Notes References Barbault, R. and S. D. Sastrapradja. 1995. Generation, maintenance and loss of biodiversity. Global Biodiversity Assessment, Cambridge Univ. Press, Cambridge pp. 193–274. Cincotta, R.P., and R. Engelman. 2000. Nature's place: human population density and the future of biological diversity. Population Action International. Washington, D.C. Kauffman, J. B. and D. A. Pyke. 2001. Range ecology, global livestock influences. In S. A. Levin (ed.), Encyclopedia of Biodiversity 5: 33–52. Academic Press, San Diego, CA. Millennium Ecosystem Assessment (Program). 2005. Ecosystems and Human Well-Being.
Millennium Ecosystem Assessment. Island Press, Covelo, CA. Primack, R. B. 2006. Essentials of Conservation Biology. 4th Ed. Habitat destruction, pages 177–188. Sinauer Associates, Sunderland, MA. Ravenga, C., J. Brunner, N. Henninger, K. Kassem, and R. Payne. 2000. Pilot Analysis of Global Ecosystems: Wetland Ecosystems. World Resources Institute, Washington, D.C. Scholes, R. J. and R. Biggs (eds.). 2004. Ecosystem services in Southern Africa: a regional assessment. The regional scale component of the Southern African Millennium Ecosystem Assessment. CSIR, Pretoria, South Africa. Stein, B. A., L. S. Kutner, and J. S. Adams (eds.). 2000. Precious Heritage: The Status of Biodiversity in the United States. Oxford University Press, New York. White, R. P., S. Murray, and M. Rohweder. 2000. Pilot Assessment of Global Ecosystems: Grassland Ecosystems. World Resources Institute, Washington, D. C. WRI. 2003. World Resources 2002–2004: Decisions for the Earth: Balance, voice, and power. 328 pp. World Resources Institute, Washington, D.C. Habitat Environmental conservation Environmental terminology Environmental impact by effect
Bayesian hierarchical modeling
Bayesian hierarchical modelling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. The result of this integration is the posterior distribution, also known as the updated probability estimate, as additional evidence on the prior distribution is acquired. Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics due to the Bayesian treatment of the parameters as random variables and its use of subjective information in establishing assumptions on these parameters. As the approaches answer different questions, the formal results are not technically contradictory, but the two approaches disagree over which answer is relevant to particular applications. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents give multiple observational data. Moreover, the model has proven to be robust, with the posterior distribution less sensitive to the more flexible hierarchical priors. Hierarchical modeling is used when information is available on several different levels of observational units. For example, in epidemiological modeling to describe infection trajectories for multiple countries, observational units are countries, and each country has its own temporal profile of daily infected cases. In decline curve analysis to describe oil or gas production decline curves for multiple wells, observational units are oil or gas wells in a reservoir region, and each well has its own temporal profile of oil or gas production rates (usually, barrels per month). The data used for hierarchical modeling retains this nested structure. The hierarchical form of analysis and organization helps in the understanding of multiparameter problems and also plays an important role in developing computational strategies. Philosophy Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters. Individual degrees of belief, expressed in the form of probabilities, come with uncertainty. These degrees of belief also change over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are involved more directly in the mind than are physical probabilities. Hence, it is with this need to update beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event. Bayes' theorem The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief that an individual attaches to the events defining the options.
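As a concrete numerical illustration of this kind of belief updating, the short Python sketch below updates a discrete prior over two hypotheses after an event is observed. It is not part of the article's cardiac example; the hypotheses and all probability values are invented purely for illustration.

```python
# Hypothetical illustration of Bayes' theorem as belief updating.
# The two hypotheses and every number below are made up for the example.

prior = {"treatment effective": 0.5, "treatment ineffective": 0.5}

# Assumed likelihood of observing the event y (e.g., a patient survives)
# under each hypothesis.
likelihood = {"treatment effective": 0.8, "treatment ineffective": 0.4}

# Unnormalized posterior: prior times likelihood for each hypothesis.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}

# Normalizing constant P(y), obtained by summing over the hypotheses.
evidence = sum(unnormalized.values())

posterior = {h: unnormalized[h] / evidence for h in unnormalized}
print(posterior)  # {'treatment effective': 0.666..., 'treatment ineffective': 0.333...}
```

The cardiac-treatment example that follows expresses the same update in terms of a prior distribution and a sampling distribution.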
Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital j having survival probability θ_j, the survival probability will be updated with the occurrence of y, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients. In order to make updated probability statements about θ_j, given the occurrence of event y, we must begin with a model providing a joint probability distribution for θ_j and y. This can be written as a product of the two distributions that are often referred to as the prior distribution P(θ_j) and the sampling distribution P(y | θ_j) respectively: P(θ_j, y) = P(θ_j) P(y | θ_j). Using the basic property of conditional probability, the posterior distribution will yield: P(θ_j | y) = P(θ_j, y) / P(y) = P(y | θ_j) P(θ_j) / P(y). This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference, which aims to incorporate the updated belief, P(θ_j | y), in appropriate and solvable ways. Exchangeability The usual starting point of a statistical analysis is the assumption that the n values y_1, y_2, …, y_n are exchangeable. If no information – other than the data y – is available to distinguish any of the θ_j's from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry among the parameters in their prior distribution. This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution as independently and identically distributed, given some unknown parameter vector θ with distribution P(θ). Finite exchangeability For a fixed number n, the set y_1, y_2, …, y_n is exchangeable if the joint probability P(y_1, y_2, …, y_n) is invariant under permutations of the indices. That is, for every permutation π of (1, 2, …, n), P(y_1, y_2, …, y_n) = P(y_π(1), y_π(2), …, y_π(n)). Following is an exchangeable, but not independent and identically distributed (iid), example: Consider an urn with a red ball and a blue ball inside, with probability 1/2 of drawing either. Balls are drawn without replacement, i.e. after one ball is drawn from the n balls, there will be n − 1 remaining balls left for the next draw. Let x_i equal 1 if the i-th draw is the red ball and 0 otherwise. Since the probability of selecting a red ball in the first draw and a blue ball in the second draw is equal to the probability of selecting a blue ball on the first draw and a red on the second draw, both of which are equal to 1/2 (i.e. P(x_1 = 1, x_2 = 0) = P(x_1 = 0, x_2 = 1) = 1/2), x_1 and x_2 are exchangeable. But the probability of selecting a red ball on the second draw given that the red ball has already been selected in the first draw is 0, and is not equal to the unconditional probability that the red ball is selected in the second draw, which is equal to 1/2 (i.e. P(x_2 = 1 | x_1 = 1) = 0 ≠ P(x_2 = 1) = 1/2). Thus, x_1 and x_2 are not independent. If x_1, …, x_n are independent and identically distributed, then they are exchangeable, but the converse is not necessarily true. Infinite exchangeability Infinite exchangeability is the property that every finite subset of an infinite sequence x_1, x_2, … is exchangeable. That is, for any n, the sequence x_1, x_2, …, x_n is exchangeable. Hierarchical models Components Bayesian hierarchical modeling makes use of two important concepts in deriving the posterior distribution, namely: Hyperparameters: parameters of the prior distribution Hyperpriors: distributions of hyperparameters Suppose a random variable Y follows a normal distribution with parameter θ as the mean and 1 as the variance, that is Y | θ ~ N(θ, 1). The tilde relation ~ can be read as "has the distribution of" or "is distributed as". Suppose also that the parameter θ has a distribution given by a normal distribution with mean μ and variance 1, i.e. θ | μ ~ N(μ, 1).
Furthermore, μ follows another distribution given, for example, by the standard normal distribution, N(0, 1). The parameter μ is called the hyperparameter, while its distribution given by N(0, 1) is an example of a hyperprior distribution. The notation of the distribution of Y changes as another parameter is added, i.e. Y | θ, μ ~ N(θ, 1). If there is another stage, say, μ follows another normal distribution with mean β and variance ε, meaning μ ~ N(β, ε), then β and ε can also be called hyperparameters while their distributions are hyperprior distributions as well. Framework Let y_j be an observation and θ_j a parameter governing the data generating process for y_j. Assume further that the parameters θ_1, θ_2, …, θ_j are generated exchangeably from a common population, with distribution governed by a hyperparameter φ. The Bayesian hierarchical model contains the following stages: Stage I: y_j | θ_j, φ ~ P(y_j | θ_j, φ); Stage II: θ_j | φ ~ P(θ_j | φ); Stage III: φ ~ P(φ). The likelihood, as seen in stage I, is P(y_j | θ_j, φ), with P(θ_j, φ) as its prior distribution. Note that the likelihood depends on φ only through θ_j. The prior distribution from stage I can be broken down into: P(θ_j, φ) = P(θ_j | φ) P(φ) [from the definition of conditional probability], with φ as its hyperparameter with hyperprior distribution P(φ). Thus, the posterior distribution is proportional to: P(φ, θ_j | y) ∝ P(y_j | θ_j, φ) P(θ_j, φ) [using Bayes' Theorem]. Example To further illustrate this, consider the example: A teacher wants to estimate how well a student did on the SAT. The teacher uses information on the student’s high school grades and current grade point average (GPA) to come up with an estimate. The student's current GPA, denoted by Y, has a likelihood given by some probability function with parameter θ, i.e. Y | θ ~ P(Y | θ). This parameter θ is the SAT score of the student. The SAT score is viewed as a sample coming from a common population distribution indexed by another parameter φ, which is the high school grade of the student (freshman, sophomore, junior or senior). That is, θ | φ ~ P(θ | φ). Moreover, the hyperparameter φ follows its own distribution given by P(φ), a hyperprior. To solve for the SAT score given information on the GPA, P(θ, φ | Y) ∝ P(Y | θ, φ) P(θ, φ) = P(Y | θ) P(θ | φ) P(φ). All information in the problem will be used to solve for the posterior distribution. Instead of solving only using the prior distribution and the likelihood function, the use of hyperpriors gives more information and allows more accurate beliefs about the behavior of a parameter. 2-stage hierarchical model In general, the joint posterior distribution of interest in 2-stage hierarchical models is: P(θ, φ | Y) = P(Y | θ, φ) P(θ, φ) / P(Y) = P(Y | θ) P(θ | φ) P(φ) / P(Y). 3-stage hierarchical model For 3-stage hierarchical models, the posterior distribution is given by: P(θ, φ, X | Y) = P(Y | θ) P(θ | φ) P(φ | X) P(X) / P(Y). Bayesian nonlinear mixed-effects model The framework of Bayesian hierarchical modeling is frequently used in diverse applications. Particularly, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects models is represented as the following three-stage hierarchy: Stage 1 (Individual-Level Model): y_ij = f(t_ij; θ_i) + ε_ij, with ε_ij ~ N(0, σ²); Stage 2 (Population Model): θ_i = α + β x_i + η_i, with η_i ~ N(0, ω²); Stage 3 (Prior): a prior distribution π(α, β, σ², ω²) is placed on the remaining parameters. Here, y_ij denotes the continuous response of the i-th subject at the time point t_ij, and x_ib is the b-th covariate of the i-th subject. Parameters involved in the model are written in Greek letters. f is a known function parameterized by the K-dimensional vector θ_i. Typically, f is a 'nonlinear' function and describes the temporal trajectory of individuals. In the model, ε_ij (with variance σ²) and η_i (with variance ω²) describe within-individual variability and between-individual variability, respectively. If Stage 3 (the prior) is not considered, then the model reduces to a frequentist nonlinear mixed-effects model.
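The hierarchical structure described above can be made concrete with a small numerical sketch of the earlier normal–normal example, Y | θ ~ N(θ, 1), θ | μ ~ N(μ, 1), μ ~ N(0, 1). The code below is not from the article: the observations are invented, and a brute-force grid approximation is used instead of MCMC purely to keep the example self-contained.

```python
# Minimal sketch of the two-stage normal hierarchy:
#   Y | theta ~ N(theta, 1),  theta | mu ~ N(mu, 1),  mu ~ N(0, 1).
import numpy as np
from scipy.stats import norm

y = np.array([1.2, 0.8, 1.5])            # hypothetical observations

theta_grid = np.linspace(-4, 4, 401)
mu_grid = np.linspace(-4, 4, 401)
theta, mu = np.meshgrid(theta_grid, mu_grid, indexing="ij")

# Unnormalized joint posterior: P(theta, mu | y) ∝ P(y | theta) P(theta | mu) P(mu)
log_post = (
    norm.logpdf(y[:, None, None], loc=theta, scale=1.0).sum(axis=0)  # likelihood
    + norm.logpdf(theta, loc=mu, scale=1.0)                          # prior on theta
    + norm.logpdf(mu, loc=0.0, scale=1.0)                            # hyperprior on mu
)
post = np.exp(log_post - log_post.max())
post /= post.sum()                        # normalize over the grid

# Marginal posterior means, approximated by summing over the grid.
print("E[theta | y] =", (theta * post).sum())
print("E[mu | y]    =", (mu * post).sum())
```

For larger models, such as the nonlinear mixed-effects model above, this kind of grid evaluation is infeasible and is typically replaced by Markov chain Monte Carlo sampling.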
A central task in the application of the Bayesian nonlinear mixed-effects models is to evaluate the posterior density: π(θ_1, …, θ_N, α, β, σ², ω² | y) ∝ π(α, β, σ², ω²) ∏_{i=1}^{N} ∏_{j} p(y_ij | θ_i, σ²) p(θ_i | α, β, ω²), where y denotes all observed responses. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function f; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle. References Bayesian networks
Microbiological culture
A microbiological culture, or microbial culture, is a method of multiplying microbial organisms by letting them reproduce in predetermined culture medium under controlled laboratory conditions. Microbial cultures are foundational and basic diagnostic methods used as research tools in molecular biology. The term culture can also refer to the microorganisms being grown. Microbial cultures are used to determine the type of organism, its abundance in the sample being tested, or both. It is one of the primary diagnostic methods of microbiology and used as a tool to determine the cause of infectious disease by letting the agent multiply in a predetermined medium. For example, a throat culture is taken by scraping the lining of tissue in the back of the throat and blotting the sample into a medium to be able to screen for harmful microorganisms, such as Streptococcus pyogenes, the causative agent of strep throat. Furthermore, the term culture is more generally used informally to refer to "selectively growing" a specific kind of microorganism in the lab. It is often essential to isolate a pure culture of microorganisms. A pure (or axenic) culture is a population of cells or multicellular organisms growing in the absence of other species or types. A pure culture may originate from a single cell or single organism, in which case the cells are genetic clones of one another. For the purpose of gelling the microbial culture, the medium of agarose gel (agar) is used. Agar is a gelatinous substance derived from seaweed. A cheap substitute for agar is guar gum, which can be used for the isolation and maintenance of thermophiles. History The first culture media was liquid media, designed by Louis Pasteur in 1860. This was used in the laboratory until Robert Koch's development of solid media in 1881. Koch's method of using a flat plate for his solid media was replaced by Julius Richard Petri's round box in 1887. Since these foundational inventions, a diverse array of media and methods have evolved to help scientists grow, identify, and purify cultures of microorganisms. Types of microbial cultures Prokaryotic culture The culturing of prokaryotes typically involves bacteria, since archaea are difficult to culture in a laboratory setting. To obtain a pure prokaryotic culture, one must start the culture from a single cell or a single colony of the organism. Since a prokaryotic colony is the asexual offspring of a single cell, all of the cells are genetically identical and will result in a pure culture. Viral culture Virus and phage cultures require host cells in which the virus or phage multiply. For bacteriophages, cultures are grown by infecting bacterial cells. The phage can then be isolated from the resulting plaques in a lawn of bacteria on a plate. Viral cultures are obtained from their appropriate eukaryotic host cells. The streak plate method is a way to physically separate the microbial population, and is done by spreading the inoculate back and forth with an inoculating loop over the solid agar plate. Upon incubation, colonies will arise and single cells will have been isolated from the biomass. Once a microorganism has been isolated in pure culture, it is necessary to preserve it in a viable state for further study and use in cultures called stock cultures. These cultures have to be maintained, such that there is no loss of their biological, immunological and cultural characters. Eukaryotic cell culture Eukaryotic cell cultures provide a controlled environment for studying eukaryotic organisms. 
Single-celled eukaryotes - such as yeast, algae, and protozoans - can be cultured in similar ways to prokaryotic cultures. The same is true for multicellular microscopic eukaryotes, such as C. elegans. Although macroscopic eukaryotic organisms are too large to culture in a laboratory, cells taken from these organisms can be cultured. This allows researchers to study specific parts and processes of a macroscopic eukaryote in vitro. Culture methods Liquid cultures One method of microbiological culture is liquid culture, in which the desired organisms are suspended in a liquid nutrient medium, such as Luria broth, in an upright flask. This allows a scientist to grow up large amounts of bacteria or other microorganisms for a variety of downstream applications. Liquid cultures are ideal for preparation of an antimicrobial assay in which the liquid broth is inoculated with bacteria and let to grow overnight (a ‘shaker’ may be used to mechanically mix the broth, to encourage uniform growth). Subsequently, aliquots of the sample are taken to test for the antimicrobial activity of a specific drug or protein (antimicrobial peptides). Static liquid cultures may be used as an alternative. These cultures are not shaken, and they provide the microbes with an oxygen gradient. Agar plates Microbiological cultures can be grown in petri dishes of differing sizes that have a thin layer of agar-based growth medium. Once the growth medium in the petri dish is inoculated with the desired bacteria, the plates are incubated at the optimal temperature for the growing of the selected bacteria (for example, usually at 37 degrees Celsius, or the human body temperature, for cultures from humans or animals, or lower for environmental cultures). After the desired level of growth is achieved, agar plates can be stored upside down in a refrigerator for an extended period of time to keep bacteria for future experiments. There are a variety of additives that can be added to agar before it is poured into a plate and allowed to solidify. Some types of bacteria can only grow in the presence of certain additives. This can also be used when creating engineered strains of bacteria that contain an antibiotic-resistance gene. When the selected antibiotic is added to the agar, only bacterial cells containing the gene insert conferring resistance will be able to grow. This allows the researcher to select only the colonies that were successfully transformed. Agar based dipsticks Miniaturized version of agar plates implemented to dipstick formats, e.g. Dip Slide, Digital Dipstick show potential to be used at the point-of-care for diagnosis purposes. They have advantages over agar plates since they are cost effective and their operation does not require expertise or laboratory environment, which enable them to be used at the point-of-care. Selective and differential media Selective and differential media reveal characteristics about the microorganisms being cultured on them. This kind of media can be selective, differential, or both selective and differential. Growing a culture on multiple kinds of selective and differential media can purify mixed cultures and reveal to scientists the characteristics needed to identify unknown cultures. Selective media Selective media is used to distinguish organisms by allowing for a specific kind of organism to grow on it while inhibiting the growth of others. 
For example, eosin methylene blue (EMB) may be used to select against Gram-positive bacteria, most of which have hindered growth on EMB, and select for Gram-negative bacteria, whose growth is not inhibited on EMB. Differential media Scientists use differential media when culturing microorganisms to reveal certain biochemical characteristics about the organisms. These revealed traits can then be compared to attributes of known microorganisms in an effort to identify unknown cultures. An example of this is MacConkey agar (MAC), which reveals lactose-fermenting bacteria through a pH indicator that changes color when acids are produced from fermentation. Multitarget panels On multitarget panels, bacteria isolated from a previously grown colony are distributed into each well, each of which contains growth medium as well as the ingredients for a biochemical test, which will change the absorbance of the well depending on the bacterial property for the tested target. The panel will be incubated in a machine, which subsequently analyses each well with a light-based method such as colorimetry, turbidimetry, or fluorometry. The combined results will be automatically compared to a database of known results for various bacterial species, in order to generate a diagnosis of what bacterial species is present in the current panel. Simultaneously, it performs antibiotic susceptibility testing. Stab cultures Stab cultures are similar to agar plates, but are formed by solid agar in a test tube. Bacteria is introduced via an inoculation needle or a pipette tip being stabbed into the center of the agar. Bacteria grow in the punctured area. Stab cultures are most commonly used for short-term storage or shipment of cultures. Additionally, stab cultures can reveal characteristics about cultured microorganisms such as motility or oxygen requirements. Solid plate culture of thermophilic microorganisms For solid plate cultures of thermophilic microorganisms such as Bacillus acidocaldarius, Bacillus stearothermophilus, Thermus aquaticus and Thermus thermophilus etc. growing at temperatures of 50 to 70 degrees C, low acyl clarified gellan gum has been proven to be the preferred gelling agent comparing to agar for the counting or isolation or both of the above thermophilic bacteria. Cell Culture Collections Microbial culture collections focus on the acquisition, authentication, production, preservation, cataloguing and distribution of viable cultures of standard reference microorganisms, cell lines and other materials for research in microbial systematics. Culture collection are also repositories of type strains. See also Blood culture Changestat Colony-forming unit Gellan gum Microbial dark matter Microbial Food Cultures Screening cultures Sputum culture Synchronous culture References External links EFFCA - European Food and Feed Cultutes Association. Information about production and uses of microbial cultures as well as legislative aspects. Microbiology terms Cell culture
Generative grammar
Generative grammar is a research tradition in linguistics that aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists, or generativists, tend to share certain working assumptions such as the competence–performance distinction and the notion that some domain-specific aspects of grammar are partly innate in humans. These assumptions are rejected in non-generative approaches such as usage-based models of language. Generative linguistics includes work in core areas such as syntax, semantics, phonology, psycholinguistics, and language acquisition, with additional extensions to topics including biolinguistics and music cognition. Generative grammar began in the late 1950s with the work of Noam Chomsky, though its roots include earlier approaches such as structural linguistics. The earliest version of Chomsky's model was called Transformational grammar, with subsequent iterations known as Government and binding theory and the Minimalist program. Other present-day generative models include Optimality theory, Categorial grammar, and Tree-adjoining grammar. Principles Generative grammar is an umbrella term for a variety of approaches to linguistics. What unites these approaches is the goal of uncovering the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Cognitive science Generative grammar studies language as part of cognitive science. Thus, research in the generative tradition involves formulating and testing hypotheses about the mental processes that allow humans to use language. Like other approaches in linguistics, generative grammar engages in linguistic description rather than linguistic prescription. Explicitness and generality Generative grammar proposes models of language consisting of explicit rule systems, which make testable falsifiable predictions. This is different from traditional grammar where grammatical patterns are often described more loosely. These models are intended to be parsimonious, capturing generalizations in the data with as few rules as possible. For example, because English imperative tag questions obey the same restrictions that second person future declarative tags do, Paul Postal proposed that the two constructions are derived from the same underlying structure. By adopting this hypothesis, he was able to capture the restrictions on tags with a single rule. This kind of reasoning is commonplace in generative research. Particular theories within generative grammar have been expressed using a variety of formal systems, many of which are modifications or extensions of context free grammars. Competence versus performance Generative grammar generally distinguishes linguistic competence and linguistic performance. Competence is the collection of subconscious rules that one knows when one knows a language; performance is the system which puts these rules to use. This distinction is related to the broader notion of Marr's levels used in other cognitive sciences, with competence corresponding to Marr's computational level. For example, generative theories generally provide competence-based explanations for why English speakers would judge the sentence in (1) as odd. In these explanations, the sentence would be ungrammatical because the rules of English only generate sentences where demonstratives agree with the grammatical number of their associated noun. (1) *That cats is eating the mouse. 
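As a toy illustration of what an explicit, testable rule system looks like, the Python sketch below enforces number agreement between demonstratives and nouns, so it generates "that cat is sleeping" but cannot generate the ill-formed pattern in (1). The grammar, lexicon, and rule names are invented for the example and do not represent any particular generative analysis.

```python
# Toy rule system (illustrative only): a demonstrative must agree in number
# with its noun and verb, mirroring the agreement restriction behind (1).
import itertools

LEXICON = {
    "Dem": [("this", "sg"), ("that", "sg"), ("these", "pl"), ("those", "pl")],
    "N":   [("cat", "sg"), ("cats", "pl"), ("mouse", "sg")],
    "V":   [("is sleeping", "sg"), ("are sleeping", "pl")],
}

def generate_sentences():
    """Yield only strings licensed by the rule S -> Dem[num] N[num] V[num]."""
    for (dem, n1), (noun, n2), (verb, n3) in itertools.product(
        LEXICON["Dem"], LEXICON["N"], LEXICON["V"]
    ):
        if n1 == n2 == n3:               # the agreement constraint
            yield f"{dem} {noun} {verb}"

sentences = set(generate_sentences())
print("that cat is sleeping" in sentences)    # True: generated by the rules
print("that cats is sleeping" in sentences)   # False: ruled out, like (1)
```

Because the rule system is explicit, it makes falsifiable predictions about exactly which strings speakers should accept.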
By contrast, generative theories generally provide performance-based explanations for the oddness of center embedding sentences like one in (2). According to such explanations, the grammar of English could in principle generate such sentences, but doing so in practice is so taxing on working memory that the sentence ends up being unparsable. (2) *The cat that the dog that the man fed chased meowed. In general, performance-based explanations deliver a simpler theory of grammar at the cost of additional assumptions about memory and parsing. As a result, the choice between a competence-based explanation and a performance-based explanation for a given phenomenon is not always obvious and can require investigating whether the additional assumptions are supported by independent evidence. For example, while many generative models of syntax explain island effects by positing constraints within the grammar, it has also been argued that some or all of these constraints are in fact the result of limitations on performance. Non-generative approaches often do not posit any distinction between competence and performance. For instance, usage-based models of language assume that grammatical patterns arise as the result of usage. Innateness and universality A major goal of generative research is to figure out which aspects of linguistic competence are innate and which are not. Within generative grammar, it is generally accepted that at least some domain-specific aspects are innate, and the term "universal grammar" is often used as a placeholder for whichever those turn out to be. The idea that at least some aspects are innate is motivated by poverty of the stimulus arguments. For example, one famous poverty of the stimulus argument concerns the acquisition of yes-no questions in English. This argument starts from the observation that children only make mistakes compatible with rules targeting hierarchical structure even though the examples which they encounter could have been generated by a simpler rule that targets linear order. In other words, children seem to ignore the possibility that the question rule is as simple as "switch the order of the first two words" and immediately jump to alternatives that rearrange constituents in tree structures. This is taken as evidence that children are born knowing that grammatical rules involve hierarchical structure, even though they have to figure out what those rules are. The empirical basis of poverty of the stimulus arguments has been challenged by Geoffrey Pullum and others, leading to back-and-forth debate in the language acquisition literature. Recent work has also suggested that some recurrent neural network architectures are able to learn hierarchical structure without an explicit constraint. Within generative grammar, there are a variety of theories about what universal grammar consists of. One notable hypothesis proposed by Hagit Borer holds that the fundamental syntactic operations are universal and that all variation arises from different feature-specifications in the lexicon. On the other hand, a strong hypothesis adopted in some variants of Optimality Theory holds that humans are born with a universal set of constraints, and that all variation arises from differences in how these constraints are ranked. In a 2002 paper, Noam Chomsky, Marc Hauser and W. Tecumseh Fitch proposed that universal grammar consists solely of the capacity for hierarchical phrase structure. 
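The yes-no question argument above can also be made concrete with a small sketch. The fragment below is purely illustrative: the sentence, the hand-annotated position of the main-clause auxiliary, and the function names are all invented for the example. A rule stated over linear order produces the error pattern children do not make, while a rule stated over hierarchical structure produces the attested question.

```python
# Simplified contrast between a linear and a structure-sensitive question rule.
# The bracketing (which auxiliary belongs to the main clause) is hand-supplied.

SENTENCE = ["the", "man", "who", "is", "tall", "is", "happy"]

def linear_rule(words):
    """Front the first auxiliary 'is' found in linear order."""
    i = words.index("is")
    return [words[i]] + words[:i] + words[i + 1:]

def structural_rule(words, main_aux_index):
    """Front the main-clause auxiliary; its position comes from the
    hand-annotated hierarchical structure, not from linear order."""
    return [words[main_aux_index]] + words[:main_aux_index] + words[main_aux_index + 1:]

print(" ".join(linear_rule(SENTENCE)))
# -> "is the man who tall is happy"   (the error pattern children do not produce)
print(" ".join(structural_rule(SENTENCE, main_aux_index=5)))
# -> "is the man who is tall happy"   (the attested, structure-dependent question)
```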
In day-to-day research, the notion that universal grammar exists motivates analyses in terms of general principles. As much as possible, facts about particular languages are derived from these general principles rather than from language-specific stipulations. Subfields Research in generative grammar spans a number of subfields. These subfields are also studied in non-generative approaches. Syntax Syntax studies the rule systems which combine smaller units such as morphemes into larger units such as phrases and sentences. Within generative syntax, prominent approaches include Minimalism, Government and binding theory, Lexical-functional grammar (LFG), and Head-driven phrase structure grammar (HPSG). Phonology Phonology studies the rule systems which organize linguistic sounds. For example, research in phonology includes work on phonotactic rules which govern which phonemes can be combined, as well as those that determine the placement of stress, tone, and other suprasegmental elements. Within generative grammar, a prominent approach to phonology is Optimality Theory. Semantics Semantics studies the rule systems that determine expressions' meanings. Within generative grammar, semantics is a species of formal semantics, providing compositional models of how the denotations of sentences are computed on the basis of the meanings of the individual morphemes and their syntactic structure. Extensions Music Generative grammar has been applied to music theory and analysis since the 1980s. One notable approach is Fred Lerdahl and Ray Jackendoff's Generative theory of tonal music, which formalized and extended ideas from Schenkerian analysis. Biolinguistics Recent work in generative-inspired biolinguistics has proposed that universal grammar consists solely of syntactic recursion, and that it arose recently in humans as the result of a random genetic mutation. Generative-inspired biolinguistics has not uncovered any particular genes responsible for language. While some prospects were raised at the discovery of the FOXP2 gene, there is not enough support for the idea that it is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech. History As a distinct research tradition, generative grammar began in the late 1950s with the work of Noam Chomsky. However, its roots include earlier structuralist approaches such as glossematics which themselves had older roots, for instance in the work of the ancient Indian grammarian Pāṇini. Military funding for generative research was an important factor in its early spread in the 1960s. The initial version of generative syntax was called transformational grammar. In transformational grammar, rules called transformations mapped a level of representation called deep structure to another level of representation called surface structure. The semantic interpretation of a sentence was represented by its deep structure, while the surface structure provided its pronunciation. For example, an active sentence such as "The doctor examined the patient" and its passive counterpart "The patient was examined by the doctor" had the same deep structure. The difference in surface structures arises from the application of the passivization transformation, which was assumed not to affect meaning. This assumption was challenged in the 1960s by the discovery of examples such as "Everyone in the room knows two languages" and "Two languages are known by everyone in the room".
After the Linguistics wars of the late 1960s and early 1970s, Chomsky developed a revised model of syntax called Government and binding theory, which eventually grew into Minimalism. In the aftermath of those disputes, a variety of other generative models of syntax were proposed including relational grammar, Lexical-functional grammar (LFG), and Head-driven phrase structure grammar (HPSG). Generative phonology originally focused on rewrite rules, in a system commonly known as SPE Phonology after the 1968 book The Sound Pattern of English by Chomsky and Morris Halle. In the 1990s, this approach was largely replaced by Optimality theory, which was able to capture generalizations called conspiracies which needed to be stipulated in SPE phonology. Semantics emerged as a subfield of generative linguistics during the late 1970s, with the pioneering work of Richard Montague. Montague proposed a system called Montague grammar which consisted of interpretation rules mapping expressions from a bespoke model of syntax to formulas of intensional logic. Subsequent work by Barbara Partee, Irene Heim, Tanya Reinhart, and others showed that the key insights of Montague Grammar could be incorporated into more syntactically plausible systems. See also Cognitive linguistics Cognitive revolution Digital infinity Formal grammar Functional theories of grammar Generative lexicon Generative metrics Generative principle Generative semantics Generative systems Parsing Phrase structure rules Syntactic Structures References Further reading Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, Massachusetts: MIT Press. Hurford, J. (1990) Nativist and functional explanations in language acquisition. In I. M. Roca (ed.), Logical Issues in Language Acquisition, 85–136. Foris, Dordrecht. Grammar Grammar frameworks Noam Chomsky Cognitive musicology
Microbial genetics
Microbial genetics is a subject area within microbiology and genetic engineering that studies the genetics of microorganisms. The microorganisms most commonly studied are bacteria and archaea, although some fungi and protozoa are also used in this field. Studies of microorganisms involve both the genotype and the expression system; genotypes are the inherited genetic compositions of an organism. (Austin, "Genotype," n.d.) Genetic engineering is a field of work and study within microbial genetics. Its core process is recombinant DNA technology, in which recombinant DNA molecules are created by manipulating a DNA sequence; the resulting DNA is then introduced into a host organism. Cloning is another example of genetic engineering. Since the discovery of microorganisms by Robert Hooke and Antoni van Leeuwenhoek during the period 1665–1683, they have been used to study many processes and have had applications in various areas of genetics. For example, microorganisms' rapid growth rates and short generation times are used by scientists to study evolution. Hooke's and van Leeuwenhoek's discoveries involved depictions, observations, and descriptions of microorganisms. Hooke presented and depicted the microfungus Mucor, making it the first microorganism to be illustrated. Van Leeuwenhoek contributed the first scientific observations and descriptions of microscopic protozoa and bacteria. These contributions were made with simple microscopes, laid the groundwork for the modern understanding of microbes, and continue to inform scientists' understanding. Microbial genetics also has applications in the study of processes and pathways that are similar to those found in humans, such as drug metabolism. Role in understanding evolution Microbial genetics builds on Charles Darwin's work, and scientists have continued to test his theories, in particular the theory of natural selection, by using microbes. Studying evolution with microbial genetics involves examining evolutionary balance, for example by studying natural selection or genetic drift in microbial populations. This knowledge is applied by looking for the presence or absence of particular pathways, genes, and functions, which can then be compared to the sequence of a conserved gene. Studying microbial evolution in this way cannot give a time scale for when the evolution took place, but it can reveal the rates and outcomes of evolution. Studying the relationship between microbes and their environment is a key component of microbial evolutionary genetics. Microorganisms whose study is encompassed by microbial genetics Bacteria Bacteria have been on this planet for approximately 3.5 billion years, and are classified by their shape. Bacterial genetics studies the mechanisms of their heritable information, their chromosomes, plasmids, transposons, and phages. Gene transfer systems that have been extensively studied in bacteria include genetic transformation, conjugation and transduction. Natural transformation is a bacterial adaptation for DNA transfer between two cells through the intervening medium.
The uptake of donor DNA and its recombinational incorporation into the recipient chromosome depends on the expression of numerous bacterial genes whose products direct this process. In general, transformation is a complex, energy-requiring developmental process that appears to be an adaptation for repairing DNA damage. Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. Bacterial conjugation has been extensively studied in Escherichia coli, but also occurs in other bacteria such as Mycobacterium smegmatis. Conjugation requires stable and extended contact between a donor and a recipient strain, is DNase resistant, and the transferred DNA is incorporated into the recipient chromosome by homologous recombination. E. coli conjugation is mediated by expression of plasmid genes, whereas mycobacterial conjugation is mediated by genes on the bacterial chromosome. Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector. Transduction is a common tool used by molecular biologists to stably introduce a foreign gene into a host cell's genome. Archaea Archaea is a domain of prokaryotic, single-celled organisms thought to have arisen about 4 billion years ago. They have no cell nucleus or any other organelles inside their cells. Archaea replicate asexually in a process known as binary fission: because archaea have a single circular chromosome, the chromosome is replicated from multiple origins of replication, the two daughter chromosomes separate, and the cell divides, producing two haploid daughter cells. Many archaea are motile, with flagella, which are tail-like structures. Archaea share a common ancestor with bacteria, but are more closely related to eukaryotes. Some archaea are able to survive extreme environments, which leads to many applications in the field of genetics. One such application is the use of archaeal enzymes, which would be better able to survive harsh conditions in vitro. Gene transfer and genetic exchange have been studied in the halophilic archaeon Halobacterium volcanii and the hyperthermophilic archaea Sulfolobus solfataricus and Sulfolobus acidocaldarius. H. volcanii forms cytoplasmic bridges between cells that appear to be used for transfer of DNA from one cell to another in either direction. When S. solfataricus and S. acidocaldarius are exposed to DNA-damaging agents, species-specific cellular aggregation is induced. Cellular aggregation mediates chromosomal marker exchange and genetic recombination with high frequency. Cellular aggregation is thought to enhance species-specific DNA transfer between Sulfolobus cells in order to provide increased repair of damaged DNA by means of homologous recombination. Archaea are divided into three subgroups: halophiles, methanogens, and thermoacidophiles. Methanogens are archaea that live in swamps and marshes as well as in the gut of humans. They also play a major role in the decay and decomposition of dead organisms. Methanogens are anaerobic organisms, which are killed when they are exposed to oxygen. Halophiles are organisms that are present in areas with high salt concentrations, like the Great Salt Lake and the Dead Sea.
The third subgroup, the thermoacidophiles (also called thermophiles), are organisms that live in hot, acidic areas. They are present in areas with low pH levels, such as hot springs and geysers, and many thermophiles are found in Yellowstone National Park. Archaeal genetics is the study of the genes of these single, nucleus-free cells. Archaea have a single, circular chromosome that contains multiple origins of replication for the initiation of DNA synthesis. DNA replication in archaea involves similar processes of initiation, elongation, and termination, but the primase used to synthesize the RNA primer differs from that of eukaryotes; the archaeal primase is a highly derived version of the RNA recognition motif (RRM). According to one hypothesis, archaea arose from Gram-positive bacteria; both groups have a single lipid bilayer membrane and tend to be resistant to antibiotics. Archaea are similar to the mitochondria of eukaryotes in that they release energy as adenosine triphosphate (ATP) through metabolism. Some archaea, known as phototrophic archaea, use the sun's energy to produce ATP, with ATP synthase being used in photophosphorylation. Archaea and bacteria are structurally similar even though they are not closely related in the tree of life. The shapes of both bacterial and archaeal cells vary from a spherical shape known as coccus to a rod shape known as bacillus. Both also lack internal membranes and have a cell wall that assists the cell in maintaining its shape. Although archaeal cells have cell walls, these do not contain peptidoglycan, and archaea do not produce cellulose or chitin. Archaea are most closely related to eukaryotes, as shown by features of archaeal tRNA that are shared with eukaryotes but not with bacteria, and archaeal ribosomes, which synthesize proteins, are more similar to those of eukaryotes. Aside from the morphology of archaea and bacteria, there are other differences between these domains. Archaea that live in extreme and harsh environments, such as highly saline lakes or acidic hot springs, are also known as extremophiles; archaea are also found in the oceans and in the gut of ruminants and humans. In contrast, bacteria are found in various areas such as plants, animals, soil, and rocks. Fungi Fungi can be both multicellular and unicellular organisms, and are distinguished from other microbes by the way they obtain nutrients. Fungi secrete enzymes into their surroundings to break down organic matter. Fungal genetics uses yeasts and filamentous fungi as model organisms for eukaryotic genetic research, including cell cycle regulation, chromatin structure and gene regulation. Studies of the fungus Neurospora crassa have contributed substantially to understanding how genes work. N. crassa is a type of red bread mold of the phylum Ascomycota. It is used as a model organism because it is easy to grow and has a haploid life cycle that makes genetic analysis simple since recessive traits will show up in the offspring. Analysis of genetic recombination is facilitated by the ordered arrangement of the products of meiosis in ascospores. In its natural environment, N. crassa lives mainly in tropical and sub-tropical regions. It often can be found growing on dead plant matter after fires. Neurospora was used by Edward Tatum and George Beadle in their experiments for which they won the Nobel Prize in Physiology or Medicine in 1958. The results of these experiments led directly to the one gene-one enzyme hypothesis that specific genes code for specific proteins.
This concept proved to be the opening gun in what became molecular genetics and all the developments that have followed from that. Saccharomyces cerevisiae is a yeast of the phylum Ascomycota. During vegetative growth that ordinarily occurs when nutrients are abundant, S. cerevisiae reproduces by mitosis as diploid cells. However, when starved, these cells undergo meiosis to form haploid spores. Mating occurs when haploid cells of opposite mating types MATa and MATα come into contact. Ruderfer et al. pointed out that, in nature, such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus, the sac that contains the cells directly produced by a single meiosis, and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type. An analysis of the ancestry of natural S. cerevisiae strains concluded that outcrossing occurs very infrequently (only about once every 50,000 cell divisions). The relative rarity in nature of meiotic events that result from outcrossing suggests that the possible long-term benefits of outcrossing (e.g. generation of diversity) are unlikely to be sufficient for generally maintaining sex from one generation to the next. Rather, a short-term benefit, such as meiotic recombinational repair of DNA damages caused by stressful conditions (such as starvation) may be the key to the maintenance of sex in S. cerevisiae. Candida albicans is a diploid fungus that grows both as a yeast and as a filament. C. albicans is the most common fungal pathogen in humans. It causes both debilitating mucosal infections and potentially life-threatening systemic infections. C. albicans has maintained an elaborate, but largely hidden, mating apparatus. Johnson suggested that mating strategies may allow C. albicans to survive in the hostile environment of a mammalian host. Among the 250 known species of aspergilli, about 33% have an identified sexual state. Among those Aspergillus species that exhibit a sexual cycle, the overwhelming majority in nature are homothallic (self-fertilizing). Selfing in the homothallic fungus Aspergillus nidulans involves activation of the same mating pathways characteristic of sex in outcrossing species, i.e. self-fertilization does not bypass required pathways for outcrossing sex but instead requires activation of these pathways within a single individual. Fusion of haploid nuclei occurs within reproductive structures termed cleistothecia, in which the diploid zygote undergoes meiotic divisions to yield haploid ascospores. Protozoa Protozoa are unicellular organisms, which have nuclei, and ultramicroscopic cellular bodies within their cytoplasm. One particular aspect of protozoa that is of interest to human geneticists is their flagella, which are very similar to human sperm flagella. Studies of Paramecium have contributed to our understanding of the function of meiosis. Like all ciliates, Paramecium has a polyploid macronucleus, and one or more diploid micronuclei. The macronucleus controls non-reproductive cell functions, expressing the genes needed for daily functioning. The micronucleus is the generative, or germline nucleus, containing the genetic material that is passed along from one generation to the next.
In the asexual fission phase of growth, during which cell divisions occur by mitosis rather than meiosis, clonal aging occurs, leading to a gradual loss of vitality. In some species, such as the well-studied Paramecium tetraurelia, the asexual line of clonally aging paramecia loses vitality and expires after about 200 fissions if the cells fail to undergo meiosis followed by either autogamy (self-fertilization) or conjugation (outcrossing) (see aging in Paramecium). DNA damage increases dramatically during successive clonal cell divisions and is a likely cause of clonal aging in P. tetraurelia. When clonally aged P. tetraurelia are stimulated to undergo meiosis in association with either autogamy or conjugation, the progeny are rejuvenated, and are able to have many more mitotic binary fission divisions. During either of these processes the micronuclei of the cell(s) undergo meiosis, the old macronucleus disintegrates and a new macronucleus is formed by replication of the micronuclear DNA that had recently undergone meiosis. There is apparently little, if any, DNA damage in the new macronucleus, suggesting that rejuvenation is associated with the repair of these damages in the micronucleus during meiosis. Viruses Viruses are capsid-encoding organisms composed of proteins and nucleic acids that can self-assemble after replication in a host cell using the host's replication machinery. There is a disagreement in science about whether viruses are living due to their lack of ribosomes. Comprehending the viral genome is important not only for studies in genetics but also for understanding their pathogenic properties. Many types of virus are capable of genetic recombination. When two or more individual viruses of the same type infect a cell, their genomes may recombine with each other to produce recombinant virus progeny. Both DNA and RNA viruses can undergo recombination. When two or more viruses, each containing lethal genomic damage, infect the same host cell, the virus genomes often can pair with each other and undergo homologous recombinational repair to produce viable progeny. This process is known as multiplicity reactivation. Enzymes employed in multiplicity reactivation are functionally homologous to enzymes employed in bacterial and eukaryotic recombinational repair. Multiplicity reactivation has been found to occur with pathogenic viruses including influenza virus, HIV-1, adenovirus, simian virus 40, vaccinia virus, reovirus, poliovirus and herpes simplex virus, as well as numerous bacteriophages. Any living organism can contract a virus; like other parasites, the virus draws on the nutrients of its host organism in order to thrive. Once the human body detects a virus, it creates immune cells that attack the invader. A virus can affect almost any part of the body, causing a wide range of illnesses such as the flu, the common cold, and sexually transmitted diseases. The flu, formally known as influenza, is an airborne virus that travels through tiny droplets and attacks the human respiratory system. People who are initially infected with this virus pass the infection on through normal day-to-day activities such as talking and sneezing. Unlike the common cold, the flu affects people almost immediately after they come into contact with the virus. Symptoms of the flu are very similar to those of the common cold but much worse.
Body aches, sore throat, headache, cold sweats, muscle aches and fatigue are among the many symptoms that accompany the virus. A viral infection in the upper respiratory tract results in the common cold. With symptoms like sore throat, sneezing, a mild fever, and a cough, the common cold is usually harmless and tends to clear up within a week or so. The common cold virus is also spread through the air but can be passed through direct contact as well. This infection takes a few days to develop symptoms; unlike the flu, it is a gradual process. Applications of microbial genetics Microbes are ideally suited for biochemical and genetic studies and have made huge contributions to these fields of science, such as the demonstration that DNA is the genetic material, that the gene has a simple linear structure, that the genetic code is a triplet code, and that gene expression is regulated by specific genetic processes. Jacques Monod and François Jacob used Escherichia coli, a type of bacteria, to develop the operon model of gene expression, which laid down the basis of gene expression and regulation. Furthermore, the hereditary processes of single-celled eukaryotic microorganisms are similar to those in multi-cellular organisms, allowing researchers to gather information on this process as well. Another bacterium which has greatly contributed to the field of genetics is Thermus aquaticus, a bacterium that tolerates high temperatures. From this microbe scientists isolated the enzyme Taq polymerase, which is now used in the powerful experimental technique of polymerase chain reaction (PCR). Additionally, the development of recombinant DNA technology through the use of bacteria has led to the birth of modern genetic engineering and biotechnology. Using microbes, protocols were developed to insert genes into bacterial plasmids, taking advantage of their fast reproduction, to make biofactories for the gene of interest. Such genetically engineered bacteria can produce pharmaceuticals such as insulin, human growth hormone, interferons and blood clotting factors. These biofactories are typically much cheaper to operate and maintain than the alternative procedures of producing pharmaceuticals. In effect, they act as millions of tiny pharmaceutical factories that only require basic raw materials and the right environment to produce a large amount of product. The incorporation of the human insulin gene alone has had a profound impact on the medical industry. It is thought that biofactories might be the ultimate key in reducing the price of expensive life-saving pharmaceutical compounds. Microbes synthesize a variety of enzymes for industrial applications, such as fermented foods, laboratory test reagents, dairy products (such as rennin), and even clothing (such as the Trichoderma fungus, whose enzyme is used to give jeans a stone-washed appearance). There is currently potential for microbes to be used as an alternative to petroleum-based surfactants. Microbial surfactants would still have the same kind of hydrophilic and hydrophobic functional groups as their petroleum-based counterparts, but they have numerous advantages over their competition. In comparison, microbial amphiphilic compounds have a robust tendency to stay functional in extreme environments such as areas with high heat or extreme pH, all while being biodegradable and less toxic to the environment. This efficient and cheap method of production could be the solution to the ever-increasing global consumption of surfactants.
Ironically, the application for bio-based surfactants with the most demand is the oil industry, which uses surfactants in general production as well as in the development of specific oil compositions. Microbes are an abundant source of lipases, which have a wide variety of industrial and consumer applications. Enzymes perform a wide variety of functions inside the cells of living things, and they can be put to similar uses on a larger industrial scale. Microbial enzymes are typically preferred for mass production due to the wide variety of functions available and their ability to be mass-produced. Plant and animal enzymes are typically too expensive to be mass-produced, although there are exceptions, especially among plant enzymes. Industrial applications of lipases generally include the enzyme as a more efficient and cost-effective catalyst in the production of commercially valuable chemicals from fats and oils, because these enzymes are able to retain their specific properties in mild, easy-to-maintain conditions and work at an increased rate. Other already successful applications of lipolytic enzymes include the production of biofuels, polymers, non-stereoisomeric pharmaceuticals, agricultural compounds, and flavor-enhancing compounds. With regard to industrial optimization, the benefit of the biofactory method of production is the ability to direct optimization by means of directed evolution. The efficiency and specificity of production can be increased over time by imposing artificial selection. This method of improving efficiency is nothing new in agriculture, but it is a relatively new concept in industrial production. It is thought that this method will be far superior to conventional industrial methods because it allows optimization on multiple fronts. The first front is that the microorganisms that make up biofactories can be evolved to meet production needs. The second is the conventional optimization brought about by the integration of advancing technologies. This combination of conventional and biological advancement is only now beginning to be utilized and provides a virtually limitless number of applications. See also Bacterial genetics References Microbiology Genetics by type of organism
Gametophyte
A gametophyte is one of the two alternating multicellular phases in the life cycles of plants and algae. It is a haploid multicellular organism that develops from a haploid spore that has one set of chromosomes. The gametophyte is the sexual phase in the life cycle of plants and algae. It develops sex organs that produce gametes, haploid sex cells that participate in fertilization to form a diploid zygote which has a double set of chromosomes. Cell division of the zygote results in a new diploid multicellular organism, the second stage in the life cycle known as the sporophyte. The sporophyte can produce haploid spores by meiosis that on germination produce a new generation of gametophytes. Algae In some multicellular green algae (Ulva lactuca is one example), red algae and brown algae, sporophytes and gametophytes may be externally indistinguishable (isomorphic). In Ulva, the gametes are isogamous, all of one size, shape and general morphology. Land plants In land plants, anisogamy is universal. As in animals, female and male gametes are called, respectively, eggs and sperm. In extant land plants, either the sporophyte or the gametophyte may be reduced (heteromorphic). No extant gametophytes have stomata, but they have been found on fossil species like the early Devonian Aglaophyton from the Rhynie chert. Other fossil gametophytes found in the Rhynie chert shows they were much more developed than present forms, resembling the sporophyte in having a well-developed conducting strand, a cortex, an epidermis and a cuticle with stomata, but were much smaller. Bryophytes In bryophytes (mosses, liverworts, and hornworts), the gametophyte is the most visible stage of the life cycle. The bryophyte gametophyte is longer lived, nutritionally independent, and the sporophytes are attached to the gametophytes and dependent on them. When a moss spore germinates it grows to produce a filament of cells (called the protonema). The mature gametophyte of mosses develops into leafy shoots that produce sex organs (gametangia) that produce gametes. Eggs develop in archegonia and sperm in antheridia. In some bryophyte groups such as many liverworts of the order Marchantiales, the gametes are produced on specialized structures called gametophores (or gametangiophores). Vascular plants All vascular plants are sporophyte dominant, and a trend toward smaller and more sporophyte-dependent female gametophytes is evident as land plants evolved reproduction by seeds. Those vascular plants, such as clubmosses and many ferns, that produce only one type of spore are said to be homosporous. They have exosporic gametophytes — that is, the gametophyte is free-living and develops outside of the spore wall. Exosporic gametophytes can either be bisexual, capable of producing both sperm and eggs in the same thallus (monoicous), or specialized into separate male and female organisms (dioicous). In heterosporous vascular plants (plants that produce both microspores and megaspores), the gametophytes develop endosporically (within the spore wall). These gametophytes are dioicous, producing either sperm or eggs but not both. Ferns In most ferns, for example, in the leptosporangiate fern Dryopteris, the gametophyte is a photosynthetic free living autotrophic organism called a prothallus that produces gametes and maintains the sporophyte during its early multicellular development. 
However, in some groups, notably the clade that includes Ophioglossaceae and Psilotaceae, the gametophytes are subterranean and subsist by forming mycotrophic relationships with fungi. Homosporous ferns secrete a chemical called antheridiogen. Lycophytes Extant lycophytes produce two different types of gametophytes. In the homosporous families Lycopodiaceae and Huperziaceae, spores germinate into bisexual free-living, subterranean and mycotrophic gametophytes that derive nutrients from symbiosis with fungi. In Isoetes and Selaginella, which are heterosporous, microspores and megaspores are dispersed from sporangia either passively or by active ejection. Microspores produce microgametophytes which produce sperm. Megaspores produce reduced megagametophytes inside the spore wall. At maturity, the megaspore cracks open at the trilete suture to allow the male gametes to access the egg cells in the archegonia inside. The gametophytes of Isoetes appear to be similar in this respect to those of the extinct Carboniferous arborescent lycophytes Lepidodendron and Lepidostrobus. Seed plants The seed plant gametophyte life cycle is even more reduced than in basal taxa (ferns and lycophytes). Seed plant gametophytes are not independent organisms and depend upon the dominant sporophyte tissue for nutrients and water. With the exception of mature pollen, if the gametophyte tissue is separated from the sporophyte tissue it will not survive. Due to this complex relationship and the small size of the gametophyte tissue—in some situations single celled—differentiating with the human eye or even a microscope between seed plant gametophyte tissue and sporophyte tissue can be a challenge. While seed plant gametophyte tissue is typically composed of mononucleate haploid cells (1 x n), specific circumstances can occur in which the ploidy does vary widely despite still being considered part of the gametophyte. In gymnosperms, the male gametophytes are produced inside microspores within the microsporangia located inside male cones or microstrobili. In each microspore, a single gametophyte is produced, consisting of four haploid cells produced by meiotic division of a diploid microspore mother cell. At maturity, each microspore-derived gametophyte becomes a pollen grain. During its development, the water and nutrients that the male gametophyte requires are provided by the sporophyte tissue until they are released for pollination. The cell number of each mature pollen grain varies between the gymnosperm orders. Cycadophyta have 3 celled pollen grains while Ginkgophyta have 4 celled pollen grains. Gnetophyta may have 2 or 3 celled pollen grains depending on the species, and Coniferophyta pollen grains vary greatly ranging from single celled to 40 celled. One of these cells is typically a germ cell and other cells may consist of a single tube cell which grows to form the pollen tube, sterile cells, and/or prothallial cells which are both vegetative cells without an essential reproductive function. After pollination is successful, the male gametophyte continues to develop. If a tube cell was not developed in the microstrobilus, one is created after pollination via mitosis. The tube cell grows into the diploid tissue of the female cone and may branch out into the megastrobilus tissue or grow straight towards the egg cell. The megastrobilus sporophytic tissue provides nutrients for the male gametophyte at this stage. 
In some gymnosperms, the tube cell will create a direct channel from the site of pollination to the egg cell; in other gymnosperms, the tube cell will rupture in the middle of the megastrobilus sporophyte tissue. This occurs because in some gymnosperm orders the germ cell is nonmotile and a direct pathway is needed; however, in Cycadophyta and Ginkgophyta the germ cell is motile due to flagella being present, and a direct tube cell path from the pollination site to the egg is not needed. In most species the germ cell can be more specifically described as a sperm cell which mates with the egg cell during fertilization, though that is not always the case. In some Gnetophyta species, the germ cell will release two sperm nuclei that undergo a rare gymnosperm double fertilization process occurring solely with sperm nuclei and not with the fusion of developed cells. After fertilization is complete in all orders, the remaining male gametophyte tissue will deteriorate. The female gametophyte in gymnosperms differs from the male gametophyte as it spends its whole life cycle in one organ, the ovule located inside the megastrobilus or female cone. Similar to the male gametophyte, the female gametophyte normally is fully dependent on the surrounding sporophytic tissue for nutrients, and the two organisms cannot be separated. However, the female gametophytes of Ginkgo biloba do contain chlorophyll and can produce some of their own energy, though not enough to support themselves without being supplemented by the sporophyte. The female gametophyte forms from a diploid megaspore mother cell that undergoes meiosis, and it begins as a single cell. The size of the mature female gametophyte varies drastically between gymnosperm orders. In Cycadophyta, Ginkgophyta, Coniferophyta, and some Gnetophyta, the single-celled female gametophyte undergoes many cycles of mitosis, ending up consisting of thousands of cells once mature. At a minimum, two of these cells are egg cells and the rest are haploid somatic cells, but more egg cells may be present and their ploidy, though typically haploid, may vary. In select Gnetophyta, the female gametophyte stays single-celled. Mitosis does occur, but no cell divisions are ever made. This results in the mature female gametophyte in some Gnetophyta having many free nuclei in one cell. Once mature, this single-celled gametophyte is 90% smaller than the female gametophytes in other gymnosperm orders. After fertilization, the remaining female gametophyte tissue in gymnosperms serves as the nutrient source for the developing zygote (even in Gnetophyta, where the diploid zygote cell is much smaller at that stage and for a while lives within the single-celled gametophyte). The precursor to the male angiosperm gametophyte is a diploid microspore mother cell located inside the anther. Once the microspore mother cell undergoes meiosis, four haploid cells are formed, each of which is a single-celled male gametophyte. The male gametophyte will develop via one or two rounds of mitosis inside the anther. This creates a 2- or 3-celled male gametophyte, which becomes known as the pollen grain once dehiscing occurs. One cell is the tube cell, and the remaining cell or cells are the sperm cells. The development of the three-celled male gametophyte prior to dehiscing has evolved multiple times and is present in about a third of angiosperm species, allowing for faster fertilization after pollination.
Once pollination occurs, the tube cell grows in size, and if the male gametophyte is only 2 cells at this stage, the single sperm cell undergoes mitosis to create a second sperm cell. Just like in gymnosperms, the tube cell in angiosperms obtains nutrients from the sporophytic tissue, and may branch out into the pistil tissue or grow directly towards the ovule. Once double fertilization is completed, the tube cell and other vegetative cells, if present, are all that remain of the male gametophyte, and they soon degrade. The female gametophyte of angiosperms develops in the ovule (located inside the female or hermaphrodite flower). Its precursor is a diploid megaspore mother cell that undergoes meiosis, which produces four haploid daughter cells. Three of these independent gametophyte cells degenerate, and the one that remains is the gametophyte mother cell, which normally contains one nucleus. In general, it will then divide by mitosis until it consists of 8 nuclei separated into 1 egg cell, 3 antipodal cells, 2 synergid cells, and a central cell that contains two nuclei. In select angiosperms, special cases occur in which the female gametophyte is not 7-celled with 8 nuclei. On the small end of the spectrum, some species have mature female gametophytes with only 4 cells, each with one nucleus. Conversely, some species have 10-celled mature female gametophytes consisting of 16 total nuclei. Once double fertilization occurs, the egg cell becomes the zygote, which is then considered sporophyte tissue. Scholars still disagree on whether the fertilized central cell is considered gametophyte tissue. Some botanists consider this endosperm as gametophyte tissue, with typically 2/3 being female and 1/3 being male, but as the central cell before double fertilization can range from 1n to 8n in special cases, the fertilized central cells range from 2n (50% male/female) to 9n (1/9 male, 8/9 female). However, other botanists consider the fertilized endosperm as sporophyte tissue. Some believe it is neither. Heterospory In heterosporic plants, there are two distinct kinds of gametophytes. Because the two gametophytes differ in form and function, they are termed heteromorphic, from hetero- "different" and morph "form". The egg-producing gametophyte is known as a megagametophyte, because it is typically larger, and the sperm-producing gametophyte is known as a microgametophyte. Species which produce egg and sperm on separate gametophytes are termed dioicous, while those that produce both eggs and sperm on the same gametophyte are termed monoicous. In heterosporous plants (water ferns, some lycophytes, as well as all gymnosperms and angiosperms), there are two distinct types of sporangia, each of which produces a single kind of spore that germinates to produce a single kind of gametophyte. However, not all heteromorphic gametophytes come from heterosporous plants. That is, some plants have distinct egg-producing and sperm-producing gametophytes, but these gametophytes develop from the same kind of spore inside the same sporangium; Sphaerocarpos is an example of such a plant. In seed plants, the microgametophyte is called pollen. Seed plant microgametophytes consist of several (typically two to five) cells when the pollen grains exit the sporangium. The megagametophyte develops within the megaspore of extant seedless vascular plants and within the megasporangium in a cone or flower in seed plants.
In seed plants, the microgametophyte (pollen) travels to the vicinity of the egg cell (carried by a physical or animal vector) and produces two sperm by mitosis. In gymnosperms, the megagametophyte consists of several thousand cells and produces one to several archegonia, each with a single egg cell. The gametophyte becomes a food storage tissue in the seed. In angiosperms, the megagametophyte is reduced to only a few cells, and is sometimes called the embryo sac. A typical embryo sac contains seven cells and eight nuclei, one of which is the egg cell. Two nuclei fuse with a sperm nucleus to form the primary endospermic nucleus which develops to form triploid endosperm, which becomes the food storage tissue in the seed. See also References Further reading Plant morphology Plant anatomy Plant reproduction
Gordon's functional health patterns
Gordon's functional health patterns is a method devised by Marjory Gordon to be used by nurses in the nursing process to provide a more comprehensive nursing assessment of the patient. The following areas are assessed through questions asked by the nurse and medical examinations to provide an overview of the individual's health status and health practices that are used to reach the current level of health or wellness.
Health perception and health management
Nutritional-metabolic
Elimination - excretion patterns and problems need to be evaluated (constipation, incontinence, diarrhea)
Activity-exercise - whether one is able to do daily activities normally without any problem, including self-care activities
Sleep-rest - whether they have hypersomnia, insomnia, or normal sleeping patterns
Cognitive-perceptual - assessment of neurological function is done to check the person's ability to comprehend information
Self-perception/self-concept
Role-relationship - this pattern should only be used if it is appropriate for the patient's age and specific situation
Sexuality-reproductive
Coping-stress tolerance
Value-belief
References Further reading Marjory Gordon. Manual of Nursing Diagnosis, Eleventh Edition. Nursing theory
Industrialisation
Industrialisation (UK) or industrialization (US) is the period of social and economic change that transforms a human group from an agrarian society into an industrial society. This involves an extensive reorganisation of an economy for the purpose of manufacturing. Industrialisation is associated with increase of polluting industries heavily dependent on fossil fuels. With the increasing focus on sustainable development and green industrial policy practices, industrialisation increasingly includes technological leapfrogging, with direct investment in more advanced, cleaner technologies. The reorganisation of the economy has many unintended consequences both economically and socially. As industrial workers' incomes rise, markets for consumer goods and services of all kinds tend to expand and provide a further stimulus to industrial investment and economic growth. Moreover, family structures tend to shift as extended families tend to no longer live together in one household, location or place. Background The first transformation from an agricultural to an industrial economy is known as the Industrial Revolution and took place from the mid-18th to early 19th century. It began in Great Britain, spreading to Belgium, Switzerland, Germany, and France and eventually to other areas in Europe and North America. Characteristics of this early industrialisation were technological progress, a shift from rural work to industrial labour, and financial investments in new industrial structures. Later commentators have called this the First Industrial Revolution. The "Second Industrial Revolution" labels the later changes that came about in the mid-19th century after the refinement of the steam engine, the invention of the internal combustion engine, the harnessing of electricity and the construction of canals, railways, and electric-power lines. The invention of the assembly line gave this phase a boost. Coal mines, steelworks, and textile factories replaced homes as the place of work. By the end of the 20th century, East Asia had become one of the most recently industrialised regions of the world. There is considerable literature on the factors facilitating industrial modernisation and enterprise development. Social consequences The Industrial Revolution was accompanied by significant changes in the social structure, the main change being a transition from farm work to factory-related activities. This has resulted in the concept of Social class, i.e., hierarchical social status defined by an individual's economic power. It has changed the family system as most people moved into cities, with extended family living apart becoming more common. The movement into more dense urban areas from less dense agricultural areas has consequently increased the transmission of diseases. The place of women in society has shifted from primary caregivers to breadwinners, thus reducing the number of children per household. Furthermore, industrialisation contributed to increased cases of child labour and thereafter education systems. Urbanisation As the Industrial Revolution was a shift from the agrarian society, people migrated from villages in search of jobs to places where factories were established. This shifting of rural people led to urbanisation and an increase in the population of towns. The concentration of labour in factories has increased urbanisation and the size of settlements, to serve and house the factory workers. Exploitation Changes in family structure Family structure changes with industrialisation. 
Sociologist Talcott Parsons noted that in pre-industrial societies there is an extended family structure spanning many generations, whose members probably remained in the same location for generations. In industrialised societies the nuclear family, consisting of only parents and their growing children, predominates. Families and children reaching adulthood are more mobile and tend to relocate to where jobs exist. Extended family bonds become more tenuous. One of the most important criticisms of industrialisation is that it caused children to spend long hours away from home and allowed them to be used as cheap workers in factories. Industrialisation in East Asia Between the early 1960s and 1990s, the Four Asian Tigers underwent rapid industrialisation and maintained exceptionally high growth rates. Current situation The international development community (the World Bank, the Organisation for Economic Co-operation and Development (OECD), many United Nations departments, and agencies such as the FAO, WHO, ILO and UNESCO) endorses development policies like water purification or primary education and co-operation amongst third world communities. Some members of the economic communities do not consider contemporary industrialisation policies as being adequate for the global South (Third World countries) or beneficial in the longer term, with the perception that they may only create inefficient local industries unable to compete in the free-trade dominated political order which industrialisation has fostered. Environmentalism and Green politics may represent more visceral reactions to industrial growth. Nevertheless, repeated examples in history of apparently successful industrialisation (Britain, Soviet Union, South Korea, China, etc.) may make conventional industrialisation seem like an attractive or even natural path forward, especially as populations grow, consumerist expectations rise and agricultural opportunities diminish. The relationships among economic growth, employment, and poverty reduction are complex, and higher productivity can sometimes lead to static or even lower employment (see jobless recovery). There are differences across sectors, whereby manufacturing is less able than the tertiary sector to accommodate both increased productivity and employment opportunities; more than 40% of the world's employees are "working poor", whose incomes fail to keep themselves and their families above the $2-a-day poverty line. There is also a phenomenon of deindustrialisation, as in the former USSR countries' transition to market economies, and the agriculture sector is often the key sector in absorbing the resultant unemployment.
External links Industrialisation Economic development Economic growth Industrial history Late modern economic history Secondary sector of the economy
Thinking In Systems: A Primer
Thinking in Systems provides an introduction to systems thinking by Donella Meadows, the main author of the 1972 report The Limits to Growth, and describes some of the ideas behind the analysis used in that report. The book was originally circulated as a draft in 1993, and versions of this draft circulated informally within the systems dynamics community for years. After the death of Meadows in 2001, the book was restructured by her colleagues at the Sustainability Institute, edited by Diana Wright, and finally published in 2008. The work is heavily influenced by the work of Jay Forrester and the MIT Systems Dynamics Group, whose World3 model formed the basis of analysis in Limits to Growth. In addition, Meadows drew on a wide range of other sources for examples and illustrations, including ecology, management, farming and demographics; as well as taking several examples from one week's reading of the International Herald Tribune in 1992. Influence of Thinking in Systems The Post Growth Institute has ranked Donella Meadows 3rd in their list of the top 100 sustainability thinkers. Thinking in Systems is frequently cited as a key influence by programmers and computer scientists, as well as people working in other disciplines. Key Concepts The central concept is that system behaviors are not caused by exogenous events, but rather are intrinsic to the system itself. The connections and feedback loops within a system dictate the range of behaviors the system is capable of exhibiting. Therefore, it is more important to understand the internal structures of the system, than to focus on specific events that perturb it. The main part of the book walks through basic systems concepts, types of systems and the range of behaviors they exhibit. In particular, it focuses on the roles of feedback loops and the build up of "stocks" in the system which can interact in highly complex and unexpected ways. The final section of the book explores how to improve the effectiveness of interventions to improve systems behaviors. A range of common errors or policy traps are discussed, such as "the tragedy of the commons" and "rule beating", that prevent effective intervention, or lead to good intentions causing greater damage. By contrast, the key to successful intervention is identifying the leverage points where relatively minor alterations can effect a substantial change to a system's behavior. This section expands on an influential essay "Leverage Points - Places to intervene in a system" that Meadows originally published in Whole Earth in 1997. See also Systems thinking References 2008 non-fiction books Chelsea Green Publishing books
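As a rough illustration of the book's central claim that behaviour is generated by stocks and feedback loops rather than by outside events, the following Python sketch simulates one stock (room temperature) governed by two balancing loops, in the spirit of the thermostat examples commonly used in system-dynamics teaching; the parameter values are invented.

def simulate(hours=12, dt=0.1, setting=18.0, outside=5.0,
             start=10.0, furnace_gain=0.4, leak_rate=0.15):
    temp = start                                            # the stock
    for _ in range(int(hours / dt)):
        heating = furnace_gain * max(setting - temp, 0.0)   # balancing loop 1: furnace closes the gap
        leak = leak_rate * (temp - outside)                 # balancing loop 2: heat drains to outdoors
        temp += (heating - leak) * dt                       # the stock changes only through its flows
    return temp

if __name__ == "__main__":
    final = simulate()
    # The stock settles below the thermostat setting because the leak loop keeps
    # pulling it down: the behaviour comes from the loop structure, not from any
    # external event.
    print(f"start 10.0 C, after 12 h {final:.1f} C (thermostat setting 18.0 C)")

Changing the gain or leak parameters shifts where the system settles, which is the sense in which points inside the structure, rather than outside shocks, determine behaviour.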
The Genetical Theory of Natural Selection
The Genetical Theory of Natural Selection is a book by Ronald Fisher which combines Mendelian genetics with Charles Darwin's theory of natural selection, with Fisher being the first to argue that "Mendelism therefore validates Darwinism" and stating with regard to mutations that "The vast majority of large mutations are deleterious; small mutations are both far more frequent and more likely to be useful", thus refuting orthogenesis. First published in 1930 by The Clarendon Press, it is one of the most important books of the modern synthesis, and helped define population genetics. It has been described by J. F. Crow as the "deepest book on evolution since Darwin". It is commonly cited in biology books, outlining many concepts that are still considered important such as Fisherian runaway, Fisher's principle, reproductive value, Fisher's fundamental theorem of natural selection, Fisher's geometric model, the sexy son hypothesis, mimicry and the evolution of dominance. It was dictated to his wife in the evenings as he worked at Rothamsted Research in the day. Contents In the preface, Fisher considers some general points, including that there must be an understanding of natural selection distinct from that of evolution, and that the then-recent advances in the field of genetics (see history of genetics) now allowed this. In the first chapter, Fisher considers the nature of inheritance, rejecting blending inheritance, because it would eliminate genetic variance, in favour of particulate inheritance. The second chapter introduces Fisher's fundamental theorem of natural selection. The third considers the evolution of dominance, which Fisher believed was strongly influenced by modifiers. Other chapters discuss parental investment; Fisher's geometric model, concerning how spontaneous mutations affect biological fitness; Fisher's principle, which explains why the sex ratio between males and females is almost always 1:1; and reproductive value, examining the demography of having girl children. Drawing on his knowledge of statistics, Fisher also described the Fisherian runaway, which explores how sexual selection can lead to a positive feedback runaway loop, producing features such as the peacock's plumage, and wrote further on the evolution of genetic dominance. Eugenics The last five chapters (8-12) include Fisher's concern about dysgenics and proposals for eugenics. Fisher attributed the fall of civilizations to the fertility of their upper classes being diminished, and used British 1911 census data to show an inverse relationship between fertility and social class, partly due, he claimed, to the lower financial costs and hence increasing social status of families with fewer children. He proposed the abolition of extra allowances to large families, with the allowances proportional to the earnings of the father. He served on several official committees to promote eugenics. In 1934, he resigned from the Eugenics Society over a dispute about increasing the power of scientists within the movement. Editions A second, slightly revised edition was republished in 1958. In 1999, a third variorum edition, with the original 1930 text, annotated with the 1958 alterations, notes and alterations accidentally omitted from the second edition, was published, edited by Professor John Henry Bennett of the University of Adelaide.
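Fisher's principle, mentioned above, lends itself to a small numerical illustration. The sketch below is not Fisher's own derivation; it uses an invented, highly simplified mating model (every daughter mothers a fixed brood, and sons share the available matings) to compare the expected number of grandchildren for parents whose broods have different sex ratios.

def expected_grandchildren(p_sons, pop_fraction_sons, brood=10):
    # Relative grandchild count for a parent whose brood is a fraction p_sons male,
    # in a population whose offspring generation is pop_fraction_sons male.
    # Each daughter mothers `brood` grandchildren; sons share those matings, so
    # each son fathers on average brood * (females / males).
    daughters = (1 - p_sons) * brood
    sons = p_sons * brood
    per_son = brood * (1 - pop_fraction_sons) / pop_fraction_sons
    return daughters * brood + sons * per_son

if __name__ == "__main__":
    for pop_q in (0.3, 0.5, 0.7):   # female-biased, even, and male-biased populations
        payoffs = {p: expected_grandchildren(p, pop_q) for p in (0.0, 0.5, 1.0)}
        print(f"population {pop_q:.0%} male: " +
              ", ".join(f"brood {p:.0%} sons -> {v:.0f}" for p, v in payoffs.items()))
    # When males are rare, son-biased broods do best; when males are common,
    # daughter-biased broods do best; at 1:1 all strategies tie, which is
    # Fisher's equilibrium.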
Dedication
The book is dedicated to Major Leonard Darwin, Fisher's friend and correspondent and a son of Charles Darwin, "In gratitude for the encouragement, given to the author, during the last fifteen years, by discussing many of the problems dealt with in this book."

Reviews
The book was reviewed by Charles Galton Darwin, who sent Fisher his copy of the book with notes in the margin, starting a correspondence that lasted several years. Sewall Wright, who had many disagreements with Fisher, reviewed the book and wrote that it was "certain to take rank as one of the major contributions to the theory of evolution." J. B. S. Haldane described it as "brilliant". Reginald Punnett, however, was negative. John Henry Bennett later gave an account of the writing and reception of the book.

The book was largely overlooked for 40 years, and Fisher's fundamental theorem of natural selection in particular was misunderstood. The work nevertheless had a great effect on W. D. Hamilton and his theories on the genetic basis of kin selection; Hamilton discovered the book as an undergraduate at the University of Cambridge, and his recollections of it are excerpted on the rear cover of the 1999 variorum edition. The publication of that variorum edition in 1999 led to renewed interest in the work and to reviews by Laurence Cook, Brian Charlesworth, James F. Crow, and A. W. F. Edwards.
Bacterial cellular morphologies
Bacterial cellular morphologies are the shapes that are characteristic of various types of bacteria and are often key to their identification. Direct examination under a light microscope enables the classification of these bacteria (and archaea). Generally, the basic morphologies are spheres (cocci) and round-ended cylinders or rods (bacilli), but other morphologies also occur, such as helically twisted cylinders (for example the spirochetes), cylinders curved in one plane (the selenomonads) and unusual forms such as the square, flat, box-shaped cells of the archaeal genus Haloquadratum. Cells may also be arranged in pairs, tetrads, clusters, chains or palisades.

Types

Coccus
A coccus (plural cocci, from the Latin coccinus, "scarlet", in turn derived from the Greek masculine noun kokkos, "berry") is any microorganism (usually a bacterium) whose overall shape is spherical or nearly spherical. "Coccus" refers only to the shape and can encompass multiple genera, such as the staphylococci or streptococci. Cocci can grow in pairs, chains or clusters, depending on their orientation and attachment during cell division. In contrast to many bacillus-shaped bacteria, most cocci lack flagella and are non-motile.

Important human diseases caused by coccoid bacteria include staphylococcal infections, some types of food poisoning, some urinary tract infections, toxic shock syndrome, gonorrhea, as well as some forms of meningitis, throat infections, pneumonias and sinusitis.

Arrangements
Coccoid bacteria often occur in characteristic arrangements, and these forms have specific names; listed here are the basic forms together with representative bacterial genera:
Diplococci are pairs of cocci.
Streptococci are chains of cocci, such as Streptococcus pyogenes.
Staphylococci are irregular, grape-like clusters of cocci (e.g. Staphylococcus aureus).
Tetrads are clusters of four cocci arranged within the same plane, such as Micrococcus species.
Sarcinae show a pack-like, cuboidal arrangement of eight cocci, such as Sarcina ventriculi.

Gram-positive cocci
The gram-positive cocci are a large group of bacteria with similar morphology. All are spherical or nearly so, but they vary considerably in size. Members of some genera are identifiable by the way their cells are attached to one another: in pockets, in chains, or in grape-like clusters. These arrangements reflect the planes in which the cells divide and the tendency of daughter cells to remain attached. Sarcina cells, for example, are arranged in cubical pockets because cell division alternates regularly among three perpendicular planes. Streptococcus species resemble a string of beads because division always occurs in the same plane; some of these strings, for example those of S. pneumoniae, are only two cells long and are called diplococci. Species of Staphylococcus have no regular plane of division and form grape-like clusters.

The various gram-positive cocci differ physiologically and by habitat. Micrococcus species are obligate aerobes that inhabit human skin. Staphylococcus species also inhabit human skin, but they are facultative anaerobes that ferment sugars, producing lactic acid as an end product. Many of these species produce carotenoid pigments, which color their colonies yellow or orange. Staphylococcus aureus is a major human pathogen; it can infect almost any tissue in the body, frequently the skin, and often causes nosocomial (hospital-acquired) infections.
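As a compact restatement of the relationship just described between division planes and coccal arrangements, the following Python snippet collects the mapping into a single lookup table. It is an illustration written for this article, not drawn from any microbiology software, and the dictionary keys are informal descriptions rather than standard terminology.

# Hypothetical helper: maps how cocci divide and stay attached to the named
# arrangements and representative organisms mentioned in the text above.
COCCAL_ARRANGEMENTS = {
    "one plane, cells remain in pairs":      ("diplococci",   "Streptococcus pneumoniae"),
    "one plane, cells remain in chains":     ("streptococci", "Streptococcus pyogenes"),
    "two perpendicular planes":              ("tetrads",      "Micrococcus"),
    "three perpendicular planes":            ("sarcinae",     "Sarcina ventriculi"),
    "no regular plane, grape-like clusters": ("staphylococci", "Staphylococcus aureus"),
}

def describe(division_pattern: str) -> str:
    name, example = COCCAL_ARRANGEMENTS[division_pattern]
    return f"{division_pattern} -> {name} (e.g. {example})"

for pattern in COCCAL_ARRANGEMENTS:
    print(describe(pattern))

The table is only a mnemonic for the prose above; in practice, identification relies on microscopy and staining rather than on such a lookup.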
Diplococci
Diplococci are pairs of cocci. Examples of gram-negative diplococci are Neisseria spp. and Moraxella catarrhalis; examples of gram-positive diplococci are Streptococcus pneumoniae and Enterococcus spp. Diplococci have also been presumptively implicated in encephalitis lethargica.

The genus Neisseria belongs to the family Neisseriaceae. It is divided into more than ten species, most of which are gram-negative and coccoid. The gram-negative, coccoid species include Neisseria cinerea, N. gonorrhoeae, N. polysaccharea, N. lactamica, N. meningitidis, N. mucosa, N. oralis and N. subflava. The most important of these are the pathogens N. meningitidis and N. gonorrhoeae.

The genus Moraxella belongs to the family Moraxellaceae and comprises gram-negative coccobacilli: Moraxella lacunata, M. atlantae, M. boevrei, M. bovis, M. canis, M. caprae, M. caviae, M. cuniculi, M. equi, M. lincolnii, M. nonliquefaciens, M. osloensis, M. ovis, M. saccharolytica, and M. pluranimalium. Only one species, M. catarrhalis, has a diplococcal morphology; it is a notable human pathogen.

The species Streptococcus pneumoniae belongs to the genus Streptococcus and the family Streptococcaceae. The genus Streptococcus contains around 129 species and 23 subspecies, many of which are harmless or beneficial members of microbiomes on the human body, although some, such as S. pneumoniae, are pathogenic in humans.

The genus Enterococcus belongs to the family Enterococcaceae and is divided into 58 species and two subspecies. These gram-positive, coccoid bacteria were once thought to be harmless to humans, but in recent years they have increasingly been identified as nosocomial pathogens.

Bacillus
A bacillus (plural bacilli), also called a bacilliform bacterium or, when the context is clear, simply a rod, is a rod-shaped bacterium or archaeon. Bacilli are found in many different taxonomic groups of bacteria. The name Bacillus, capitalized and italicized, refers to a specific genus of bacteria, whereas Bacilli, capitalized but not italicized, can also refer to a less specific taxonomic group that includes two orders, one of which contains the genus Bacillus. Written in lowercase and not italicized, "bacillus" most likely refers to the shape rather than the genus.

There is no connection between the shape of a bacterium and its color upon Gram staining; there are both gram-positive rods and gram-negative rods. MacConkey agar can be used to distinguish among gram-negative bacilli such as E. coli and Salmonella.

Arrangements
Bacilli usually divide in the same plane and occur singly, but they can combine into characteristic groupings:
Diplobacilli: two bacilli arranged side by side.
Streptobacilli: bacilli arranged in chains.
Palisades: bacilli lined up side by side along their long axes.
Coccobacilli: oval cells similar to cocci (spherical bacteria).
Gram-positive examples
Actinomyces
Bacillus
Clostridium
Corynebacterium
Listeria
Propionibacterium

Gram-negative examples
Bacteroides
Citrobacter
Enterobacter
Escherichia
Klebsiella
Pseudomonas
Proteus
Salmonella
Serratia
Shigella
Vibrio
Yersinia

Coccobacillus
A coccobacillus (plural coccobacilli), or bacillococcus, is a type of bacterium with a shape intermediate between cocci (spherical bacteria) and bacilli (rod-shaped bacteria); coccobacilli are therefore very short rods that may be mistaken for cocci. Haemophilus influenzae, Gardnerella vaginalis and Chlamydia trachomatis are coccobacilli. Aggregatibacter actinomycetemcomitans is a gram-negative coccobacillus prevalent in subgingival plaque, and Acinetobacter strains may grow on solid media as coccobacilli. Bordetella pertussis is a gram-negative coccobacillus responsible for whooping cough. Yersinia pestis, the bacterium that causes plague, is also a coccobacillus, as is Coxiella burnetii. Bacteria of the genus Brucella are medically important coccobacilli that cause brucellosis. Haemophilus ducreyi, another medically important gram-negative coccobacillus, causes the sexually transmitted disease chancroid, which occurs mainly in developing countries.

Spiral
Spiral bacteria form another major category of bacterial cell morphology. They can be sub-classified as spirilla, spirochetes or vibrios based on the number of twists per cell, cell thickness, cell flexibility and motility. Bacteria evolve specific traits, including shape, to survive in their environments, and the ability to change shape can affect how they cause disease. Researchers have discovered a protein that allows the bacterium Vibrio cholerae to morph into a corkscrew shape, which likely helps it twist into, and then escape from, the protective mucus that lines the inside of the gut.

Spirillum
A spirillum (plural spirilla) is a rigid, spiral, gram-negative bacterium that frequently bears external amphitrichous or lophotrichous flagella. Examples include:
Members of the genus Spirillum
Campylobacter species, such as Campylobacter jejuni, a foodborne pathogen that causes campylobacteriosis
Helicobacter species, such as Helicobacter pylori, a cause of peptic ulcers

Spirochetes
A spirochete (plural spirochetes) is a very thin, elongated, flexible, spiral bacterium that is motile by means of internal periplasmic flagella located inside the outer membrane. Spirochetes comprise the phylum Spirochaetes. Owing to their morphology, spirochetes are difficult to Gram-stain but may be visualized using dark-field microscopy or the Warthin–Starry stain. Examples include:
Leptospira species, which cause leptospirosis
Borrelia species, such as Borrelia burgdorferi, a tick-borne bacterium that causes Lyme disease
Treponema species, such as Treponema pallidum, subspecies of which cause the treponematoses, including syphilis

Helical
Helicobacter species are helically shaped, the most common example being Helicobacter pylori. A helical shape is thought to be better suited to movement through viscous media.

See also
Bacterial morphological plasticity
Ferdinand Cohn, who first named the basic shapes of bacteria