Ecology
Ecology is the natural science of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization, spanning micro-level phenomena (e.g., cells) to planetary-scale phenomena (e.g., the biosphere). Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations, which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explains the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open with regard to broader-scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer-scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity, and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services, which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services, and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
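Many indices exist for quantifying diversity; as one common illustration (the text above does not single out any particular measure), the Shannon index summarizes both species richness and evenness from survey counts. A minimal sketch, with hypothetical count data:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), where p_i is the
    proportion of individuals belonging to species i. Higher values indicate
    a richer and/or more evenly distributed community."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Hypothetical survey counts for two communities with the same richness:
even_community = [25, 25, 25, 25]   # four species, evenly abundant
skewed_community = [85, 5, 5, 5]    # four species, one dominant

print(shannon_index(even_community))    # ~1.386 (= ln 4, the maximum for four species)
print(shannon_index(skewed_community))  # ~0.588, lower because one species dominates
```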
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
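The Hutchinsonian definition lends itself to a simple computational reading: treat each environmental variable as one axis of a hyperspace, and the niche as the region of that space where the species has positive fitness. A minimal sketch under that reading, using an axis-aligned region; the variable names and tolerance limits are hypothetical (real niche models estimate them from field data):

```python
# A minimal sketch of the Hutchinsonian niche as an axis-aligned region of
# "environmental space". All axes and tolerance values here are hypothetical.

fundamental_niche = {
    "temperature_c": (5.0, 30.0),   # conditions permitting positive fitness
    "salinity_ppt": (0.0, 5.0),
    "ph": (6.0, 8.5),
}

def within_niche(site_conditions, niche):
    """True if every environmental variable at the site falls inside the
    species' tolerated interval on that niche axis."""
    return all(lo <= site_conditions[axis] <= hi
               for axis, (lo, hi) in niche.items())

pond = {"temperature_c": 18.0, "salinity_ppt": 0.5, "ph": 7.2}
estuary = {"temperature_c": 14.0, "salinity_ppt": 22.0, "ph": 8.0}

print(within_niche(pond, fundamental_niche))     # True
print(within_niche(estuary, fundamental_niche))  # False: salinity out of range
```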
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dt = bN - dN = (b - d)N = rN
where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = rN(t) - αN(t)^2
where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size will grow to approach equilibrium, N = r/α, where the rates of increase and crowding are balanced. A common, analogous model fixes the equilibrium, r/α, as K, which is known as the "carrying capacity."
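A minimal numerical sketch of the two growth models above, integrated with a simple Euler step; the parameter values are illustrative rather than drawn from any particular study:

```python
# Integrate dN/dt = rN - alpha*N^2 with a forward-Euler step. With alpha = 0
# this is exponential (Malthusian) growth; with alpha > 0 it is the Verhulst
# logistic model, which approaches the equilibrium N = r/alpha (i.e., K).

def simulate(n0, r, alpha=0.0, dt=0.01, years=20):
    n = n0
    for _ in range(int(years / dt)):
        n += (r * n - alpha * n * n) * dt
    return n

print(simulate(n0=10, r=0.5))               # unchecked exponential growth
print(simulate(n0=10, r=0.5, alpha=0.001))  # levels off near K = r/alpha = 500
```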
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
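As one illustration of the matrix-algebra techniques mentioned above, a Leslie matrix projects an age-structured population one time step forward using age-specific fecundity and survival. A minimal sketch with hypothetical vital rates:

```python
# One step of n(t+1) = L n(t), written out without an external linear-algebra
# library: the first row of the Leslie matrix L holds fecundities, and the
# subdiagonal holds survival probabilities. All vital rates are hypothetical.

fecundity = [0.0, 1.2, 3.0]   # offspring per individual in each age class
survival = [0.5, 0.8]         # probability of advancing to the next age class

def project(age_classes):
    newborns = sum(f * n for f, n in zip(fecundity, age_classes))
    survivors = [s * n for s, n in zip(survival, age_classes)]
    return [newborns] + survivors

population = [100.0, 50.0, 20.0]
for year in range(5):
    population = project(population)
print(population)  # age structure after five projected time steps
```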
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
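The patch-occupancy model commonly associated with the 1969 definition quoted above tracks only the fraction p of habitat patches that are occupied, balancing colonization against local extinction. A minimal sketch with illustrative rate values:

```python
# Integrate dp/dt = c*p*(1 - p) - e*p, where p is the fraction of patches
# occupied, c the colonization rate, and e the local extinction rate. The
# metapopulation persists (p -> 1 - e/c) only when colonization outpaces
# extinction (c > e). Parameter values are illustrative.

def occupied_fraction(p0, c, e, dt=0.01, steps=5000):
    p = p0
    for _ in range(steps):
        p += (c * p * (1.0 - p) - e * p) * dt
    return p

print(occupied_fraction(p0=0.1, c=0.4, e=0.1))  # -> ~0.75 = 1 - e/c
print(occupied_fraction(p0=0.1, c=0.1, e=0.4))  # -> ~0.0, regional extinction
```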
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. A simplified, linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map used to illustrate and study pathways of energy and material flows.
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Establishing feeding relations requires extensive investigation, e.g., into the gut contents of organisms, which can be difficult to decipher; alternatively, stable isotopes can be used to trace the flow of nutrients and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life.
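A food web can be sketched as a directed graph from each consumer to its resources; counting a consumer's feeding links distinguishes the generalists from the specialists described above. A minimal sketch with a hypothetical five-species web:

```python
# A food web represented as a directed graph (consumer -> resources). The
# species and links here are hypothetical, chosen only to illustrate the
# generalist/specialist distinction drawn in the text.

food_web = {
    "grass": [],
    "grasshopper": ["grass"],
    "vole": ["grass"],
    "omnivorous_bird": ["grass", "grasshopper", "vole"],  # many weak links
    "weasel": ["vole"],                                   # one strong link
}

for consumer, resources in food_web.items():
    if resources:
        role = "generalist" if len(resources) > 1 else "specialist"
        print(f"{consumer}: {len(resources)} feeding link(s) ({role})")
```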
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
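Aggregating species biomass by trophic level makes the pyramid shape explicit: summed biomass shrinks at each step away from the producers. A minimal sketch with hypothetical species, level assignments, and biomass values:

```python
# Sort species into trophic levels and sum biomass per level, as described
# above. Species names, levels, and biomass values (g/m^2) are hypothetical.

species = [
    ("grass", 1, 800.0), ("shrub", 1, 450.0),       # primary producers
    ("rabbit", 2, 90.0), ("grasshopper", 2, 35.0),  # primary consumers
    ("fox", 3, 9.0),                                # secondary consumer
]

pyramid = {}
for name, level, biomass in species:
    pyramid[level] = pyramid.get(level, 0.0) + biomass

# Biomass declines toward the top, producing the 'pyramid' shape:
for level in sorted(pyramid):
    print(f"trophic level {level}: {pyramid[level]:7.1f} g/m^2")
```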
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics and other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature: the removal of a keystone species can result in a community collapse, just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity also stems from the interplay among levels of biological organization, as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, otherwise captured in the expression (attributed to Aristotle) 'the whole is greater than the sum of its parts'.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost to their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the founding of the journal Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems, which builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity and benefit to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics: every commodity, and the capacity for exchange itself, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has expanded alongside industrial investment in restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of its wetland environments by implementing soil amendments that improve groundwater storage and flow, and by trimming or removing vegetation that could cause harm to water quality. Ecological science is used in the methods of sustainable harvesting, disease and fire outbreak management, fisheries stock management, the integration of land use with protected areas and communities, and conservation in complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gasses that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated by and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressure place physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While fire's relation to the ecology of plants has long been recognized, Charles Cooper brought attention to forest fires and the ecology of fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes, controlling the spread of fire near their homes or lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure open new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) release or germinate their seeds only after a fire or exposure to certain compounds in smoke; the environmentally triggered release of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic matter that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed on and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and feed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, how they flow, and how they are recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplifies and ultimately regulates the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing on our understanding of global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, during the early-to-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas with increased rates of soil decomposition activity that raise methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition, and respiration in soils and wetlands, producing significant climate feedbacks and globally altered biogeochemical cycles.
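The 100-year weighting can be illustrated with simple arithmetic. The following Python sketch (with an invented emission mass, for illustration only) converts a methane release into CO2-equivalent tonnes using the factor of 23 quoted above:

```python
def co2_equivalent_tonnes(ch4_tonnes, gwp_100yr=23.0):
    """Weight a methane emission by its 100-year global warming
    potential to express it in CO2-equivalent tonnes."""
    return ch4_tonnes * gwp_100yr

# Hypothetical example: 1,000 tonnes of CH4 released from thawing
# permafrost has roughly the warming effect of 23,000 tonnes of CO2
# over a 100-year horizon.
print(co2_equivalent_tonnes(1_000))  # 23000.0
```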
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static, unchanging things while varieties were seen as aberrations of an idealized type. This contrasts with the modern understanding of ecological theory, where varieties are viewed as the real phenomena of interest, having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as balance and regulation in nature, can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and behavior, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel and Eugenius Warming, two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the mid-18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development
Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology
Notes
References
External links
The Nature Education Knowledge Project: Ecology
Evolutionary biology
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, genetic variations affect the phenotypes (physical characteristics) of organisms. Variations that give some organisms an advantage are more likely to be passed on to their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided up in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what were once seen as the major divisions of life. A third way is by approach, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolutionary biology to create subfields like evolutionary ecology and evolutionary developmental biology.
More recently, the merger of biological and applied sciences has given birth to new fields that extend evolutionary biology, including evolutionary robotics, engineering, algorithms, economics, and architecture. The basic mechanisms of evolution are applied directly or indirectly to come up with novel designs or solve problems that are difficult to solve otherwise. The research generated in these applied fields contributes to progress, especially through work on evolution in computer science and engineering fields such as mechanical engineering.
Different types of evolution
Adaptive evolution
Adaptive evolution refers to evolutionary changes that happen in response to changes in the environment, making the organism better suited to its habitat. This change increases the organism's chances of survival and reproduction (this can be referred to as an organism's fitness). For example, Darwin's finches on the Galápagos Islands developed differently shaped beaks suited to the food sources available on each island. Adaptive evolution can also be convergent evolution if two distantly related species live in similar environments facing similar pressures.
Convergent evolution
Convergent evolution is the process by which unrelated or distantly related organisms evolve similar characteristics independently. This type of evolution creates analogous structures, which have a similar function, structure, or form in the two species. For example, sharks and dolphins look alike but are not closely related. Likewise, birds, flying insects, and bats all have the ability to fly, yet none of them are closely related to each other. Such similar traits tend to evolve under similar environmental pressures.
Divergent evolution
Divergent evolution is the process of speciation. This can happen in several ways:
Allopatric speciation is when species are separated by a physical barrier that separates the population into two groups. Evolutionary mechanisms such as genetic drift and natural selection can then act independently on each population.
Peripatric speciation is a type of allopatric speciation that occurs when one of the new populations is considerably smaller than the other initial population. This leads to the founder effect, and the population can have different allele frequencies and phenotypes than the original population. These small populations are also more likely to see effects from genetic drift.
Parapatric speciation resembles allopatric speciation, but occurs when populations diverge without a complete physical barrier separating them. It tends to occur when a population of a species is very large and occupies a vast environment.
Sympatric speciation is when a new species or subspecies sprouts from the original population while still occupying the same small environment, and without any physical barriers separating them from members of their original population. There is scientific debate as to whether sympatric speciation actually exists.
Artificial speciation is when scientists purposefully cause new species to emerge to use in laboratory procedures.
Coevolution
The mutual evolutionary influence of two closely associated species is known as coevolution. When two or more species evolve in company with each other, each species adapts to changes in the others. This type of evolution often happens in species that have symbiotic relationships. Predator-prey coevolution is the most common type: the predator must evolve to become a more effective hunter because there is selective pressure on the prey to steer clear of capture, while the prey in turn needs to develop better survival strategies. The Red Queen hypothesis describes such ongoing predator-prey interactions. Relationships between pollinating insects like bees and flowering plants, and between herbivores and plants, are also common examples of diffuse or guild coevolution.
Mechanism: The process of evolution
The mechanisms of evolution focus mainly on mutation, genetic drift, gene flow, non-random mating, and natural selection.
Mutation: Mutation is a change in the DNA sequence within a gene or chromosome of an organism. Most mutations are neutral (neither harming nor benefiting the organism) or deleterious, but some are beneficial.
Genetic drift: Genetic drift results from sampling error between generations: chance events in nature change allele frequencies within a population. It has a much stronger effect on small populations than large ones (a minimal simulation follows this list).
Gene flow: Gene flow is the transfer of genetic material from the gene pool of one population to another. Migration of individuals between populations changes allele frequencies in both.
Natural selection: The survival and reproductive rate of a species depends on the adaptedness of the species to its environment. This process is called natural selection. Individuals with certain traits in a population have higher survival and reproductive rates than others (fitness), and they pass on these genetic features to their offspring.
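The sampling-error mechanism behind genetic drift can be made concrete with a minimal Wright-Fisher-style simulation. The Python sketch below uses arbitrary population sizes and an arbitrary starting allele frequency, purely for illustration:

```python
import random

def wright_fisher(pop_size, p0, generations, seed=1):
    """Simulate neutral genetic drift at a single biallelic locus.

    Each generation, 2N allele copies are drawn from the previous
    generation's allele frequency (the Wright-Fisher model), so the
    frequency changes by sampling error alone.
    """
    random.seed(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Draw 2N gene copies; chance alone shifts the frequency.
        copies = sum(random.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        trajectory.append(p)
    return trajectory

# Drift is much stronger in small populations: alleles drift toward
# fixation or loss far sooner when N = 20 than when N = 2000.
small = wright_fisher(pop_size=20, p0=0.5, generations=100)
large = wright_fisher(pop_size=2000, p0=0.5, generations=100)
print(f"final frequency, N=20:   {small[-1]:.3f}")
print(f"final frequency, N=2000: {large[-1]:.3f}")
```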
Evolutionary developmental biology
In evolutionary developmental biology, scientists look at how the different processes in development play a role in how a specific organism reaches its current body plan. The genetic regulation of ontogeny and the phylogenetic process is what allows for this kind of understanding of biology to be possible. By looking at different processes during development, and working through the evolutionary tree, one can determine at which point a specific structure came about. For example, the full set of three germ layers is absent in cnidarians and ctenophores but present in worms, being more or less developed depending on the kind of worm itself. Other structures, like Hox genes and sensory organs such as eyes, can also be traced with this practice.
Phylogenetic trees
Phylogenetic trees are representations of genetic lineage: figures that show how closely related species are to one another. They are constructed by analyzing physical traits as well as similarities in DNA between species. A molecular clock can then be used to estimate when the species diverged. An example of a phylogeny is the tree of life.
Homologs
Genes that share ancestry are homologs. If a speciation event occurs and one gene ends up in two different species, the genes are orthologous. If a gene is duplicated within a single species, the copies are paralogs. A molecular clock can be used to estimate when these events occurred.
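Under a strict molecular clock with substitution rate r per site per year, two lineages accumulate a divergence d of roughly 2rt, so the divergence time can be estimated as t = d / (2r). A minimal Python sketch, with hypothetical numbers:

```python
def divergence_time(prop_sites_differing, rate_per_site_per_year):
    """Estimate time since two lineages split under a strict molecular clock.

    Substitutions accumulate independently along both lineages, so the
    observed divergence d between two sequences corresponds to 2 * r * t.
    """
    return prop_sites_differing / (2 * rate_per_site_per_year)

# Hypothetical numbers: 2% sequence divergence and a clock rate of
# 1e-9 substitutions per site per year imply a split about 10 Mya.
t = divergence_time(0.02, 1e-9)
print(f"estimated divergence time: {t / 1e6:.0f} million years")
```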
History
The idea of evolution by natural selection was proposed by Charles Darwin in 1859, but evolutionary biology, as an academic discipline in its own right, emerged during the period of the modern synthesis in the 1930s and 1940s. It was not until the 1980s that many universities had departments of evolutionary biology. In the United States, many universities have created departments of molecular and cell biology or ecology and evolutionary biology, in place of the older departments of botany and zoology. Palaeontology is often grouped with earth science.
Microbiology too is becoming an evolutionary discipline now that microbial physiology and genomics are better understood. The quick generation time of bacteria and viruses such as bacteriophages makes it possible to explore evolutionary questions.
Many biologists have contributed to shaping the modern discipline of evolutionary biology. Theodosius Dobzhansky and E. B. Ford established an empirical research programme. Ronald Fisher, Sewall Wright, and J. B. S. Haldane created a sound theoretical framework. Ernst Mayr in systematics, George Gaylord Simpson in paleontology and G. Ledyard Stebbins in botany helped to form the modern synthesis. James Crow, Richard Lewontin, Dan Hartl, Marcus Feldman, and Brian Charlesworth trained a generation of evolutionary biologists.
Current research topics
Current research in evolutionary biology covers diverse topics and incorporates ideas from diverse areas, such as molecular genetics and computer science.
First, some fields of evolutionary research try to explain phenomena that were poorly accounted for in the modern evolutionary synthesis. These include speciation, the evolution of sexual reproduction, the evolution of cooperation, the evolution of ageing, and evolvability.
Second, some evolutionary biologists ask the most straightforward evolutionary question: "what happened and when?". This includes fields such as paleobiology, where paleobiologists and evolutionary biologists, including Thomas Halliday and Anjali Goswami, have studied the evolution of early mammals going far back in time through the Mesozoic and Cenozoic eras (between 299 million and 12,000 years ago). Other fields related to this general exploration of evolution ("what happened and when?") include systematics and phylogenetics.
Third, the modern evolutionary synthesis was devised at a time when nobody understood the molecular basis of genes. Today, evolutionary biologists try to determine the genetic architecture of interesting evolutionary phenomena such as adaptation and speciation. They seek answers to questions such as how many genes are involved, how large are the effects of each gene, how interdependent are the effects of different genes, what do the genes do, and what changes happen to them (e.g., point mutations vs. gene duplication or even genome duplication). They try to reconcile the high heritability seen in twin studies with the difficulty in finding which genes are responsible for this heritability using genome-wide association studies.
One challenge in studying genetic architecture is that the classical population genetics that catalysed the modern evolutionary synthesis must be updated to take into account modern molecular knowledge. This requires a great deal of mathematical development to relate DNA sequence data to evolutionary theory as part of a theory of molecular evolution. For example, biologists try to infer which genes have been under strong selection by detecting selective sweeps.
Fourth, the modern evolutionary synthesis involved agreement about which forces contribute to evolution, but not about their relative importance. Current research seeks to determine this. Evolutionary forces include natural selection, sexual selection, genetic drift, genetic draft, developmental constraints, mutation bias and biogeography.
This evolutionary approach is key to much current research in organismal biology and ecology, such as life history theory. Annotation of genes and their function relies heavily on comparative approaches. The field of evolutionary developmental biology ("evo-devo") investigates how developmental processes work, and compares them in different organisms to determine how they evolved.
Many physicians do not have enough background in evolutionary biology, making it difficult to use it in modern medicine. However, there are efforts to gain a deeper understanding of disease through evolutionary medicine and to develop evolutionary therapies.
Drug resistance today
Evolution plays a role in drug resistance; for example, in how HIV becomes resistant to medications and to the body's immune system. HIV's resistance arises through natural selection: the few virus particles that survive the immune response reproduce, and their descendants inherit that resistance. Drug resistance also causes many problems for patients, such as worsening sickness or sickness that mutates into something that can no longer be cured with medication. Without the proper medicine, a sickness can be the death of a patient, and if the patient's body is resistant to a number of drugs, then the right medicine becomes harder and harder to find. Not completing a prescribed full course of antibiotics is also a practice that can cause the targeted bacteria to evolve resistance and continue to spread in the body. When the full dosage of the medication does not enter the body and perform its proper job, the bacteria that survive the initial dosage continue to reproduce. This can make for another bout of sickness later on that is more difficult to cure, because the bacteria involved will be resistant to the first medication used. Taking the full course of prescribed medicine is thus regarded as a vital step in avoiding antibiotic resistance.
Individuals with chronic illnesses, especially those that can recur throughout a lifetime, are at greater risk of antibiotic resistance than others. Overuse of a drug or too high a dosage can weaken a patient's immune system while the illness evolves and grows stronger. For example, cancer patients may need stronger and stronger dosages of medication because of their weakened immune systems.
Journals
Some scientific journals specialise exclusively in evolutionary biology as a whole, including the journals Evolution, Journal of Evolutionary Biology, and BMC Evolutionary Biology. Some journals cover sub-specialties within evolutionary biology, such as the journals Systematic Biology, Molecular Biology and Evolution and its sister journal Genome Biology and Evolution, and Cladistics.
Other journals combine aspects of evolutionary biology with other related fields. For example, Molecular Ecology, Proceedings of the Royal Society of London Series B, The American Naturalist and Theoretical Population Biology have overlap with ecology and other aspects of organismal biology. Overlap with ecology is also prominent in the review journals Trends in Ecology and Evolution and Annual Review of Ecology, Evolution, and Systematics. The journals Genetics and PLoS Genetics overlap with molecular genetics questions that are not obviously evolutionary in nature.
See also
Comparative anatomy
Computational phylogenetics
Evolutionary computation
Evolutionary dynamics
Evolutionary neuroscience
Evolutionary physiology
On the Origin of Species
Macroevolution
Phylogenetic comparative methods
Quantitative genetics
Selective breeding
Taxonomy (biology)
Speculative evolution
References
External links
Evolution and Paleobotany at the Encyclopædia Britannica
Ecosystem ecology
Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals.
Ecosystem ecology examines physical and biological structures and how these characteristics interact with each other. Ultimately, this helps us understand how to maintain high-quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, the ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions.
Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water.
A forest and the stream flowing through it, for example, demonstrate several important aspects of ecosystems:
Ecosystem boundaries are often nebulous and may fluctuate in time
Organisms within ecosystems are dependent on ecosystem level biological and physical processes
Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity
These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear, even though decisions in one ecosystem will affect the other. We need a better understanding of the interactions and interdependencies of these ecosystems and the processes that maintain them before we can begin to address these questions.
Ecosystem ecology is an inherently interdisciplinary field of study. An individual ecosystem is composed of populations of organisms, interacting within communities, and contributing to the cycling of nutrients and the flow of energy. The ecosystem is the principal unit of study in ecosystem ecology.
Population, community, and physiological ecology provide many of the underlying biological mechanisms influencing ecosystems and the processes they maintain. The flow of energy and cycling of matter at the ecosystem level are often examined in ecosystem ecology, but, as a whole, this science is defined more by subject matter than by scale. Ecosystem ecology approaches organisms and abiotic pools of energy and nutrients as an integrated system, which distinguishes it from associated sciences such as biogeochemistry.
Biogeochemistry and hydrology focus on several fundamental ecosystem processes such as biologically mediated chemical cycling of nutrients and physical-biological cycling of water. Ecosystem ecology forms the mechanistic basis for regional or global processes encompassed by landscape-to-regional hydrology, global biogeochemistry, and earth system science.
History
Ecosystem ecology is philosophically and historically rooted in terrestrial ecology. The ecosystem concept has evolved rapidly during the last 100 years, with important ideas developed by Frederic Clements, a botanist who argued for specific definitions of ecosystems and held that physiological processes were responsible for their development and persistence. Although most of Clements's ecosystem definitions have been greatly revised, initially by Henry Gleason and Arthur Tansley, and later by contemporary ecologists, the idea that physiological processes are fundamental to ecosystem structure and function remains central to ecology.
Later work by Eugene Odum and Howard T. Odum quantified flows of energy and matter at the ecosystem level, thus documenting the general ideas proposed by Clements and his contemporary Charles Elton.
In this model, energy flows through the whole system were dependent on biotic and abiotic interactions of each individual component (species, inorganic pools of nutrients, etc.). Later work demonstrated that these interactions and flows applied to nutrient cycles, changed over the course of succession, and held powerful controls over ecosystem productivity. Transfers of energy and nutrients are innate to ecological systems regardless of whether they are aquatic or terrestrial. Thus, ecosystem ecology has emerged from important biological studies of plants, animals, terrestrial, aquatic, and marine ecosystems.
Ecosystem services
Ecosystem services are ecologically mediated functional processes essential to sustaining healthy human societies. Water provision and filtration, production of biomass in forestry, agriculture, and fisheries, and removal of greenhouse gases such as carbon dioxide (CO2) from the atmosphere are examples of ecosystem services essential to public health and economic opportunity. Nutrient cycling is a process fundamental to agricultural and forest production.
However, like most ecosystem processes, nutrient cycling is not an ecosystem characteristic which can be “dialed” to the most desirable level. Maximizing production in degraded systems is an overly simplistic solution to the complex problems of hunger and economic security. For instance, intensive fertilizer use in the midwestern United States has resulted in degraded fisheries in the Gulf of Mexico. Regrettably, a “Green Revolution” of intensive chemical fertilization has been recommended for agriculture in developed and developing countries. These strategies risk alteration of ecosystem processes that may be difficult to restore, especially when applied at broad scales without adequate assessment of impacts. Ecosystem processes may take many years to recover from significant disturbance.
For instance, large-scale forest clearance in the northeastern United States during the 18th and 19th centuries has altered soil texture, dominant vegetation, and nutrient cycling in ways that impact forest productivity in the present day. An appreciation of the importance of ecosystem function in maintenance of productivity, whether in agriculture or forestry, is needed in conjunction with plans for restoration of essential processes. Improved knowledge of ecosystem function will help to achieve long-term sustainability and stability in the poorest parts of the world.
Operation
Biomass productivity is one of the most apparent and economically important ecosystem functions. Biomass accumulation begins at the cellular level via photosynthesis. Photosynthesis requires water, and consequently global patterns of annual biomass production are correlated with annual precipitation. Amounts of productivity also depend on the overall capacity of plants to capture sunlight, which is directly correlated with plant leaf area and nitrogen (N) content.
Net primary productivity (NPP) is the primary measure of biomass accumulation within an ecosystem. Net primary productivity can be calculated by a simple formula where the total amount of productivity is adjusted for total productivity losses through maintenance of biological processes:
NPP = GPP – Rproducer
where GPP is gross primary productivity and Rproducer is photosynthate (carbon) lost via cellular respiration.
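Expressed as code, the NPP budget is a simple subtraction. The Python sketch below uses hypothetical values (in grams of carbon per square metre per year) purely for illustration:

```python
def net_primary_productivity(gpp, autotroph_respiration):
    """NPP = GPP - Rproducer, with all quantities in the same units
    (e.g., grams of carbon per square metre per year)."""
    return gpp - autotroph_respiration

# Hypothetical temperate-forest values, for illustration only:
# GPP of 2000 g C m^-2 yr^-1 and producer respiration of 1200.
npp = net_primary_productivity(gpp=2000.0, autotroph_respiration=1200.0)
print(f"NPP = {npp:.0f} g C m^-2 yr^-1")  # -> 800
```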
NPP is difficult to measure, but a technique known as eddy covariance has shed light on how natural ecosystems influence the atmosphere. Measurements of CO2 concentration at Mauna Loa, Hawaii, from 1987 to 1990 show both seasonal and annual changes: CO2 concentration steadily increased, but within-year variation has been greater than the annual increase since measurements began in 1957.
These variations were thought to be due to seasonal uptake of CO2 during summer months. Techniques for assessing ecosystem NPP have confirmed that these seasonal variations are driven by seasonal changes in CO2 uptake by vegetation. This has led many scientists and policymakers to speculate that ecosystems can be managed to ameliorate problems with global warming. Such management may include reforesting or altering forest harvest schedules in many parts of the world.
Decomposition and nutrient cycling
Decomposition and nutrient cycling are fundamental to ecosystem biomass production. Most natural ecosystems are nitrogen (N) limited and biomass production is closely correlated with N turnover.
Typically, external input of nutrients is very low, and efficient recycling of nutrients maintains productivity. Decomposition of plant litter accounts for the majority of nutrients recycled through ecosystems. Rates of plant litter decomposition are highly dependent on litter quality; a high concentration of phenolic compounds, especially lignin, in plant litter has a retarding effect on litter decomposition. More complex C compounds are decomposed more slowly and may take many years to completely break down. Decomposition is typically described with exponential decay and has been related to the mineral concentrations, especially manganese, in the leaf litter.
Globally, rates of decomposition are mediated by litter quality and climate. Ecosystems dominated by plants with low lignin concentrations often have rapid rates of decomposition and nutrient cycling (Chapin et al. 1982). Simple carbon (C) compounds are preferentially metabolized by decomposer microorganisms, which results in rapid initial rates of decomposition; this behaviour is often summarized in models that depend on constant rates of decay, the so-called "k" values. In addition to litter quality and climate, the activity of soil fauna is very important.
However, such models do not reflect the simultaneous linear and non-linear decay processes that likely occur during decomposition. For instance, proteins, sugars, and lipids decompose exponentially, but lignin decays at a more linear rate. Thus, litter decay is inaccurately predicted by simplistic models.
Simple alternative models predict significantly more rapid initial decomposition than constant-k models. A better understanding of decomposition models is an important research area of ecosystem ecology, because this process is closely tied to nutrient supply and the overall capacity of ecosystems to sequester CO2 from the atmosphere.
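The difference between a constant-k model and a multi-pool alternative can be sketched in Python. The pool fractions and rate constants below are hypothetical, chosen only to show the faster early mass loss and slower late-stage loss of a two-pool model:

```python
import math

def single_pool(mass0, k, t):
    """Standard single-pool model: all litter decays at one constant rate k."""
    return mass0 * math.exp(-k * t)

def two_pool(mass0, labile_frac, k_labile, k_recalcitrant, t):
    """Two-pool alternative: labile compounds (sugars, proteins) decay
    quickly while recalcitrant compounds (lignin) decay slowly."""
    labile = mass0 * labile_frac * math.exp(-k_labile * t)
    recalcitrant = mass0 * (1 - labile_frac) * math.exp(-k_recalcitrant * t)
    return labile + recalcitrant

# Hypothetical rates (per year). Early mass loss is faster in the
# two-pool model, yet its recalcitrant residue persists longer.
for t in (0.5, 1, 2, 5):
    print(t, round(single_pool(100, 0.5, t), 1),
          round(two_pool(100, 0.6, 2.0, 0.1, t), 1))
```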
Trophic dynamics
Trophic dynamics refers to the process of energy and nutrient transfer between organisms. Trophic dynamics is an important part of the structure and function of ecosystems. In a classic study of an ecosystem at Silver Springs, Florida, energy gained by primary producers (plants, P) is consumed by herbivores (H), which are consumed by carnivores (C), which are themselves consumed by "top-carnivores" (TC).
One of the most obvious patterns is that as one moves up to higher trophic levels (i.e., from plants to top-carnivores), the total amount of energy decreases. Plants exert a "bottom-up" control on the energy structure of ecosystems by determining the total amount of energy that enters the system.
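A common rule of thumb, broadly consistent with Lindeman's trophic-efficiency estimates, is that only on the order of 10% of the energy at one trophic level reaches the next. The Python sketch below assumes that fixed 10% efficiency and an invented amount of primary production:

```python
def trophic_energy(primary_production, efficiency=0.10, levels=4):
    """Energy reaching each trophic level, assuming a fixed fraction
    of production is transferred upward at every step."""
    energy = [primary_production]
    for _ in range(levels - 1):
        energy.append(energy[-1] * efficiency)
    return energy

# With 10,000 units fixed by plants and a 10% transfer efficiency,
# top carnivores receive only 10 units: energy declines steeply upward.
for level, e in zip(("P", "H", "C", "TC"), trophic_energy(10_000)):
    print(f"{level}: {e:,.0f}")
```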
Predators, however, can also influence the structure of lower trophic levels from the top down. These influences can dramatically shift the dominant species in terrestrial and marine systems. The interplay and relative strength of top-down versus bottom-up controls on ecosystem structure and function is an important area of research in the greater field of ecology.
Trophic dynamics can strongly influence rates of decomposition and nutrient cycling in time and in space. For example, herbivory can increase litter decomposition and nutrient cycling via direct changes in litter quality and altered dominant vegetation. Insect herbivory has been shown to increase rates of decomposition and nutrient turnover due to changes in litter quality and increased frass inputs.
However, insect outbreaks do not always increase nutrient cycling. Stadler showed that C-rich honeydew produced during aphid outbreaks can result in increased N immobilization by soil microbes, thus slowing down nutrient cycling and potentially limiting biomass production. North Atlantic marine ecosystems have been greatly altered by overfishing of cod: cod stocks crashed in the 1990s, which resulted in increases in their prey, such as shrimp and snow crab. Human intervention in ecosystems has resulted in dramatic changes to ecosystem structure and function. These changes are occurring rapidly and have unknown consequences for economic security and human well-being.
Applications and importance
Lessons from two Central American cities
The biosphere has been greatly altered by the demands of human societies. Ecosystem ecology plays an important role in understanding and adapting to the most pressing current environmental problems. Restoration ecology and ecosystem management are closely associated with ecosystem ecology. Restoring highly degraded resources depends on integration of functional mechanisms of ecosystems.
Without these functions intact, economic value of ecosystems is greatly reduced and potentially dangerous conditions may develop in the field. For example, areas within the mountainous western highlands of Guatemala are more susceptible to catastrophic landslides and crippling seasonal water shortages due to loss of forest resources. In contrast, cities such as Totonicapán that have preserved forests through strong social institutions have greater local economic stability and overall greater human well-being.
This situation is striking considering that these areas are close to each other, the majority of inhabitants are of Mayan descent, and the topography and overall resources are similar. This is a case of two groups of people managing resources in fundamentally different ways. Ecosystem ecology provides the basic science needed to avoid degradation and to restore ecosystem processes that provide for basic human needs.
See also
Biogeochemistry
Community ecology
Earth system science
Holon (philosophy)
Landscape ecology
Systems ecology
MuSIASEM
References
Physiology
Physiology is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology.
Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases.
The Nobel Prize in Physiology or Medicine is awarded by the Nobel Assembly at the Karolinska Institute for exceptional scientific achievements in physiology related to the field of medicine.
Foundations
Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines:
Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism are often dictated by one another.
Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms.
Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment.
Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype.
Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment.
Subdisciplines
There are many ways to categorize the subdisciplines of physiology:
based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology
based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology
based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology
based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied (e.g., comparative physiology)
Subdisciplines by level of organisation
Cell physiology
Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism.
Subdisciplines by taxa
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology.
Animal physiology
Human physiology
Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing.
It seeks to understand the mechanisms that keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect of such interactions within plants as well as animals. Integration, a biological basis of the study of physiology, refers to the overlap of many functions of the systems of the human body, as well as its accompanying form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical.
Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals.
Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum.
Subdisciplines by research objective
Comparative physiology
Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms.
History
The classical era
The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (c. 130–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, or in the body as a whole. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also played off of Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; choleric is connected to yellow bile; and melancholic corresponds with black bile. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology. And for the next 1,400 years, Galenic physiology was a powerful and influential tool in medicine.
Early modern period
Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey are credited with making important discoveries in the circulation of the blood. In the 1610s, Santorio Santorio was the first to use a device to measure the pulse rate (the pulsilogium), and a thermoscope to measure temperature.
In 1791 Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration to complete the Bell–Magendie law.
In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of the physiological division of labor, which allowed scientists to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (called by him appareils).
In 1858, Joseph Lister studied the cause of blood coagulation and inflammation that resulted after previous injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result, decreased the death rate from surgery by a substantial amount.
The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences."
In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli.
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the cell theory of Matthias Schleiden and Theodor Schwann, which radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of the milieu intérieur (internal environment), which would later be taken up and championed as "homeostasis" by the American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated"; in other words, the body's ability to regulate its internal environment. William Beaumont was the first American to put physiology to practical medical use.
Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, building on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on the actions of cells; in the 20th century this field was renamed cell biology.
Late modern period
In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline.
In 1920, August Krogh won the Nobel Prize for discovering how blood flow is regulated in capillaries.
In 1954, Andrew Huxley and Hugh Huxley, alongside their research teams, discovered the sliding filaments in skeletal muscle, the basis of what is known today as the sliding filament theory.
Recently, there have been intense debates about the vitality of physiology as a discipline (is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, one that can assemble data from many different domains into a coherent framework.
Notable physiologists
Women in physiology
Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine.
Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society."
Prominent women physiologists include:
Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975.
Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen, including the phosphate-containing form of glucose (glucose 1-phosphate, the Cori ester) and its role in eukaryotic energy metabolism. They also discovered the Cori cycle, also known as the lactic acid cycle, in which lactic acid produced by muscle tissue via lactic acid fermentation is transported to the liver and converted back into glucose.
Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition. McClintock is the only woman to have received an unshared Nobel Prize in Physiology or Medicine.
Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and viral herpes.
Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system.
Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS).
Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase.
See also
Outline of physiology
Biochemistry
Biophysics
Cytoarchitecture
Defense physiology
Ecophysiology
Exercise physiology
Fish physiology
Insect physiology
Human body
Molecular biology
Metabolome
Neurophysiology
Pathophysiology
Pharmacology
Physiome
American Physiological Society
International Union of Physiological Sciences
The Physiological Society
Brazilian Society of Physiology
References
Bibliography
Human physiology
Widmaier, E.P., Raff, H., Strang, K.T. Vander's Human Physiology, 11th ed. McGraw-Hill, 2009.
Marieb, E.N. Essentials of Human Anatomy and Physiology, 10th ed. Benjamin Cummings, 2012.
Animal physiology
Hill, R.W., Wyse, G.A., Anderson, M. Animal Physiology, 3rd ed. Sinauer Associates, Sunderland, 2012.
Moyes, C.D., Schulte, P.M. Principles of Animal Physiology, 2nd ed. Pearson/Benjamin Cummings, Boston, MA, 2008.
Randall, D., Burggren, W., and French, K. Eckert Animal Physiology: Mechanism and Adaptation, 5th ed. W.H. Freeman and Company, 2002.
Schmidt-Nielsen, K. Animal Physiology: Adaptation and Environment. Cambridge & New York: Cambridge University Press, 1997.
Withers, P.C. Comparative animal physiology. Saunders College Publishing, New York, 1992.
Plant physiology
Larcher, W. Physiological plant ecology (4th ed.). Springer, 2001.
Salisbury, F.B., Ross, C.W. Plant Physiology. Brooks/Cole Pub Co., 1992.
Taiz, L., Zeiger, E. Plant Physiology, 5th ed. Sinauer, Sunderland, Massachusetts, 2010.
Fungal physiology
Griffin, D.H. Fungal Physiology, 2nd ed. Wiley-Liss, New York, 1994.
Protistan physiology
Levandowsky, M. Physiological Adaptations of Protists. In: Cell physiology sourcebook: essentials of membrane biophysics. Amsterdam; Boston: Elsevier/AP, 2012.
Levandowsky, M., Hutner, S.H. (eds). Biochemistry and Physiology of Protozoa, 2nd ed. Volumes 1, 2, and 3. Academic Press, New York, NY, 1979.
Laybourn-Parry J. A Functional Biology of Free-Living Protozoa. Berkeley, California: University of California Press; 1984.
Algal physiology
Lobban, C.S., Harrison, P.J. Seaweed ecology and physiology. Cambridge University Press, 1997.
Stewart, W. D. P. (ed.). Algal Physiology and Biochemistry. Blackwell Scientific Publications, Oxford, 1974.
Bacterial physiology
El-Sharoud, W. (ed.). Bacterial Physiology: A Molecular Approach. Springer-Verlag, Berlin-Heidelberg, 2008.
Kim, B.H., Gadd, M.G. Bacterial Physiology and Metabolism. Cambridge, 2008.
Moat, A.G., Foster, J.W., Spector, M.P. Microbial Physiology, 4th ed. Wiley-Liss, Inc. New York, NY, 2002.
External links
physiologyINFO.org – public information site sponsored by the American Physiological Society
Systematics | Systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: phylogenetic trees, phylogenies). Phylogenies have two components: branching order (showing group relationships, graphically represented in cladograms) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms (biogeography). Systematics, in other words, is used to understand the evolutionary history of life on Earth.
The word systematics is derived from the Latin word systema (itself of Ancient Greek origin), meaning a systematic arrangement of organisms; Carl Linnaeus used Systema Naturae as the title of his book.
Branches and applications
In the study of biological systematics, researchers use the field's different branches to better understand the relationships between organisms. These branches in turn determine the applications and uses of modern-day systematics.
Biological systematics classifies species by using three specific branches. Numerical systematics, or biometry, uses biological statistics to identify and classify animals. Biochemical systematics classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus, organelles, and cytoplasm. Experimental systematics identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization all are considered evolutionary units.
With the specific branches, researchers are able to determine the applications and uses for modern-day systematics. These applications include:
Studying the diversity of organisms and the differentiation between extinct and living creatures. Biologists study the inferred relationships by constructing many different diagrams and "trees" (cladograms, phylogenetic trees, phylogenies, etc.).
Including the scientific names of organisms, species descriptions and overviews, taxonomic orders, and classifications of evolutionary and organism histories.
Explaining the biodiversity of the planet and its organisms; such systematic study is central to conservation.
Manipulating and controlling the natural world. This includes the practice of 'biological control', the intentional introduction of natural predators and disease.
Definition and relation with taxonomy
John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics".
In 1970 Michener et al. defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relationship to one another as follows:
Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above.
The term "taxonomy" was coined by Augustin Pyramus de Candolle while the term "systematic" was coined by Carl Linnaeus the father of taxonomy.
Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other.
For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms; according to this work, the terms originated in 1790, c. 1828, and 1888 respectively. Some claim that systematics alone deals specifically with relationships through time and that it can be synonymous with phylogenetics, broadly dealing with the inferred hierarchy of organisms. On that reading systematics would be a subset of taxonomy, as it is sometimes regarded, but others claim the inverse.
Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e. nomenclature) of organisms,
while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms.
Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist, a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them.
Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late 20th century onwards, it was superseded by cladistics, which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. Today's systematists generally make extensive use of molecular biology and of computer programs to study organisms.
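As a rough illustration of the similarity-based reasoning behind phenetics, and of the computer-assisted work mentioned above, the sketch below computes pairwise distances from short aligned sequences and clusters the taxa with UPGMA, a classic average-linkage method. The sequences and taxon names are invented for the example; this is a minimal sketch, not a production tool.

```python
# Minimal sketch of a phenetic analysis: overall similarity from aligned
# sequences, clustered with UPGMA (average linkage). Illustrative only.

def hamming(a: str, b: str) -> float:
    """Proportion of aligned sites that differ (a crude overall distance)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(dist: dict, names: list) -> str:
    """Repeatedly merge the two closest clusters; return a nested-name tree."""
    clusters = {n: 1 for n in names}             # cluster -> number of leaves
    d = {frozenset(p): v for p, v in dist.items()}
    while len(clusters) > 1:
        pair = min(d, key=d.get)                 # closest pair of clusters
        a, b = tuple(pair)
        merged = f"({a},{b})"
        size = clusters[a] + clusters[b]
        for c in clusters:
            if c in pair:
                continue
            # UPGMA: distance to the new cluster is the leaf-weighted average
            d[frozenset((merged, c))] = (
                clusters[a] * d[frozenset((a, c))]
                + clusters[b] * d[frozenset((b, c))]
            ) / size
        del clusters[a], clusters[b]
        d = {p: v for p, v in d.items() if a not in p and b not in p}
        clusters[merged] = size
    return next(iter(clusters))

# Hypothetical aligned sequences for four taxa
seqs = {"taxonA": "ACGTACGT", "taxonB": "ACGTACGA",
        "taxonC": "ACGAACTA", "taxonD": "TCGAACTA"}
names = list(seqs)
dist = {(x, y): hamming(seqs[x], seqs[y])
        for i, x in enumerate(names) for y in names[i + 1:]}
print(upgma(dist, names))  # e.g. ((taxonA,taxonB),(taxonC,taxonD))
```

Note that UPGMA groups taxa purely by overall similarity, making no distinction between shared ancestral and derived traits; that is exactly the limitation that led cladistics to supersede phenetics.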
Taxonomic characters
Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. Kinds of taxonomic characters include:
Morphological characters
General external morphology
Special structures (e.g. genitalia)
Internal morphology (anatomy)
Embryology
Karyology and other cytological factors
Physiological characters
Metabolic factors
Body secretions
Genic sterility factors
Molecular characters
Immunological distance
Electrophoretic differences
Amino acid sequences of proteins
DNA hybridization
DNA and RNA sequences
Restriction endonuclease analyses
Other molecular differences
Behavioral characters
Courtship and other ethological isolating mechanisms
Other behavior patterns
Ecological characters
Habit and habitats
Food
Seasonal variations
Parasites and hosts
Geographic characters
General biogeographic distribution patterns
Sympatric-allopatric relationship of populations
See also
Cladistics – a methodology in systematics
Evolutionary systematics – a school of systematics
Global biodiversity
Phenetics – a methodology in systematics that does not infer phylogeny
Phylogeny – the historical relationships between lineages of organism
16S ribosomal RNA – an intensively studied nucleic acid that has been useful in phylogenetics
Phylogenetic comparative methods – use of evolutionary trees in other studies, such as biodiversity, comparative biology, adaptation, or evolutionary mechanisms
References
Notes
Further reading
Brower, Andrew V. Z. and Randall T. Schuh. 2021. Biological Systematics: Principles and Applications, 3rd edn.
Simpson, Michael G. 2005. Plant Systematics.
Wiley, Edward O. and Bruce S. Lieberman. 2011. Phylogenetics: Theory and Practice of Phylogenetic Systematics, 2nd edn.
External links
Society of Australian Systematic Biologists
Society of Systematic Biologists
The Willi Hennig Society
Biophysics | Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) with both X-rays and neutrons (SAXS/SANS), are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
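As one concrete (and deliberately simplified) example of the single-neuron models referred to above, the sketch below integrates a leaky integrate-and-fire neuron, among the simplest biophysical descriptions of membrane voltage. All parameter values are illustrative assumptions, not drawn from any particular study.

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential V obeys
#   tau * dV/dt = -(V - V_rest) + R * I(t),
# and the neuron "spikes" and resets whenever V crosses a threshold.

import numpy as np

def simulate_lif(current, dt=0.1, tau=10.0, resistance=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Integrate the membrane potential (mV) for an input current trace (nA)."""
    v = np.full(len(current), v_rest)
    spike_times = []
    for t in range(1, len(current)):
        dv = (-(v[t - 1] - v_rest) + resistance * current[t - 1]) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:            # threshold crossing: emit a spike
            spike_times.append(t * dt)  # spike time in ms
            v[t] = v_reset              # reset the membrane after the spike
    return v, spike_times

# 500 ms of a constant 2 nA current step (5000 steps of 0.1 ms)
trace, spikes = simulate_lif(np.full(5000, 2.0))
print(f"{len(spikes)} spikes" + (f", first at {spikes[0]:.1f} ms" if spikes else ""))
```

Richer models in the same spirit, such as Hodgkin–Huxley, replace the single leak term with voltage-dependent ionic conductances but follow the same integrate-the-membrane-equation pattern.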
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
History
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
Focus as a subfield
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive, nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry.
Bioinformatics – sequence alignment, structural alignment, protein structure prediction.
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems. One example is the study of decohered isomers of DNA bases that may yield time-dependent base substitutions; such studies also suggest applications in quantum computing.
Agronomy and agriculture – Many biophysical techniques are unique to this field.
Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
See also
Biophysical Society
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Biophysical chemistry
European Biophysical Societies' Association
Mathematical and theoretical biology
Medical biophysics
Membrane biophysics
Molecular biophysics
Neurophysics
Physiomics
Virophysics
Single-particle trajectory
References
Sources
External links
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Evolution | Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.
In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.
All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today.
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
Heredity
Evolution in organisms occurs through changes in heritable traits—the characteristics of an organism that are passed on to its offspring. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.
Sources of variation
Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.
An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.
About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.
One example of mutation can be seen in wild boar piglets, which are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. Mutations in the melanocortin 1 receptor (MC1R), however, disrupt this pattern. The majority of pig breeds carry MC1R mutations that disrupt wild-type colour, with different mutations causing dominant black colouring.
Sex and recombination
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.
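The "two-fold" figure follows from simple bookkeeping (a textbook simplification, not a quotation from Maynard Smith). Suppose every female produces $k$ offspring. An asexual female leaves $k$ daughters, each carrying her entire genome, whereas a sexual female's offspring each carry only half of her genome, so she transmits the equivalent of $k/2$ genome copies:

$$\frac{\text{asexual transmission}}{\text{sexual transmission}} = \frac{k \cdot 1}{k \cdot \tfrac{1}{2}} = 2.$$

All else being equal, an asexual mutant arising in a sexual population would therefore roughly double in frequency each generation, which is why the maintenance of sex demands an offsetting benefit.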
Gene flow
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance: when one bacterium acquires resistance genes, it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. A larger-scale example is provided by the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
Epigenetics
Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
Evolutionary forces
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.
Natural selection
Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:
Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in that allele likely becoming rarer—it is "selected against."
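A standard one-locus model (a textbook formalisation, not stated in the text above) makes "selected for" quantitative. In a haploid population where allele A has relative fitness $1+s$ and allele a has fitness $1$, the frequency $p_t$ of A changes each generation as

$$p_{t+1} = \frac{p_t(1+s)}{1 + s\,p_t}.$$

For any $s > 0$ the frequency climbs toward fixation at 1; for $s < 0$ it declines toward loss at 0, which is the quantitative content of "selected against".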
Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. Nevertheless, the re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.
Natural selection most generally makes nature the measure against which individuals and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.
Genetic drift
Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
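A minimal simulation makes the sampling-error picture concrete. The sketch below implements the Wright–Fisher model, a standard idealisation not described in the text above: each generation, 2N gene copies are drawn binomially from the previous generation's allele frequency, so the frequency wanders until the allele is fixed or lost. The population size and random seed are illustrative choices.

```python
# Wright-Fisher genetic drift: allele frequency changes only by binomial
# sampling of 2N gene copies each generation (no selection, no mutation).

import numpy as np

rng = np.random.default_rng(42)

def drift_trajectory(p0=0.5, n=50, max_gen=10_000):
    """Follow a neutral allele's frequency until it is fixed (1.0) or lost (0.0)."""
    p, trajectory = p0, [p0]
    for _ in range(max_gen):
        if p in (0.0, 1.0):                   # fixation or loss: drift stops
            break
        p = rng.binomial(2 * n, p) / (2 * n)  # sampling error across 2N copies
        trajectory.append(p)
    return trajectory

runs = [drift_trajectory() for _ in range(1000)]
fixed = sum(run[-1] == 1.0 for run in runs)
print(f"allele fixed in {fixed}/1000 replicate populations")  # ~500 expected
```

Across many replicates the allele fixes in a fraction of runs roughly equal to its starting frequency (here about half), illustrating that chance alone decides the fate of a neutral allele.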
According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the raw number of individuals in the population, but a measure known as the effective population size. The effective population size is usually smaller than the total population, since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is smallest. It may not be the same for every gene in the same population.
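Two textbook results (quoted here for orientation, not derived in the text) make this dependence concrete: a neutral allele fixes with probability equal to its current frequency, so a new neutral mutation in a diploid population of effective size $N_e$ fixes with probability $1/(2N_e)$; and, conditional on fixation, the expected time to fixation is approximately $4N_e$ generations. Halving the effective population size therefore roughly halves the expected fixation time.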
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
Mutation bias
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.
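The opposing-pressures argument can be stated quantitatively with the classical mutation–selection balance (a textbook result, given here for the haploid case). If mutation introduces a deleterious allele at rate $\mu$ per generation and selection removes it with coefficient $s$, the allele settles at an equilibrium frequency of approximately

$$\hat{q} \approx \frac{\mu}{s}.$$

Because typical per-locus mutation rates (on the order of $10^{-8}$ to $10^{-5}$) are tiny compared with even modest selection coefficients, mutation pressure alone keeps such alleles rare; this is the intuition behind Haldane's and Fisher's dismissal of mutational tendencies.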
Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.
Genetic hitchhiking
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
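Linkage disequilibrium has a compact textbook definition, not spelled out above: for alleles A and B at two loci,

$$D = p_{AB} - p_A\,p_B,$$

where $p_{AB}$ is the frequency of chromosomes carrying both alleles. $D = 0$ means the alleles occur together no more often than chance predicts, and recombination erodes any association geometrically, $D_{t+1} = (1-r)\,D_t$ with recombination rate $r$ between the loci; this slow decay for tightly linked loci is what allows neutral alleles to hitchhike with a selected allele for many generations.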
Sexual selection
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.
Natural outcomes
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation
Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival: for example, the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function that were later co-opted for a different one. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as the embryonic bone structures that develop into the jaw in other vertebrates but instead form part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Coevolution
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Cooperation
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship, as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
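Hamilton's rule gives kin selection a standard quantitative form: an allele for helping can spread when rB > C, where r is the genetic relatedness between actor and recipient, B is the reproductive benefit to the recipient, and C is the cost to the actor. The following sketch is purely illustrative; the relatedness coefficients are textbook values for diploid organisms, not drawn from this article:

```python
def kin_selection_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: helping is favored when r * B > C."""
    return relatedness * benefit > cost

# Textbook relatedness coefficients for diploid organisms.
full_sibling, first_cousin = 0.5, 0.125

# Helping a sibling at a cost of 1 offspring pays off only if the
# sibling gains more than 2 extra offspring (0.5 * B > 1  =>  B > 2).
print(kin_selection_favored(full_sibling, benefit=3, cost=1))   # True
print(kin_selection_favored(first_cousin, benefit=3, cost=1))   # False
```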
Speciation
Speciation is the process where a species diverges into two or more descendant species.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite their diversity, these concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Applications
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent that evolution, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programs. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
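As an illustration of this approach, the sketch below implements a minimal genetic algorithm of the mutation-plus-selection kind described above. It is a toy demonstration rather than any particular published algorithm; the target string, population size and mutation rate are arbitrary choices:

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"   # arbitrary illustrative target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # Number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.02) -> str:
    # Each character mutates independently with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Random initial population, then repeated rounds of selection and mutation.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    # Truncation selection: the fittest string seeds the next generation.
    population = [mutate(best) for _ in range(200)]

print(generation, best)
```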
Evolutionary history of life
Origin of life
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
Common descent
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
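As a rough illustration of molecular-clock dating, the sketch below applies a strict clock with the Jukes–Cantor correction for multiple substitutions at the same site. The divergence and rate values are illustrative round numbers, not measured data:

```python
import math

def jukes_cantor_distance(p: float) -> float:
    """Correct an observed proportion of differing sites (p)
    for unobserved multiple substitutions at the same site."""
    return -0.75 * math.log(1 - 4 * p / 3)

def divergence_time(p: float, rate: float) -> float:
    """Strict molecular clock: years since the common ancestor.
    `rate` is substitutions per site per year per lineage; the factor
    of 2 accounts for change accumulating along both lineages."""
    return jukes_cantor_distance(p) / (2 * rate)

# Illustrative numbers only: ~1.2% observed sequence divergence and a
# rate of 1e-9 substitutions/site/year give an estimate of ~6 million years.
print(f"{divergence_time(0.012, 1e-9):,.0f} years")
```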
Evolution of life
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
History of evolutionary thought
Classical antiquity
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura.
Middle Ages
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
Pre-Darwinian
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
Darwinian revolution
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Pangenesis and heredity
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelian mutationists who allied with de Vries and the biometricians who defended Darwinian gradual evolution. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.
The 'modern synthesis'
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations, as well as the fossil transitions documented in palaeontology.
Further syntheses
Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
Social and cultural responses
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
See also
Chronospecies
References
Bibliography
Further reading
Introductory reading
Advanced reading
External links
General information
"History of Evolution in the United States". Salon. Retrieved 2021-08-24.
Experiments
Online lectures
Biology theories
Developmental biology
Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism.
Perspectives
The main processes involved in the embryonic development of animals are: tissue patterning (via regional specification and patterned cell differentiation); tissue growth; and tissue morphogenesis.
Regional specification refers to the processes that create the spatial patterns in a ball or sheet of initially similar cells. This generally involves the action of cytoplasmic determinants, located within parts of the fertilized egg, and of inductive signals emitted from signaling centers in the embryo. The early stages of regional specification do not generate functional differentiated cells, but cell populations committed to developing to a specific region or part of the organism. These are defined by the expression of specific combinations of transcription factors.
Cell differentiation relates specifically to the formation of functional cell types such as nerve, muscle, secretory epithelia, etc. Differentiated cells contain large amounts of specific proteins associated with cell function.
Morphogenesis relates to the formation of a three-dimensional shape. It mainly involves the orchestrated movements of cell sheets and of individual cells. Morphogenesis is important for creating the three germ layers of the early embryo (ectoderm, mesoderm, and endoderm) and for building up complex structures during organ development.
Tissue growth involves both an overall increase in tissue size, and also the differential growth of parts (allometry) which contributes to morphogenesis. Growth mostly occurs through cell proliferation but also through changes in cell size or the deposition of extracellular materials.
The development of plants involves similar processes to that of animals. However, plant cells are mostly immotile so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development.
Generative biology
Generative biology is the generative science that explores the dynamics guiding the development and evolution of biological form.
Developmental processes
Cell differentiation
Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation.
Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.
Regeneration
Regeneration indicates the ability to regrow a missing part. This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But most interest by developmental biologists has been shown in the regeneration of parts in free-living animals. In particular, four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: Hydra, which can regenerate any part of the polyp from a small fragment, and planarian worms, which can usually regenerate both heads and tails. Both of these examples have continuous cell turnover fed by stem cells and, in planaria at least, some of the stem cells have been shown to be pluripotent. The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, and the limbs of urodele amphibians. Considerable information is now available about amphibian limb regeneration, and it is known that each cell type regenerates itself, except for connective tissues, where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo.
There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected.
Embryonic development of animals
The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote. This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm. These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions.
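A minimal sketch of the arithmetic implied here, assuming perfectly synchronous and equal cleavage (which real embryos only approximate): the cell count doubles at each division while each cell's volume halves, so the total volume of the embryo stays constant.

```python
# Cleavage arithmetic: each synchronous division doubles the cell count
# and halves each cell's volume, so total embryo volume is unchanged.
total_volume = 1.0  # arbitrary units for the zygote
for division in range(1, 8):
    cells = 2 ** division
    cell_volume = total_volume / cells
    print(f"after division {division}: {cells:3d} cells, "
          f"each {cell_volume:.4f} units -> total {cells * cell_volume:.1f}")
```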
Mouse epiblast primordial germ cells undergo extensive epigenetic reprogramming. This process involves genome-wide DNA demethylation, chromatin reorganization and epigenetic imprint erasure leading to totipotency. DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway.
Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm, mesoderm and endoderm. These sheets are known as germ layers. This is the process of gastrulation. During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta, needed for support and nutrition of the embryo, and also establish differences of commitment along the anteroposterior axis (head, trunk and tail).
Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside.
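In the simplest quantitative picture, localized production combined with diffusion and first-order decay yields, at steady state, an exponential gradient C(x) = C0·e^(−x/λ) with decay length λ = √(D/k). The sketch below is a schematic of the threshold readout just described (the classic "French flag" picture); all parameter and threshold values are arbitrary illustrative choices:

```python
import math

def morphogen(x_um: float, c0: float = 100.0,
              diffusion: float = 1.0, decay: float = 1e-4) -> float:
    """Steady-state concentration at distance x (micrometres) from the
    source: C(x) = C0 * exp(-x / lambda), with lambda = sqrt(D / k)."""
    decay_length = math.sqrt(diffusion / decay)   # 100 um with these values
    return c0 * math.exp(-x_um / decay_length)

def zone(concentration: float) -> str:
    # Cells compare the local level against thresholds ("French flag" readout).
    if concentration > 50:
        return "zone A (near source)"
    if concentration > 10:
        return "zone B"
    return "zone C (far field)"

for x in (0, 50, 150, 300, 500):
    c = morphogen(x)
    print(f"x = {x:3d} um: C = {c:6.2f} -> {zone(c)}")
```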
Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. In addition, the first morphogenetic movements of embryogenesis, such as gastrulation, epiboly and twisting, directly activate pathways involved in endomesoderm specification through mechanotransduction processes. This property has been suggested to be evolutionarily inherited from endomesoderm specification mechanically stimulated by marine environmental hydrodynamic flow in the first animal organisms (the first metazoans). Twisting along the body axis by a left-handed chirality is found in all chordates (including vertebrates) and is addressed by the axial twist theory.
Growth in embryos is mostly autonomous. For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy.
The whole process needs to be coordinated in time and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events.
Metamorphosis
Developmental processes are very evident during the process of metamorphosis. This occurs in various types of animal. Well-known examples are seen in frogs, which usually hatch as tadpoles and metamorphose into adult frogs, and certain insects, which hatch as larvae and are then remodeled to the adult form during a pupal stage.
All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog Xenopus, and the biology of the imaginal discs, which generate the adult body parts of the fly Drosophila melanogaster.
Plant development
Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology.
Plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature.
The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium.
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Morphological variation
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Evolution of plant morphology
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and its evolution. During the colonisation of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants.
Most land plants share a common ancestor with multicellular algae. An example of the evolution of plant morphology is seen in charophytes. Studies have shown that charophytes have traits that are homologous to those of land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory is the antithetic theory, which states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte; the sporophyte then develops as an independent organism.
Developmental model organisms
Much of developmental biology research in recent decades has focused on the use of a small number of model organisms. It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers. In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans.
Plants
Thale cress (Arabidopsis thaliana)
Vertebrates
Frog: Xenopus (X. laevis and X. tropicalis). Good embryo supply. Especially suitable for microsurgery.
Zebrafish: Danio rerio. Good embryo supply. Well developed genetics.
Chicken: Gallus gallus. Early stages similar to mammal, but microsurgery easier. Low cost.
Mouse: Mus musculus. A mammal with well developed genetics.
Invertebrates
Fruit fly: Drosophila melanogaster. Good embryo supply. Well developed genetics.
Nematode: Caenorhabditis elegans. Good embryo supply. Well developed genetics. Low cost.
Unicellular
Algae: Chlamydomonas
Yeast: Saccharomyces
Others
Also popular for some purposes have been sea urchins and ascidians. For studies of regeneration urodele amphibians such as the axolotl Ambystoma mexicanum are used, and also planarian worms such as Schmidtea mediterranea. Organoids have also been demonstrated as an efficient model for development. Plant development has focused on the thale cress Arabidopsis thaliana as a model organism.
See also
References
Further reading
External links
Society for Developmental Biology
Collaborative resources
Developmental Biology - 10th edition
Essential Developmental Biology 3rd edition
Embryo Project Encyclopedia
Philosophy of biology
Bioenergetics
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis.
In a living organism, chemical bonds are broken and made as part of the exchange and transformation of energy. Energy is available for work (such as mechanical work) or for other processes (such as chemical synthesis and anabolic processes in growth), when weak bonds are broken and stronger bonds are made. The production of stronger bonds allows release of usable energy.
Adenosine triphosphate (ATP) is the main "energy currency" for organisms; the goal of metabolic and catabolic processes is to synthesize ATP from available starting materials (from the environment), and to break down ATP (into adenosine diphosphate (ADP) and inorganic phosphate) by utilizing it in biological processes. In a cell, the ratio of ATP to ADP concentrations is known as the "energy charge" of the cell. A cell can use this energy charge to relay information about cellular needs; if there is more ATP than ADP available, the cell can use ATP to do work, but if there is more ADP than ATP available, the cell must synthesize ATP via oxidative phosphorylation.
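The standard quantitative form of this idea is Atkinson's adenylate energy charge, which also counts AMP: EC = ([ATP] + ½[ADP]) / ([ATP] + [ADP] + [AMP]), ranging from 1.0 (all ATP) to 0.0 (all AMP). A minimal sketch with illustrative concentrations, not measured values:

```python
def adenylate_energy_charge(atp: float, adp: float, amp: float) -> float:
    """Atkinson's adenylate energy charge:
    EC = ([ATP] + 0.5*[ADP]) / ([ATP] + [ADP] + [AMP])."""
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Illustrative concentrations in mM; healthy cells typically maintain
# an energy charge of roughly 0.7-0.95.
print(f"{adenylate_energy_charge(atp=3.0, adp=0.8, amp=0.2):.2f}")  # 0.85
```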
Living organisms produce ATP from energy sources via oxidative phosphorylation. The terminal phosphate bonds of ATP are relatively weak compared with the stronger bonds formed when ATP is hydrolyzed (broken down by water) to adenosine diphosphate and inorganic phosphate. Here it is the thermodynamically favorable free energy of hydrolysis that results in energy release; the phosphoanhydride bond between the terminal phosphate group and the rest of the ATP molecule does not itself contain this energy. An organism's stockpile of ATP is used as a battery to store energy in cells. Utilization of chemical energy from such molecular bond rearrangement powers biological processes in every biological organism.
Living organisms obtain energy from organic and inorganic materials; i.e. ATP can be synthesized from a variety of biochemical precursors. For example, lithotrophs can oxidize minerals such as nitrates or forms of sulfur, such as elemental sulfur, sulfites, and hydrogen sulfide, to produce ATP. In photosynthesis, autotrophs produce ATP using light energy, whereas heterotrophs must consume organic compounds, mostly carbohydrates, fats, and proteins. The amount of energy actually obtained by the organism is lower than the amount present in the food; there are losses in digestion, metabolism, and thermogenesis.
Environmental materials that an organism intakes are generally combined with oxygen to release energy, although some nutrients can also be oxidized anaerobically by various organisms. The utilization of these materials is a form of slow combustion because the nutrients are reacted with oxygen (the materials are oxidized slowly enough that the organisms do not produce fire). The oxidation releases energy, which may evolve as heat or be used by the organism for other purposes, such as breaking chemical bonds.
Types of reactions
An exergonic reaction is a spontaneous chemical reaction that releases energy. It is thermodynamically favored, indicated by a negative value of ΔG (Gibbs free energy). Over the course of a reaction, an input of activation energy drives the reactants from a stable state, through a highly energetic transition state, to products in a more stable state that is lower in energy (see: reaction coordinate). The reactants are usually complex molecules that are broken into simpler products. The entire reaction is usually catabolic. The change in Gibbs free energy is negative (−ΔG) because energy is released as the reactants are converted to products.
An endergonic reaction is a chemical reaction that consumes energy; it is the opposite of an exergonic reaction. It has a positive ΔG because more energy is required to break the bonds of the reactants than is released on forming the bonds of the products; i.e. the products have weaker (higher-energy) bonds than the reactants. Endergonic reactions are therefore thermodynamically unfavorable, and they are usually anabolic.
The free energy (ΔG) gained or lost in a reaction can be calculated as follows: ΔG = ΔH − TΔS
where ∆G = Gibbs free energy, ∆H = enthalpy, T = temperature (in kelvins), and ∆S = entropy.
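As a minimal numeric illustration of this formula, the Python sketch below evaluates ΔG for invented values of ΔH, T, and ΔS and classifies the reaction; the numbers are placeholders, not measured data:

```python
def gibbs_free_energy(delta_h: float, temp_k: float, delta_s: float) -> float:
    """Computes dG = dH - T*dS, with dH in kJ/mol, T in kelvins, dS in kJ/(mol*K)."""
    return delta_h - temp_k * delta_s

# Hypothetical exothermic, entropy-increasing reaction at body temperature (310 K).
dG = gibbs_free_energy(delta_h=-20.0, temp_k=310.0, delta_s=0.05)
print(dG)  # -35.5 kJ/mol
if dG < 0:
    print("exergonic: spontaneous in this direction")
elif dG > 0:
    print("endergonic: non-spontaneous in this direction")
else:
    print("at equilibrium")
```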
Examples of major bioenergetic processes
Glycolysis is the process of breaking down glucose into pyruvate, producing two molecules of ATP (per molecule of glucose) in the process. When a cell has a higher concentration of ATP than ADP (i.e. a high energy charge), glycolysis is inhibited, since the cell has little need to release energy from available glucose to perform biological work. Pyruvate is one product of glycolysis, and can be shuttled into other metabolic pathways (gluconeogenesis, etc.) as needed by the cell. Additionally, glycolysis produces reducing equivalents in the form of NADH (nicotinamide adenine dinucleotide), which will ultimately be used to donate electrons to the electron transport chain.
Gluconeogenesis is the opposite of glycolysis; when the cell's energy charge is low (the concentration of ADP is higher than that of ATP), the cell must synthesize glucose from carbon-containing biomolecules such as proteins, amino acids, fats, pyruvate, etc. For example, proteins can be broken down into amino acids, and these simpler carbon skeletons are used to synthesize glucose.
The citric acid cycle is a process of cellular respiration in which acetyl coenzyme A, synthesized from pyruvate by the pyruvate dehydrogenase complex, is first reacted with oxaloacetate to yield citrate. The remaining eight reactions produce other carbon-containing metabolites. These metabolites are successively oxidized, and the free energy of oxidation is conserved in the form of the reduced coenzymes FADH2 and NADH. These reduced electron carriers can then be re-oxidized when they transfer electrons to the electron transport chain.
Ketosis is a metabolic process in which the body prioritizes ketone bodies, produced from fat, as its primary fuel source instead of glucose. This shift occurs when glucose availability is low, for example during prolonged fasting, strenuous exercise, or specialized diets such as ketogenic plans. This metabolic adaptation allows the body to conserve glucose for organs that depend on it, such as the brain, while fuelling other tissues from readily available fat stores.
Oxidative phosphorylation and the electron transport chain together form the process by which reducing equivalents such as NADH and FADH2 donate electrons to a series of redox reactions that take place in the electron transport chain complexes. These enzyme complexes are situated within the inner mitochondrial membrane. The redox reactions pass electrons "down" the electron transport chain, and this transfer is coupled to the pumping of protons, which generates the proton motive force. The resulting difference in proton concentration between the mitochondrial matrix and the intermembrane space is used to drive ATP synthesis via ATP synthase.
Photosynthesis, another major bioenergetic process, is the metabolic pathway used by plants in which solar energy is used to synthesize glucose from carbon dioxide and water. This reaction takes place in the chloroplast, where light energy drives photophosphorylation to produce the ATP that, together with reducing power, is used in glucose synthesis.
Additional information
During energy transformations in living systems, order and organization must be compensated by releasing energy, which increases the entropy of the surroundings.
Organisms are open systems that exchange materials and energy with the environment. They are never at equilibrium with their surroundings.
Energy is spent to create and maintain order in the cells, and surplus energy and other simpler by-products are released to create disorder, such that there is an increase in the entropy of the surroundings.
In a reversible process, entropy remains constant, whereas in an irreversible process (more common in real-world scenarios), entropy tends to increase.
During phase changes (from solid to liquid, or to gas), entropy increases because the number of possible arrangements of particles increases.
If ∆G<0, the chemical reaction is spontaneous and favourable in that direction.
If ∆G=0, the reactants and products of chemical reaction are at equilibrium.
If ∆G>0, the chemical reaction is non-spontaneous and unfavorable in that direction.
∆G is not an indicator of the velocity or rate of a chemical reaction, or of how quickly equilibrium is reached; that depends on factors such as the activation energy and the amount of enzyme present.
Reaction coupling
Reaction coupling is the linkage of chemical reactions in such a way that the product of one reaction becomes the substrate of another.
This allows organisms to utilize energy and resources efficiently. For example, in cellular respiration, energy released by the breakdown of glucose is coupled to the synthesis of ATP.
Cotransport
In August 1960, Robert K. Crane presented for the first time his discovery of sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first ever proposal of flux coupling in biology and has been described as the most important event concerning carbohydrate absorption in the 20th century.
Chemiosmotic theory
One of the major triumphs of bioenergetics is Peter D. Mitchell's chemiosmotic theory of how protons in aqueous solution function in the production of ATP in cell organelles such as mitochondria. This work earned Mitchell the 1978 Nobel Prize for Chemistry. Other cellular sources of ATP such as glycolysis were understood first, but such processes for direct coupling of enzyme activity to ATP production are not the major source of useful chemical energy in most cells. Chemiosmotic coupling is the major energy-producing process in most cells, being utilized in chloroplasts and several single-celled organisms in addition to mitochondria.
Binding Change Mechanism
The binding change mechanism, proposed by Paul Boyer and John E. Walker, who were awarded the Nobel Prize in Chemistry in 1997, suggests that ATP synthesis is linked to a conformational change in ATP synthase. This change is triggered by the rotation of the gamma subunit. ATP synthesis can be achieved through several mechanisms. The first mechanism postulates that the free energy of the proton gradient is utilized to alter the conformation of polypeptide molecules in the ATP synthesis active centers. The second mechanism suggests that the change in the conformational state is also produced by the transformation of mechanical energy into chemical energy using biological mechanoemission.
Energy balance
Energy homeostasis is the homeostatic control of energy balance – the difference between energy obtained through food consumption and energy expenditure – in living systems.
See also
Bioenergetic systems
Cellular respiration
Photosynthesis
ATP synthase
Active transport
Myosin
Exercise physiology
Table of standard Gibbs free energies
References
Further reading
Juretic, D., 2021. Bioenergetics: a bridge across life and universe. CRC Press.
External links
The Molecular & Cellular Bioenergetics Gordon Research Conference.
American Society of Exercise Physiologists
Theoretical ecology
Theoretical ecology is the scientific discipline devoted to the study of ecological systems using theoretical methods such as simple conceptual models, mathematical models, computational simulations, and advanced data analysis. Effective models improve understanding of the natural world by revealing how the dynamics of species populations are often based on fundamental biological conditions and processes. Further, the field aims to unify a diverse range of empirical observations by assuming that common, mechanistic processes generate observable phenomena across species and ecological environments. Based on biologically realistic assumptions, theoretical ecologists are able to uncover novel, non-intuitive insights about natural processes. Theoretical results are often verified by empirical and observational studies, revealing the power of theoretical methods in both predicting and understanding the noisy, diverse biological world.
The field is broad and includes foundations in applied mathematics, computer science, biology, statistical physics, genetics, chemistry, evolution, and conservation biology. Theoretical ecology aims to explain a diverse range of phenomena in the life sciences, such as population growth and dynamics, fisheries, competition, evolutionary theory, epidemiology, animal behavior and group dynamics, food webs, ecosystems, spatial ecology, and the effects of climate change.
Theoretical ecology has further benefited from the advent of fast computing power, allowing the analysis and visualization of large-scale computational simulations of ecological phenomena. Importantly, these modern tools provide quantitative predictions about the effects of human-induced environmental change on a diverse variety of ecological phenomena, such as species invasions, climate change, the effect of fishing and hunting on food network stability, and the global carbon cycle.
Modelling approaches
As in most other sciences, mathematical models form the foundation of modern ecological theory.
Phenomenological models: distill the functional and distributional shapes from observed patterns in the data; researchers choose functions and distributions that are flexible enough to match the patterns they or others (field or experimental ecologists) have found in the field or through experimentation.
Mechanistic models: model the underlying processes directly, with functions and distributions that are based on theoretical reasoning about ecological processes of interest.
Ecological models can be deterministic or stochastic.
Deterministic models always evolve in the same way from a given starting point. They represent the average, expected behavior of a system, but lack random variation. Many system dynamics models are deterministic.
Stochastic models allow for the direct modeling of the random perturbations that underlie real world ecological systems. Markov chain models are stochastic.
Species can be modelled in continuous or discrete time.
Continuous time is modelled using differential equations.
Discrete time is modelled using difference equations. These model ecological processes that can be described as occurring over discrete time steps. Matrix algebra is often used to investigate the evolution of age-structured or stage-structured populations. The Leslie matrix, for example, mathematically represents the discrete time change of an age structured population.
Models are often used to describe real ecological reproduction processes of single or multiple species.
These can be modelled using stochastic branching processes. Examples are the dynamics of interacting populations (predation, competition, and mutualism), which, depending on the species of interest, may best be modeled over either continuous or discrete time. Other examples of such models may be found in the field of mathematical epidemiology, where the dynamic relationships that are to be modeled are host–pathogen interactions.
Bifurcation theory is used to illustrate how small changes in parameter values can give rise to dramatically different long run outcomes, a mathematical fact that may be used to explain drastic ecological differences that come about in qualitatively very similar systems. Logistic maps are polynomial mappings, and are often cited as providing archetypal examples of how chaotic behaviour can arise from very simple non-linear dynamical equations. The maps were popularized in a seminal 1976 paper by the theoretical ecologist Robert May. The difference equation is intended to capture the two effects of reproduction and starvation.
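A minimal Python sketch of the logistic map, x_{n+1} = r x_n (1 − x_n), the difference equation popularized by May; the parameter values below are illustrative, chosen to contrast a stable fixed point with chaotic wandering:

```python
def logistic_map(r: float, x0: float, n_steps: int) -> list[float]:
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# r = 2.5 settles toward a fixed point; r = 3.9 wanders chaotically.
for r in (2.5, 3.9):
    print(r, [round(x, 3) for x in logistic_map(r, x0=0.2, n_steps=6)])
```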
In 1930, R.A. Fisher published his classic The Genetical Theory of Natural Selection, which introduced the idea that frequency-dependent fitness brings a strategic aspect to evolution, where the payoffs to a particular organism, arising from the interplay of all of the relevant organisms, are the number of this organism's viable offspring. In 1961, Richard Lewontin applied game theory to evolutionary biology in his Evolution and the Theory of Games, followed closely by John Maynard Smith, who in his seminal 1972 paper, "Game Theory and the Evolution of Fighting", defined the concept of the evolutionarily stable strategy.
Because ecological systems are typically nonlinear, they often cannot be solved analytically and in order to obtain sensible results, nonlinear, stochastic and computational techniques must be used. One class of computational models that is becoming increasingly popular are the agent-based models. These models can simulate the actions and interactions of multiple, heterogeneous, organisms where more traditional, analytical techniques are inadequate. Applied theoretical ecology yields results which are used in the real world. For example, optimal harvesting theory draws on optimization techniques developed in economics, computer science and operations research, and is widely used in fisheries.
Population ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment. It is the study of how the population sizes of species living together in groups change over time and space, and was one of the first aspects of ecology to be studied and modelled mathematically.
Exponential growth
The most basic way of modeling population dynamics is to assume that the rate of growth of a population depends only upon the population size at that time and the per capita growth rate of the organism. In other words, if the number of individuals in a population at a time t is N(t), then the rate of population growth is given by:

dN(t)/dt = r N(t)

where r is the per capita growth rate, or the intrinsic growth rate of the organism. It can also be described as r = b − d, where b and d are the per capita time-invariant birth and death rates, respectively. This first order linear differential equation can be solved to yield the solution

N(t) = N(0) e^{rt},

a trajectory known as Malthusian growth, after Thomas Malthus, who first described its dynamics in 1798. A population experiencing Malthusian growth follows an exponential curve, where N(0) is the initial population size. The population grows when r > 0, and declines when r < 0. The model is most applicable in cases where a few organisms have begun a colony and are rapidly growing without any limitations or restrictions impeding their growth (e.g. bacteria inoculated in rich media).
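A short Python sketch of the closed-form solution above; the initial size and growth rate are invented, chosen so the population roughly doubles each time step:

```python
import math

def malthusian(n0: float, r: float, t: float) -> float:
    """Closed-form Malthusian growth: N(t) = N(0) * exp(r * t)."""
    return n0 * math.exp(r * t)

# 100 founders with r = 0.7 per hour: the colony roughly doubles every hour.
for t in range(5):
    print(t, round(malthusian(n0=100, r=0.7, t=t)))
```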
Logistic growth
The exponential growth model makes a number of assumptions, many of which often do not hold. For example, many factors affect the intrinsic growth rate, which is often not time-invariant. A simple modification of the exponential growth model is to assume that the intrinsic growth rate varies with population size. This is reasonable: the larger the population size, the fewer resources available, which can result in a lower birth rate and higher death rate. Hence, we can replace the time-invariant r with r’(t) = (b − a N(t)) − (d + c N(t)), where a and c are constants that modulate birth and death rates in a population-dependent manner (e.g. intraspecific competition). Both a and c will depend on other environmental factors which, for now, we can assume to be constant in this approximated model. The differential equation is now:

dN(t)/dt = ((b − a N(t)) − (d + c N(t))) N(t)

This can be rewritten as:

dN(t)/dt = r N(t) (1 − N(t)/K)

where r = b − d and K = (b − d)/(a + c).
The biological significance of K becomes apparent when stabilities of the equilibria of the system are considered. The constant K is the carrying capacity of the population. The equilibria of the system are N = 0 and N = K. If the system is linearized, it can be seen that N = 0 is an unstable equilibrium while K is a stable equilibrium.
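A minimal numerical sketch of the logistic equation, using naive forward-Euler integration (a deliberate simplification; a production model would use a proper ODE solver). All parameter values are illustrative:

```python
def logistic_trajectory(n0: float, r: float, k: float, dt: float, steps: int) -> list[float]:
    """Forward-Euler integration of dN/dt = r * N * (1 - N / K)."""
    n, out = n0, [n0]
    for _ in range(steps):
        n += r * n * (1.0 - n / k) * dt
        out.append(n)
    return out

# Starting far below K = 1000, the population rises and then saturates at K,
# illustrating that N = K is the stable equilibrium.
traj = logistic_trajectory(n0=10, r=0.5, k=1000, dt=0.1, steps=400)
print(round(traj[0]), round(traj[200]), round(traj[-1]))
```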
Structured population growth
Another assumption of the exponential growth model is that all individuals within a population are identical and have the same probabilities of surviving and of reproducing. This is not a valid assumption for species with complex life histories. The exponential growth model can be modified to account for this, by tracking the number of individuals in different age classes (e.g. one-, two-, and three-year-olds) or different stage classes (juveniles, sub-adults, and adults) separately, and allowing individuals in each group to have their own survival and reproduction rates.
The general form of this model is

N_{t+1} = L N_t

where Nt is a vector of the number of individuals in each class at time t and L is a matrix that contains the survival probability and fecundity for each class. The matrix L is referred to as the Leslie matrix for age-structured models, and as the Lefkovitch matrix for stage-structured models.
If parameter values in L are estimated from demographic data on a specific population, a structured model can then be used to predict whether this population is expected to grow or decline in the long-term, and what the expected age distribution within the population will be. This has been done for a number of species including loggerhead sea turtles and right whales.
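A small Python sketch of such a projection; the Leslie matrix below is hypothetical, with invented fecundities and survival probabilities rather than values estimated from demographic data:

```python
import numpy as np

# Hypothetical Leslie matrix for three age classes: the first row holds
# per-class fecundities, the sub-diagonal holds survival probabilities.
L = np.array([
    [0.0, 1.5, 1.0],   # newborns produced per 1-, 2-, and 3-year-old
    [0.5, 0.0, 0.0],   # half of newborns survive to age 1
    [0.0, 0.8, 0.0],   # 80% of 1-year-olds survive to age 2
])

n = np.array([100.0, 50.0, 20.0])  # individuals per age class at t = 0
for t in range(1, 6):
    n = L @ n                      # N_{t+1} = L N_t
    print(t, n.round(1))

# The dominant eigenvalue of L gives the long-run growth rate:
# above 1 the population grows, below 1 it declines.
print("lambda =", round(max(abs(np.linalg.eigvals(L))), 3))
```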
Community ecology
An ecological community is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources. Interactions between these species form the first steps in analyzing more complex dynamics of ecosystems. These interactions shape the distribution and dynamics of species. Of these interactions, predation is one of the most widespread population activities.
Taken in its most general sense, predation comprises predator–prey, host–pathogen, and host–parasitoid interactions.
Predator–prey interaction
Predator–prey interactions exhibit natural oscillations in the populations of both predator and prey. In 1925, the American mathematician Alfred J. Lotka developed simple equations for predator–prey interactions in his book on biomathematics. The following year, the Italian mathematician Vito Volterra made a statistical analysis of fish catches in the Adriatic and independently developed the same equations. It is one of the earliest and most recognised ecological models, known as the Lotka-Volterra model:

dN/dt = r N − α N P

dP/dt = c α N P − d P

where N is the prey and P is the predator population size, r is the rate of prey growth, taken to be exponential in the absence of any predators, α is the prey mortality rate for per-capita predation (also called the 'attack rate'), c is the efficiency of conversion from prey to predator, and d is the exponential death rate for predators in the absence of any prey.
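A minimal Python sketch of these equations, integrated with forward Euler; the parameter values are invented, and the printed samples show the characteristic boom-and-bust cycling of prey and predator:

```python
def lotka_volterra(n0, p0, r, alpha, c, d, dt=0.001, steps=30000):
    """Forward-Euler integration of dN/dt = rN - aNP and dP/dt = caNP - dP."""
    n, p = n0, p0
    samples = []
    for i in range(steps):
        dn = (r * n - alpha * n * p) * dt
        dp = (c * alpha * n * p - d * p) * dt
        n, p = n + dn, p + dp
        if i % 5000 == 0:
            samples.append((round(n, 1), round(p, 1)))
    return samples

# Prey boom, predators follow, prey crash, predators crash: cycles emerge.
print(lotka_volterra(n0=10, p0=5, r=1.0, alpha=0.1, c=0.5, d=0.5))
```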
Volterra originally used the model to explain fluctuations in fish and shark populations after fishing was curtailed during the First World War. However, the equations have subsequently been applied more generally. Other examples of these models include the Lotka-Volterra model of the snowshoe hare and Canadian lynx in North America, infectious disease modeling such as the outbreak of SARS, and the biological control of California red scale by the introduction of its parasitoid, Aphytis melinus.
A credible, simple alternative to the Lotka-Volterra predator–prey model and their common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka–Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi–Ginzburg model as the first approximation.
Host–pathogen interaction
The second interaction, that of host and pathogen, differs from predator–prey interactions in that pathogens are much smaller, have much faster generation times, and require a host to reproduce. Therefore, only the host population is tracked in host–pathogen models. Compartmental models that categorize host population into groups such as susceptible, infected, and recovered (SIR) are commonly used.
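A minimal sketch of such a compartmental model in Python: forward-Euler integration of the standard SIR equations dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI, on population fractions, with invented rates (β = 0.3/day and γ = 0.1/day give a basic reproduction number of 3):

```python
def sir_peak(s0, i0, beta, gamma, dt=0.1, days=160):
    """Forward-Euler SIR model on population fractions; returns the epidemic peak."""
    s, i = s0, i0
    peak_i, peak_day = i0, 0.0
    for step in range(int(days / dt)):
        ds = -beta * s * i * dt
        di = (beta * s * i - gamma * i) * dt
        s, i = s + ds, i + di
        if i > peak_i:
            peak_i, peak_day = i, step * dt
    return peak_i, peak_day

peak_i, peak_day = sir_peak(s0=0.99, i0=0.01, beta=0.3, gamma=0.1)
print(f"epidemic peaks near day {peak_day:.0f} with {peak_i:.0%} infected")
```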
Host–parasitoid interaction
The third interaction, that of host and parasitoid, can be analyzed by the Nicholson–Bailey model, which differs from Lotka-Volterra and SIR models in that it is discrete in time. This model, like that of Lotka-Volterra, tracks both populations explicitly. Typically, in its general form, it states:

N_{t+1} = λ N_t f(N_t, P_t)

P_{t+1} = c N_t (1 − f(N_t, P_t))

where f(Nt, Pt) describes the probability of a host escaping parasitism (typically given by the zero term of a Poisson distribution), λ is the per-capita growth rate of hosts in the absence of parasitoids, and c is the conversion efficiency, as in the Lotka-Volterra model.
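A minimal Python sketch of this model, taking f to be the Poisson zero term exp(−a Pt), the conventional choice; the parameter values are invented:

```python
import math

def nicholson_bailey(n0, p0, lam, a, c, steps=8):
    """N_{t+1} = lam * N_t * f and P_{t+1} = c * N_t * (1 - f),
    with f = exp(-a * P_t), the probability of escaping parasitism."""
    n, p = n0, p0
    trajectory = [(round(n, 1), round(p, 1))]
    for _ in range(steps):
        f = math.exp(-a * p)
        n, p = lam * n * f, c * n * (1.0 - f)  # both updates use the old N_t, P_t
        trajectory.append((round(n, 1), round(p, 1)))
    return trajectory

# The classic model oscillates with growing amplitude: it is famously unstable.
print(nicholson_bailey(n0=25, p0=10, lam=2.0, a=0.05, c=1.0))
```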
Competition and mutualism
In studies of the populations of two species, the Lotka-Volterra system of equations has been extensively used to describe dynamics of behavior between two species, N1 and N2. Examples include relations between D. discoideum and E. coli, as well as theoretical analysis of the behavior of the system:

dN1/dt = (r1 N1 / K1) (K1 − N1 + α12 N2)

dN2/dt = (r2 N2 / K2) (K2 − N2 + α21 N1)

The r coefficients give a “base” growth rate to each species, while K coefficients correspond to the carrying capacity. What can really change the dynamics of a system, however, are the α terms. These describe the nature of the relationship between the two species. When α12 is negative, it means that N2 has a negative effect on N1, by competing with it, preying on it, or any number of other possibilities. When α12 is positive, however, it means that N2 has a positive effect on N1, through some kind of mutualistic interaction between the two.
When both α12 and α21 are negative, the relationship is described as competitive. In this case, each species detracts from the other, potentially over competition for scarce resources.
When both α12 and α21 are positive, the relationship becomes one of mutualism. In this case, each species provides a benefit to the other, such that the presence of one aids the population growth of the other.
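A small Python sketch of this two-species system, again via forward Euler; the parameters are invented, and the two runs contrast competition (negative α terms) with mutualism (positive α terms):

```python
def two_species(n1, n2, r1, r2, k1, k2, a12, a21, dt=0.01, steps=10000):
    """Forward-Euler integration of the two-species Lotka-Volterra equations:
    dN1/dt = r1 N1 (K1 - N1 + a12 N2) / K1, and symmetrically for N2."""
    for _ in range(steps):
        d1 = r1 * n1 * (k1 - n1 + a12 * n2) / k1 * dt
        d2 = r2 * n2 * (k2 - n2 + a21 * n1) / k2 * dt
        n1, n2 = n1 + d1, n2 + d2
    return round(n1, 1), round(n2, 1)

# Competition: both persist, but below their carrying capacities (K1=500, K2=400).
print(two_species(10, 10, r1=1.0, r2=0.8, k1=500, k2=400, a12=-0.5, a21=-0.5))
# Mutualism: each equilibrates above the capacity it would reach alone.
print(two_species(10, 10, r1=1.0, r2=0.8, k1=500, k2=400, a12=0.4, a21=0.4))
```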
See Competitive Lotka–Volterra equations for further extensions of this model.
Neutral theory
Unified neutral theory is a hypothesis proposed by Stephen P. Hubbell in 2001. The hypothesis aims to explain the diversity and relative abundance of species in ecological communities, although like other neutral theories in ecology, Hubbell's hypothesis assumes that the differences between members of an ecological community of trophically similar species are "neutral," or irrelevant to their success. Neutrality means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis. This implies that biodiversity arises at random, as each species follows a random walk. This can be considered a null hypothesis to niche theory. The hypothesis has sparked controversy, and some authors consider it a more complex version of other null models that fit the data better.
Under unified neutral theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), provided that all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interactions such as competing for limited food or light, are allowed, so long as all individuals behave the same way. The theory makes predictions that have implications for the management of biodiversity, especially the management of rare species. It predicts the existence of a fundamental biodiversity constant, conventionally written θ, that appears to govern species richness on a wide variety of spatial and temporal scales.
Hubbell built on earlier neutral concepts, including MacArthur & Wilson's theory of island biogeography and Gould's concepts of symmetry and null models.
Spatial ecology
Biogeography
Biogeography is the study of the distribution of species in space and time. It aims to reveal where organisms live, at what abundance, and why they are (or are not) found in a certain geographical area.
Biogeography is most keenly observed on islands, which has led to the development of the subdiscipline of island biogeography. These habitats are often more manageable areas of study because they are more condensed than larger ecosystems on the mainland. In 1967, Robert MacArthur and E.O. Wilson published The Theory of Island Biogeography. This showed that the species richness in an area could be predicted in terms of factors such as habitat area, immigration rate and extinction rate. The theory is considered one of the fundamentals of ecological theory. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
r/K-selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
Niche theory
Metapopulations
Spatial analysis of ecological systems often reveals that assumptions that are valid for spatially homogeneous populations – and indeed, intuitive – may no longer be valid when migratory subpopulations moving from one patch to another are considered. In a simple one-species formulation, a subpopulation may occupy a patch, move from one patch to another empty patch, or die out leaving an empty patch behind. In such a case, the proportion of occupied patches p may be represented as

dp/dt = m p (1 − p) − e p

where m is the rate of colonization, and e is the rate of extinction. In this model, if e < m, the steady state value of p is 1 − (e/m) while in the other case, all the patches will eventually be left empty. This model may be made more complex by addition of another species in several different ways, including but not limited to game theoretic approaches, predator–prey interactions, etc. We will consider here an extension of the previous one-species system for simplicity. Let us denote the proportion of patches occupied by the first population as p1, and that by the second as p2. Then, taking the first species to be the superior competitor, able to displace the second from occupied patches,

dp1/dt = m1 p1 (1 − p1) − e p1

dp2/dt = m2 p2 (1 − p1 − p2) − e p2 − m1 p1 p2

In this case, if e is too high, p1 and p2 will be zero at steady state. However, when the rate of extinction is moderate, p1 and p2 can stably coexist. The steady state value of p2 is given by

p*2 = e/m1 − m1/m2

(p*1 = 1 − e/m1 follows from the first equation, which does not depend on p2).
If e is zero, the dynamics of the system favor the species that is better at colonizing (i.e. has the higher m value). This leads to a very important result in theoretical ecology known as the Intermediate Disturbance Hypothesis, where the biodiversity (the number of species that coexist in the population) is maximized when the disturbance (of which e is a proxy here) is not too high or too low, but at intermediate levels.
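A minimal Python sketch of the single-species Levins model above, confirming numerically that occupancy approaches 1 − e/m when e < m and collapses to zero otherwise; the rates are illustrative:

```python
def levins(p0, m, e, dt=0.01, steps=20000):
    """Forward-Euler integration of dp/dt = m p (1 - p) - e p."""
    p = p0
    for _ in range(steps):
        p += (m * p * (1.0 - p) - e * p) * dt
    return round(p, 3)

print(levins(p0=0.1, m=0.5, e=0.2))  # ~0.6, i.e. 1 - e/m
print(levins(p0=0.1, m=0.5, e=0.6))  # ~0.0: extinction outpaces colonization
```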
The form of the differential equations used in this simplistic modelling approach can be modified. For example:
Colonization may be dependent on p linearly (m*(1-p)) as opposed to the non-linear m*p*(1-p) regime described above. This mode of replication of a species is called the “rain of propagules”, where there is an abundance of new individuals entering the population at every generation. In such a scenario, the steady state where the population is zero is usually unstable.
Extinction may depend non-linearly on p (e*p*(1-p)) as opposed to the linear (e*p) regime described above. This is referred to as the “rescue effect” and it is again harder to drive a population extinct under this regime.
The model can also be extended to combinations of the four possible linear or non-linear dependencies of colonization and extinction on p; these are described in more detail in the literature.
Ecosystem ecology
Introducing new elements, whether biotic or abiotic, into ecosystems can be disruptive. In some cases, it leads to ecological collapse, trophic cascades and the death of many species within the ecosystem. The abstract notion of ecological health attempts to measure the robustness and recovery capacity for an ecosystem; i.e. how far the ecosystem is away from its steady state. Often, however, ecosystems rebound from a disruptive agent. The difference between collapse or rebound depends on the toxicity of the introduced element and the resiliency of the original ecosystem.
If ecosystems are governed primarily by stochastic processes, through which their subsequent state would be determined by both predictable and random actions, they may be more resilient to sudden change than each species individually. In the absence of a balance of nature, the species composition of ecosystems would undergo shifts that would depend on the nature of the change, but entire ecological collapse would probably be an infrequent event. In 1997, Robert Ulanowicz used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow), and eutrophication.
Ecopath is a free ecosystem modelling software suite, initially developed by NOAA, and widely used in fisheries management as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems.
Food webs
Food webs provide a framework within which a complex network of predator–prey interactions can be organised. A food web model is a network of food chains. Each food chain starts with a primary producer or autotroph, an organism, such as a plant, which is able to manufacture its own food. Next in the chain is an organism that feeds on the primary producer, and the chain continues in this way as a string of successive predators. The organisms in each chain are grouped into trophic levels, based on how many links they are removed from the primary producers. The length of the chain, or trophic level, is a measure of the number of species encountered as energy or nutrients move from plants to top predators. Food energy flows from one organism to the next and to the next and so on, with some energy being lost at each level. At a given trophic level there may be one species or a group of species with the same predators and prey.
In 1927, Charles Elton published an influential synthesis on the use of food webs, which resulted in them becoming a central concept in ecology. In 1966, interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores, suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Sir Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs. According to their analyses, complex food webs should be less stable than simple food webs. The apparent paradox between the complexity of food webs observed in nature and the mathematical fragility of food web models is currently an area of intensive study and debate. The paradox may be due partially to conceptual differences between persistence of a food web and equilibrial stability of a food web.
Systems ecology
Systems ecology can be seen as an application of general systems theory to ecology. It takes a holistic and interdisciplinary approach to the study of ecological systems, and particularly ecosystems. Systems ecology is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. Like other fields in theoretical ecology, it uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. It also takes account of the energy flows through the different trophic levels in the ecological networks. Systems ecology also considers the external influence of ecological economics, which usually is not otherwise considered in ecosystem ecology. For the most part, systems ecology is a subfield of ecosystem ecology.
Ecophysiology
This is the study of how "the environment, both physical and biological, interacts with the physiology of an organism. It includes the effects of climate and nutrients on physiological processes in both plants and animals, and has a particular focus on how physiological processes scale with organism size".
Behavioral ecology
Swarm behaviour
Swarm behaviour is a collective behaviour exhibited by animals of similar size which aggregate together, perhaps milling about the same spot or perhaps migrating in some direction. Swarm behaviour is commonly exhibited by insects, but it also occurs in the flocking of birds, the schooling of fish and the herd behaviour of quadrupeds. It is a complex emergent behaviour that occurs when individual agents follow simple behavioral rules.
Recently, a number of mathematical models have been discovered which explain many aspects of the emergent behaviour. Swarm algorithms follow a Lagrangian approach or an Eulerian approach. The Eulerian approach views the swarm as a field, working with the density of the swarm and deriving mean field properties. It is a hydrodynamic approach, and can be useful for modelling the overall dynamics of large swarms (Toner J and Tu Y (1995) "Long-range order in a two-dimensional xy model: how birds fly together", Physical Review Letters, 75(23), 4326–4329). However, most models work with the Lagrangian approach, which is an agent-based model following the individual agents (points or particles) that make up the swarm. Individual particle models can follow information on heading and spacing that is lost in the Eulerian approach. Examples include ant colony optimization, self-propelled particles and particle swarm optimization.
At the cellular level, individual organisms also demonstrate swarm behaviour. In decentralized systems, individuals act based on their own decisions without overarching guidance. Studies have shown that individual Trichoplax adhaerens behave like self-propelled particles (SPPs) and collectively display a phase transition from ordered to disordered movement. Previously, it was thought that the surface-to-volume ratio was what limited animal size in the evolutionary game. Considering the collective behaviour of the individuals, it was suggested that order is another limiting factor. Central nervous systems were indicated to be vital for large multicellular animals in the evolutionary pathway.
Synchronization
The Photinus carolinus firefly synchronizes its flashing frequency in a collective setting. Individually, there is no apparent pattern to the flashing; in a group setting, periodicity emerges in the flashing pattern. The coexistence of synchronization and asynchronization in the flashings of systems composed of multiple fireflies can be characterized by chimera states, and synchronization can occur spontaneously. Agent-based models have been useful in describing this unique phenomenon: the flashings of individual fireflies can be viewed as oscillators, with global coupling models similar to the ones used in condensed matter physics.
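One way to sketch this in code is with globally coupled Kuramoto phase oscillators, a standard condensed-matter-style model of synchronization; treating each firefly as an oscillator is the analogy suggested above, while the natural frequencies and coupling strength here are invented. The order parameter r grows from near 0 toward 1 as the flashes lock together:

```python
import math
import random

def kuramoto_order(n=50, coupling=2.0, dt=0.01, steps=2000, seed=1):
    """Globally coupled Kuramoto oscillators; returns the final order parameter r,
    where r = |mean over oscillators of exp(i * theta)| ranges from 0 to 1."""
    rng = random.Random(seed)
    thetas = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]  # natural flash rates
    for _ in range(steps):
        mx = sum(math.cos(t) for t in thetas) / n
        my = sum(math.sin(t) for t in thetas) / n
        r, psi = math.hypot(mx, my), math.atan2(my, mx)
        # Each phase is nudged toward the mean phase psi of the swarm.
        thetas = [t + (w + coupling * r * math.sin(psi - t)) * dt
                  for t, w in zip(thetas, freqs)]
    mx = sum(math.cos(t) for t in thetas) / n
    my = sum(math.sin(t) for t in thetas) / n
    return math.hypot(mx, my)

print(round(kuramoto_order(), 2))  # close to 1: the swarm flashes in near-unison
```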
Evolutionary ecology
The British biologist Alfred Russel Wallace is best known for independently proposing a theory of evolution due to natural selection that prompted Charles Darwin to publish his own theory. In his famous 1858 paper, Wallace proposed natural selection as a kind of feedback mechanism which keeps species and varieties adapted to their environment.
The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.
The cybernetician and anthropologist Gregory Bateson observed in the 1970s that, though writing it only as an example, Wallace had "probably said the most powerful thing that’d been said in the 19th Century". Subsequently, the connection between natural selection and systems theory has become an area of active research.
Other theories
In contrast to previous ecological theories which considered floods to be catastrophic events, the river flood pulse concept argues that the annual flood pulse is the most important aspect and the most biologically productive feature of a river's ecosystem (Benke, A. C., Chaubey, I., Ward, G. M., & Dunn, E. L. (2000). Flood Pulse Dynamics of an Unregulated River Floodplain in the Southeastern U.S. Coastal Plain. Ecology, 2730–2741).
History
Theoretical ecology draws on pioneering work done by G. Evelyn Hutchinson and his students. Brothers H.T. Odum and E.P. Odum are generally recognised as the founders of modern theoretical ecology. Robert MacArthur brought theory to community ecology. Daniel Simberloff was the student of E.O. Wilson, with whom MacArthur collaborated on The Theory of Island Biogeography, a seminal work in the development of theoretical ecology.
Simberloff added statistical rigour to experimental ecology and was a key figure in the SLOSS debate, about whether it is preferable to protect a single large or several small reserves. This resulted in the supporters of Jared Diamond's community assembly rules defending their ideas through Neutral Model Analysis. Simberloff also played a key role in the (still ongoing) debate on the utility of corridors for connecting isolated reserves.
Stephen P. Hubbell and Michael Rosenzweig combined theoretical and practical elements into works that extended MacArthur and Wilson's Island Biogeography Theory - Hubbell with his Unified Neutral Theory of Biodiversity and Biogeography and Rosenzweig with his Species Diversity in Space and Time.
Theoretical and mathematical ecologists
A tentative distinction can be made between mathematical ecologists, ecologists who apply mathematics to ecological problems, and mathematicians who develop the mathematics itself that arises out of ecological problems.
Some notable theoretical ecologists can be found in these categories:
Category:Mathematical ecologists
Category:Theoretical biologists
Journals
The American Naturalist
Journal of Mathematical Biology
Journal of Theoretical Biology
Theoretical Ecology
Theoretical Population Biology
Ecological Modelling

See also
Butterfly effect
Complex system biology
Ecological systems theory
Ecosystem model
Integrodifference equation – widely used to model the dispersal and growth of populations
Limiting similarity
Mathematical biology
Population dynamics
Population modeling
Quantitative ecology
Taylor's law
Theoretical biology
References
Further reading
The classic text is Theoretical Ecology: Principles and Applications, by Angela McLean and Robert May. The 2007 edition is published by the Oxford University Press.
Bolker BM (2008) Ecological Models and Data in R. Princeton University Press.
Case TJ (2000) An Illustrated Guide to Theoretical Ecology. Oxford University Press.
Caswell H (2000) Matrix Population Models: Construction, Analysis, and Interpretation. Sinauer, 2nd Ed.
Edelstein-Keshet L (2005) Mathematical Models in Biology. Society for Industrial and Applied Mathematics.
Gotelli NJ (2008) A Primer of Ecology. Sinauer Associates, 4th Ed.
Gotelli NJ & A Ellison (2005) A Primer of Ecological Statistics. Sinauer Associates Publishers.
Hastings A (1996) Population Biology: Concepts and Models. Springer.
Hilborn R & M Clark (1997) The Ecological Detective: Confronting Models with Data. Princeton University Press.
Kokko H (2007) Modelling for Field Biologists and Other Interesting People. Cambridge University Press.
Kot M (2001) Elements of Mathematical Ecology. Cambridge University Press.
Murray JD (2002) Mathematical Biology, Volume 1. Springer, 3rd Ed.
Murray JD (2003) Mathematical Biology, Volume 2. Springer, 3rd Ed.
Pastor J (2008) Mathematical Ecology of Populations and Ecosystems. Wiley-Blackwell.
Roughgarden J (1998) Primer of Ecological Theory. Prentice Hall.
Ulanowicz R (1997) Ecology: The Ascendant Perspective. Columbia University Press.
Biological process
Biological processes are those processes that are necessary for an organism to live and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Interaction between organisms: the processes by which an organism has an observable effect on another organism of the same or different species.
Also: cellular differentiation, fermentation, fertilisation, germination, tropism, hybridisation, metamorphosis, morphogenesis, photosynthesis, transpiration.
See also
Chemical process
Life
Organic reaction
References
Introduction to evolution
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This process is responsible for the many diverse life forms in the world.
The modern understanding of evolution began with the 1859 publication of Charles Darwin's On the Origin of Species. In addition, Gregor Mendel's work with plants helped to explain the hereditary patterns of genetics. Fossil discoveries in palaeontology, advances in population genetics and a global network of scientific research have provided further details into the mechanisms of evolution. Scientists now have a good understanding of the origin of new species (speciation) and have observed the speciation process in the laboratory and in the wild. Evolution is the principal scientific theory that biologists use to understand life and is used in many disciplines, including medicine, psychology, conservation biology, anthropology, forensics, agriculture and other social-cultural applications.
Simple overview
The main ideas of evolution may be summarised as follows:
Life forms reproduce and therefore have a tendency to become more numerous.
Factors such as predation and competition work against the survival of individuals.
Each offspring differs from their parent(s) in minor, random ways.
If these differences are beneficial, the offspring is more likely to survive and reproduce.
This makes it likely that more offspring in the next generation will have beneficial differences and fewer will have detrimental differences.
These differences accumulate over generations, resulting in changes within the population.
Over time, populations can split or branch off into new species.
These processes, collectively known as evolution, are responsible for the many diverse life forms seen in the world.
Natural selection
In the 19th century, natural history collections and museums were popular. The European expansion and naval expeditions employed naturalists, while curators of grand museums showcased preserved and live specimens of the varieties of life. Charles Darwin was an English naturalist, educated and trained in the disciplines of natural history. Such natural historians would collect, catalogue, describe and study the vast collections of specimens stored and managed by curators at these museums. Darwin served as a ship's naturalist on board HMS Beagle, assigned to a five-year research expedition around the world. During his voyage, he observed and collected an abundance of organisms, being very interested in the diverse forms of life along the coasts of South America and the neighbouring Galápagos Islands.
Darwin gained extensive experience as he collected and studied the natural history of life forms from distant places. Through his studies, he formulated the idea that each species had developed from ancestors with similar features. In 1838, he described how a process he called natural selection would make this happen.
The size of a population depends on the amount and kind of resources available to support it. For the population to remain the same size year after year, there must be an equilibrium, or balance, between the population size and available resources. Since organisms produce more offspring than their environment can support, not all individuals can survive out of each generation. There must be a competitive struggle for resources that aid in survival. As a result, Darwin realised that it was not chance alone that determined survival. Instead, survival of an organism depends on the differences of each individual organism, or "traits", that aid or hinder survival and reproduction. Well-adapted individuals are likely to leave more offspring than their less well-adapted competitors. Traits that hinder survival and reproduction would disappear over generations. Traits that help an organism survive and reproduce would accumulate over generations. Darwin realised that the unequal ability of individuals to survive and reproduce could cause gradual changes in the population and used the term natural selection to describe this process.
Observations of variations in animals and plants formed the basis of the theory of natural selection. For example, Darwin observed that orchids and insects have a close relationship that allows the pollination of the plants. He noted that orchids have a variety of structures that attract insects, so that pollen from the flowers gets stuck to the insects' bodies. In this way, insects transport the pollen from a male to a female orchid. In spite of the elaborate appearance of orchids, these specialised parts are made from the same basic structures that make up other flowers. In his book, Fertilisation of Orchids (1862), Darwin proposed that the orchid flowers were adapted from pre-existing parts, through natural selection.
Darwin was still researching and experimenting with his ideas on natural selection when he received a letter from Alfred Russel Wallace describing a theory very similar to his own. This led to an immediate joint publication of both theories. Both Wallace and Darwin saw the history of life like a family tree, with each fork in the tree's limbs being a common ancestor. The tips of the limbs represented modern species and the branches represented the common ancestors that are shared amongst many different species. To explain these relationships, Darwin said that all living things were related, and this meant that all life must be descended from a few forms, or even from a single common ancestor. He called this process descent with modification.
Darwin published his theory of evolution by natural selection in On the Origin of Species in 1859. His theory means that all life, including humanity, is a product of continuing natural processes. The implication that all life on Earth has a common ancestor has been met with objections from some religious groups. Their objections are in contrast to the level of support for the theory by more than 99 percent of those within the scientific community today.
Natural selection is commonly equated with survival of the fittest, but this expression originated in Herbert Spencer's Principles of Biology in 1864, five years after Charles Darwin published his original works. Survival of the fittest describes the process of natural selection incorrectly, because natural selection is not only about survival and it is not always the fittest that survives.
Source of variation
Darwin's theory of natural selection laid the groundwork for modern evolutionary theory, and his experiments and observations showed that the organisms in populations varied from each other, that some of these variations were inherited, and that these differences could be acted on by natural selection. However, he could not explain the source of these variations. Like many of his predecessors, Darwin mistakenly thought that heritable traits were a product of use and disuse, and that features acquired during an organism's lifetime could be passed on to its offspring. He looked for examples, such as large ground feeding birds getting stronger legs through exercise, and weaker wings from not flying until, like the ostrich, they could not fly at all. This misunderstanding was called the inheritance of acquired characters and was part of the theory of transmutation of species put forward in 1809 by Jean-Baptiste Lamarck. In the late 19th century this theory became known as Lamarckism. Darwin produced an unsuccessful theory he called pangenesis to try to explain how acquired characteristics could be inherited. In the 1880s August Weismann's experiments indicated that changes from use and disuse could not be inherited, and Lamarckism gradually fell from favour.
The missing information needed to help explain how new features could pass from a parent to its offspring was provided by the pioneering genetics work of Gregor Mendel. Mendel's experiments with several generations of pea plants demonstrated that inheritance works by separating and reshuffling hereditary information during the formation of sex cells and recombining that information during fertilisation. This is like mixing different hands of playing cards, with an organism getting a random mix of half of the cards from one parent, and half of the cards from the other. Mendel called the information factors; however, they later became known as genes. Genes are the basic units of heredity in living organisms. They contain the information that directs the physical development and behaviour of organisms.
Genes are made of DNA. DNA is a long molecule made up of individual molecules called nucleotides. Genetic information is encoded in the sequence of nucleotides, that make up the DNA, just as the sequence of the letters in words carries information on a page. The genes are like short instructions built up of the "letters" of the DNA alphabet. Put together, the entire set of these genes gives enough information to serve as an "instruction manual" of how to build and run an organism. The instructions spelled out by this DNA alphabet can be changed, however, by mutations, and this may alter the instructions carried within the genes. Within the cell, the genes are carried in chromosomes, which are packages for carrying the DNA. It is the reshuffling of the chromosomes that results in unique combinations of genes in offspring. Since genes interact with one another during the development of an organism, novel combinations of genes produced by sexual reproduction can increase the genetic variability of the population even without new mutations. The genetic variability of a population can also increase when members of that population interbreed with individuals from a different population causing gene flow between the populations. This can introduce genes into a population that were not present before.
Evolution is not a random process. Although mutations in DNA are random, natural selection is not a process of chance: the environment determines the probability of reproductive success. Evolution is an inevitable result of imperfectly copying, self-replicating organisms reproducing over billions of years under the selective pressure of the environment. The outcome of evolution is not a perfectly designed organism. The end products of natural selection are organisms that are adapted to their present environments. Natural selection does not involve progress towards an ultimate goal. Evolution does not strive for more advanced, more intelligent, or more sophisticated life forms. For example, fleas (wingless parasites) are descended from a winged, ancestral scorpionfly, and snakes are lizards that no longer require limbs—although pythons still grow tiny structures that are the remains of their ancestor's hind legs. Organisms are merely the outcome of variations that succeed or fail, dependent upon the environmental conditions at the time.
Rapid environmental changes typically cause extinctions. Of all species that have existed on Earth, 99.9 percent are now extinct. Since life began on Earth, five major mass extinctions have led to large and sudden drops in the variety of species. The most recent, the Cretaceous–Paleogene extinction event, occurred 66 million years ago.
Genetic drift
Genetic drift is a cause of allelic frequency change within populations of a species. Alleles are different variations of specific genes. They determine things like hair colour, skin tone, eye colour and blood type; in other words, all the genetic traits that vary between individuals. Genetic drift does not introduce new alleles to a population, but it can reduce variation within a population by removing an allele from the gene pool. Genetic drift is caused by random sampling of alleles. A truly random sample is a sample in which no outside forces affect what is selected. It is like pulling marbles of the same size and weight but of different colours from a brown paper bag. In any offspring, the alleles present are samples of the previous generation's alleles, and chance plays a role in whether an individual survives to reproduce and to pass a sample of their generation onward to the next. The allelic frequency of a population is the fraction of all copies of a gene in the population that are of one specific allelic form.
Genetic drift affects smaller populations more than it affects larger populations.
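To make this size effect concrete, consider a minimal Wright–Fisher-style simulation (a hypothetical sketch, not part of the original text; the population sizes and starting frequency are arbitrary): each generation, the next generation's allele copies are drawn by random sampling from the current generation.

```python
import random

def drift(n_individuals, p0=0.5, generations=100, seed=1):
    """Wright-Fisher-style drift: each generation, the next generation's
    2N allele copies are drawn at random from the current allele pool."""
    rng = random.Random(seed)
    copies = 2 * n_individuals          # diploid: two allele copies each
    p = p0                              # starting frequency of allele A
    for _ in range(generations):
        # Each of the 2N copies is allele A with probability p.
        count_a = sum(rng.random() < p for _ in range(copies))
        p = count_a / copies
        if p in (0.0, 1.0):             # allele lost or fixed: drift stops
            break
    return p

# Drift swamps a small population but barely moves a large one:
print(drift(10))       # often ends at 0.0 or 1.0 (allele lost or fixed)
print(drift(10_000))   # stays close to 0.5
```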
Hardy–Weinberg principle
The Hardy–Weinberg principle states that under certain idealised conditions, including the absence of selection pressures, a large population will have no change in the frequency of alleles as generations pass. A population that satisfies these conditions is said to be in Hardy–Weinberg equilibrium. In particular, Hardy and Weinberg showed that dominant and recessive alleles do not automatically tend to become more and less frequent respectively, as had been thought previously.
The conditions for Hardy–Weinberg equilibrium include that there must be no mutations, immigration, or emigration, all of which can directly change allelic frequencies. Additionally, mating must be totally random, with all males (or females in some cases) being equally desirable mates. This ensures a true random mixing of alleles. A population that is in Hardy–Weinberg equilibrium is analogous to a deck of cards; no matter how many times the deck is shuffled, no new cards are added and no old ones are taken away. Cards in the deck represent alleles in a population's gene pool.
In practice, no population can be in perfect Hardy–Weinberg equilibrium. The population's finite size, combined with natural selection and many other effects, causes the allelic frequencies to change over time.
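In symbols, for a gene with two alleles at frequencies p and q, the equilibrium genotype frequencies follow the standard Hardy–Weinberg formulation (a textbook result, stated here for reference rather than taken from the text above):

```latex
% Allele frequencies for a two-allele locus sum to one:
%   p + q = 1
% At Hardy-Weinberg equilibrium the genotype frequencies are:
p^{2} + 2pq + q^{2} = 1
% where p^2 and q^2 are the two homozygote frequencies and 2pq is the
% heterozygote frequency. Example: if q = 0.1, then q^2 = 0.01, so
% 1 percent of the population is homozygous for the rarer allele.
```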
Population bottleneck
A population bottleneck occurs when the population of a species is reduced drastically over a short period of time due to external forces. In a true population bottleneck, the reduction does not favour any combination of alleles; it is totally random chance which individuals survive. A bottleneck can reduce or eliminate genetic variation from a population. Further drift events after the bottleneck can also reduce the population's genetic diversity. The resulting lack of diversity can leave the population vulnerable to other selective pressures.
A common example of a population bottleneck is the northern elephant seal. Due to excessive hunting throughout the 19th century, the population of the northern elephant seal was reduced to 30 individuals or fewer. The population has since made a full recovery, with the total number of individuals at around 100,000 and growing. The effects of the bottleneck are visible, however. The seals are more likely to have serious problems with disease or genetic disorders, because there is almost no diversity in the population.
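To illustrate how a single bottleneck can purge rare alleles, here is a small sampling sketch; the allele names, their frequencies, and the figure of 30 survivors echoing the seal example are illustrative assumptions, not data:

```python
import random

def surviving_alleles(pool_freqs, survivors, seed=2):
    """Sample the allele copies carried by a small band of survivors and
    report which alleles of the original pool make it through."""
    rng = random.Random(seed)
    alleles = list(pool_freqs)
    weights = [pool_freqs[a] for a in alleles]
    # Each diploid survivor contributes two allele copies.
    sample = rng.choices(alleles, weights=weights, k=2 * survivors)
    return sorted(set(sample))

# Hypothetical pre-bottleneck pool: one common allele, three rarer ones.
pool = {"A1": 0.85, "A2": 0.08, "A3": 0.05, "A4": 0.02}
print(surviving_alleles(pool, survivors=30))   # rare alleles often vanish
```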
Founder effect
The founder effect occurs when a small group from one population splits off and forms a new population, often through geographic isolation. The new population's allelic frequencies will probably differ from those of the original population, making certain alleles more or less common than they were before. The founders of the population will determine the genetic makeup, and potentially the survival, of the new population for generations.
One example of the founder effect is found in the Amish migration to Pennsylvania in 1744. Two of the founders of the colony in Pennsylvania carried the recessive allele for Ellis–van Creveld syndrome. Because the Amish tend to be religious isolates, they intermarry, and through generations of this practice the frequency of Ellis–van Creveld syndrome among the Amish is much higher than in the general population.
Modern synthesis
The modern evolutionary synthesis is based on the concept that populations of organisms have significant genetic variation caused by mutation and by the recombination of genes during sexual reproduction. It defines evolution as the change in allelic frequencies within a population caused by genetic drift, gene flow between subpopulations, and natural selection. Natural selection is emphasised as the most important mechanism of evolution; large changes are the result of the gradual accumulation of small changes over long periods of time.
The modern evolutionary synthesis is the outcome of a merger of several different scientific fields to produce a more cohesive understanding of evolutionary theory. In the 1920s, Ronald Fisher, J.B.S. Haldane and Sewall Wright combined Darwin's theory of natural selection with statistical models of Mendelian genetics, founding the discipline of population genetics. In the 1930s and 1940s, efforts were made to merge population genetics, the observations of field naturalists on the distribution of species and subspecies, and analysis of the fossil record into a unified explanatory model. The application of the principles of genetics to naturally occurring populations, by scientists such as Theodosius Dobzhansky and Ernst Mayr, advanced the understanding of the processes of evolution. Dobzhansky's 1937 work Genetics and the Origin of Species helped bridge the gap between genetics and field biology by presenting the mathematical work of the population geneticists in a form more useful to field biologists, and by showing that wild populations, with their geographically isolated subspecies and reservoirs of genetic diversity in recessive genes, had much more genetic variability than the models of the early population geneticists had allowed for. Mayr, on the basis of an understanding of genes and direct observations of evolutionary processes from field research, introduced the biological species concept, which defined a species as a group of interbreeding or potentially interbreeding populations that are reproductively isolated from all other populations. Both Dobzhansky and Mayr emphasised the importance of subspecies reproductively isolated by geographical barriers in the emergence of new species. The palaeontologist George Gaylord Simpson helped to incorporate palaeontology with a statistical analysis of the fossil record that showed a pattern consistent with the branching and non-directional pathway of evolution of organisms predicted by the modern synthesis.
Evidence for evolution
Scientific evidence for evolution comes from many aspects of biology and includes fossils, homologous structures, and molecular similarities between species' DNA.
Fossil record
Research in the field of palaeontology, the study of fossils, supports the idea that all living organisms are related. Fossils provide evidence that accumulated changes in organisms over long periods of time have led to the diverse forms of life we see today. A fossil itself reveals the organism's structure and the relationships between present and extinct species, allowing palaeontologists to construct a family tree for all of the life forms on Earth.
Modern palaeontology began with the work of Georges Cuvier. Cuvier noted that, in sedimentary rock, each layer contained a specific group of fossils. The deeper layers, which he proposed to be older, contained simpler life forms. He noted that many forms of life from the past are no longer present today. One of Cuvier's successful contributions to the understanding of the fossil record was establishing extinction as a fact. In an attempt to explain extinction, Cuvier proposed the idea of "revolutions" or catastrophism in which he speculated that geological catastrophes had occurred throughout the Earth's history, wiping out large numbers of species. Cuvier's theory of revolutions was later replaced by uniformitarian theories, notably those of James Hutton and Charles Lyell who proposed that the Earth's geological changes were gradual and consistent. However, current evidence in the fossil record supports the concept of mass extinctions. As a result, the general idea of catastrophism has re-emerged as a valid hypothesis for at least some of the rapid changes in life forms that appear in the fossil records.
A very large number of fossils have now been discovered and identified. These fossils serve as a chronological record of evolution. The fossil record provides examples of transitional species that demonstrate ancestral links between past and present life forms. One such transitional fossil is Archaeopteryx, an ancient organism that had the distinct characteristics of a reptile (such as a long, bony tail and conical teeth) yet also had characteristics of birds (such as feathers and a wishbone). The implication from such a find is that modern reptiles and birds arose from a common ancestor.
Comparative anatomy
Comparing the similarities among organisms in their form or in the appearance of their parts, called their morphology, has long been a way to classify life into closely related groups. This can be done by comparing the structure of adult organisms in different species or by comparing the patterns of how cells grow, divide and even migrate during an organism's development.
Taxonomy
Taxonomy is the branch of biology that names and classifies all living things. Scientists use morphological and genetic similarities to assist them in categorising life forms based on ancestral relationships. For example, orangutans, gorillas, chimpanzees and humans all belong to the same taxonomic grouping referred to as a family—in this case the family called Hominidae. These animals are grouped together because of similarities in morphology that come from common ancestry (called homology).
Strong evidence for evolution comes from the analysis of homologous structures: structures in different species that no longer perform the same task but which share a similar structure. Such is the case of the forelimbs of mammals. The forelimbs of a human, cat, whale, and bat all have strikingly similar bone structures. However, each of these four species' forelimbs performs a different task. The same bones that construct a bat's wings, which are used for flight, also construct a whale's flippers, which are used for swimming. Such a "design" makes little sense if they are unrelated and uniquely constructed for their particular tasks. The theory of evolution explains these homologous structures: all four animals shared a common ancestor, and each has undergone change over many generations. These changes in structure have produced forelimbs adapted for different tasks.
However, anatomical comparisons can be misleading, as not all anatomical similarities indicate a close relationship. Organisms that share similar environments will often develop similar physical features, a process known as convergent evolution. Both sharks and dolphins have similar body forms, yet are only distantly related—sharks are fish and dolphins are mammals. Such similarities are a result of both populations being exposed to the same selective pressures. Within both groups, changes that aid swimming have been favoured. Thus, over time, they developed similar appearances (morphology), even though they are not closely related.
Embryology
In some cases, anatomical comparison of structures in the embryos of two or more species provides evidence for a shared ancestor that may not be obvious in the adult forms. As the embryo develops, these homologies can be lost to view, and the structures can take on different functions. Part of the basis of classifying the vertebrate group (which includes humans), is the presence of a tail (extending beyond the anus) and pharyngeal slits. Both structures appear during some stage of embryonic development but are not always obvious in the adult form.
Because of the morphological similarities present in embryos of different species during development, it was once assumed that organisms re-enact their evolutionary history as an embryo. It was thought that human embryos passed through an amphibian then a reptilian stage before completing their development as mammals. Such a re-enactment, often called recapitulation theory, is not supported by scientific evidence. What does occur, however, is that the first stages of development are similar in broad groups of organisms. At very early stages, for instance, all vertebrates appear extremely similar, but do not exactly resemble any ancestral species. As development continues, specific features emerge from this basic pattern.
Vestigial structures
Homology includes a unique group of shared structures referred to as vestigial structures. Vestigial refers to anatomical parts that are of minimal, if any, value to the organism that possesses them. These apparently illogical structures are remnants of organs that played an important role in ancestral forms. Such is the case in whales, which have small vestigial bones that appear to be remnants of the leg bones of their ancestors which walked on land. Humans also have vestigial structures, including the ear muscles, the wisdom teeth, the appendix, the tail bone, body hair (including goose bumps), and the semilunar fold in the corner of the eye.
Biogeography
Biogeography is the study of the geographical distribution of species. Evidence from biogeography, especially from the biogeography of oceanic islands, played a key role in convincing both Darwin and Alfred Russel Wallace that species evolved with a branching pattern of common descent. Islands often contain endemic species, species not found anywhere else, but those species are often related to species found on the nearest continent. Furthermore, islands often contain clusters of closely related species that have very different ecological niches, that is, that have different ways of making a living in the environment. Such clusters form through a process of adaptive radiation where a single ancestral species colonises an island that has a variety of open ecological niches and then diversifies by evolving into different species adapted to fill those empty niches. Well-studied examples include Darwin's finches, a group of 13 finch species endemic to the Galápagos Islands, and the Hawaiian honeycreepers, a group of birds that once, before extinctions caused by humans, numbered 60 species filling diverse ecological roles, all descended from a single finch-like ancestor that arrived on the Hawaiian Islands some 4 million years ago. Another example is the Silversword alliance, a group of perennial plant species, also endemic to the Hawaiian Islands, that inhabit a variety of habitats and come in a variety of shapes and sizes that include trees, shrubs, and ground-hugging mats, but which can be hybridised with one another and with certain tarweed species found on the west coast of North America; it appears that one of those tarweeds colonised Hawaii in the past and gave rise to the entire Silversword alliance.
Molecular biology
Every living organism (with the possible exception of RNA viruses) contains molecules of DNA, which carries genetic information. Genes are the pieces of DNA that carry this information, and they influence the properties of an organism. Genes determine an individual's general appearance and to some extent their behaviour. If two organisms are closely related, their DNA will be very similar. On the other hand, the more distantly related two organisms are, the more differences they will have. For example, brothers are closely related and have very similar DNA, while cousins share a more distant relationship and have far more differences in their DNA. Similarities in DNA are used to determine the relationships between species in much the same manner as they are used to show relationships between individuals. For example, comparing chimpanzees with gorillas and humans shows that there is as much as a 96 percent similarity between the DNA of humans and chimps. Comparisons of DNA indicate that humans and chimpanzees are more closely related to each other than either species is to gorillas.
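At its core, such a comparison is a count of matching positions between aligned sequences. The toy calculation below uses invented 20-base sequences purely for illustration; real methods work on properly aligned genomic data:

```python
def percent_identity(seq_a, seq_b):
    """Percent of positions that match between two aligned DNA sequences
    of equal length (a toy stand-in for real alignment-based methods)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Invented sequences: the first pair differs at 1 of 20 sites, the
# second at 4 of 20, mimicking closer vs. more distant relatives.
print(percent_identity("ATGGCCATTGAATGACGTAC", "ATGGCCATTGAATGACGTAA"))  # 95.0
print(percent_identity("ATGGCCATTGAATGACGTAC", "ATGTCCATAGAATGACGAAA"))  # 80.0
```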
The field of molecular systematics focuses on measuring the similarities in these molecules and using this information to work out how different types of organisms are related through evolution. These comparisons have allowed biologists to build a relationship tree of the evolution of life on Earth. They have even allowed scientists to unravel the relationships between organisms whose common ancestors lived such a long time ago that no real similarities remain in the appearance of the organisms.
Artificial selection
Artificial selection is the controlled breeding of domestic plants and animals. Humans determine which animal or plant will reproduce and which of the offspring will survive; thus, they determine which genes will be passed on to future generations. The process of artificial selection has had a significant impact on the evolution of domestic animals. For example, people have produced different types of dogs by controlled breeding. The differences in size between the Chihuahua and the Great Dane are the result of artificial selection. Despite their dramatically different physical appearance, they and all other dogs evolved from a few wolves domesticated by humans in what is now China less than 15,000 years ago.
Artificial selection has produced a wide variety of plants. In the case of maize (corn), recent genetic evidence suggests that domestication occurred 10,000 years ago in central Mexico. Prior to domestication, the edible portion of the wild form was small and difficult to collect. Today the Maize Genetics Cooperation Stock Center maintains a collection of more than 10,000 genetic variations of maize that have arisen by random mutations and chromosomal variations from the original wild type.
In artificial selection the new breed or variety that emerges is the one with random mutations attractive to humans, while in natural selection the surviving species is the one with random mutations useful to it in its non-human environment. In both natural and artificial selection the variations are a result of random mutations, and the underlying genetic processes are essentially the same. Darwin carefully observed the outcomes of artificial selection in animals and plants to form many of his arguments in support of natural selection. Much of his book On the Origin of Species was based on these observations of the many varieties of domestic pigeons arising from artificial selection. Darwin proposed that if humans could achieve dramatic changes in domestic animals in short periods, then natural selection, given millions of years, could produce the differences seen in living things today.
Coevolution
Coevolution is a process in which two or more species influence the evolution of each other. All organisms are influenced by life around them; however, in coevolution there is evidence that genetically determined traits in each species directly resulted from the interaction between the two organisms.
An extensively documented case of coevolution is the relationship between Pseudomyrmex, a type of ant, and the acacia, a plant that the ant uses for food and shelter. The relationship between the two is so intimate that it has led to the evolution of special structures and behaviours in both organisms. The ant defends the acacia against herbivores and clears the forest floor of the seeds from competing plants. In response, the plant has evolved swollen thorns that the ants use as shelter and special flower parts that the ants eat.
Such coevolution does not imply that the ants and the tree choose to behave in an altruistic manner. Rather, across a population small genetic changes in both ant and tree benefited each. The benefit gave a slightly higher chance of the characteristic being passed on to the next generation. Over time, successive mutations created the relationship we observe today.
Speciation
Given the right circumstances, and enough time, evolution leads to the emergence of new species. Scientists have struggled to find a precise and all-inclusive definition of species. Ernst Mayr defined a species as a population or group of populations whose members have the potential to interbreed naturally with one another to produce viable, fertile offspring. (The members of a species cannot produce viable, fertile offspring with members of other species). Mayr's definition has gained wide acceptance among biologists, but does not apply to organisms such as bacteria, which reproduce asexually.
Speciation is the lineage-splitting event that results in two separate species forming from a single common ancestral population. A widely accepted method of speciation is called allopatric speciation. Allopatric speciation begins when a population becomes geographically separated. Geological processes, such as the emergence of mountain ranges, the formation of canyons, or the flooding of land bridges by changes in sea level may result in separate populations. For speciation to occur, separation must be substantial, so that genetic exchange between the two populations is completely disrupted. In their separate environments, the genetically isolated groups follow their own unique evolutionary pathways. Each group will accumulate different mutations as well as be subjected to different selective pressures. The accumulated genetic changes may result in separated populations that can no longer interbreed if they are reunited. Barriers that prevent interbreeding are either prezygotic (prevent mating or fertilisation) or postzygotic (barriers that occur after fertilisation). If interbreeding is no longer possible, then they will be considered different species. The result of four billion years of evolution is the diversity of life around us, with an estimated 1.75 million different species in existence today.
Usually the process of speciation is slow, occurring over very long time spans; thus direct observations within human life-spans are rare. However speciation has been observed in present-day organisms, and past speciation events are recorded in fossils. Scientists have documented the formation of five new species of cichlid fishes from a single common ancestor that was isolated fewer than 5,000 years ago from the parent stock in Lake Nagubago. The evidence for speciation in this case was morphology (physical appearance) and lack of natural interbreeding. These fish have complex mating rituals and a variety of colorations; the slight modifications introduced in the new species have changed the mate selection process and the five forms that arose could not be convinced to interbreed.
Mechanism
The theory of evolution is widely accepted among the scientific community, serving to link the diverse speciality areas of biology. Evolution provides the field of biology with a solid scientific base. The significance of evolutionary theory is summarised by Theodosius Dobzhansky as "nothing in biology makes sense except in the light of evolution." Nevertheless, the theory of evolution is not static. There is much discussion within the scientific community concerning the mechanisms behind the evolutionary process. For example, the rate at which evolution occurs is still under discussion. In addition, there are conflicting opinions as to which is the primary unit of evolutionary change—the organism or the gene.
Rate of change
Darwin and his contemporaries viewed evolution as a slow and gradual process. Evolutionary trees are based on the idea that profound differences in species are the result of many small changes that accumulate over long periods.
Gradualism had its basis in the works of the geologists James Hutton and Charles Lyell. Hutton's view suggests that profound geological change was the cumulative product of a relatively slow continuing operation of processes which can still be seen in operation today, as opposed to catastrophism which promoted the idea that sudden changes had causes which can no longer be seen at work. A uniformitarian perspective was adopted for biological changes. Such a view can seem to contradict the fossil record, which often shows evidence of new species appearing suddenly, then persisting in that form for long periods. In the 1970s palaeontologists Niles Eldredge and Stephen Jay Gould developed a theoretical model that suggests that evolution, although a slow process in human terms, undergoes periods of relatively rapid change (ranging between 50,000 and 100,000 years) alternating with long periods of relative stability. Their theory is called punctuated equilibrium and explains the fossil record without contradicting Darwin's ideas.
Unit of change
A common unit of selection in evolution is the organism. Natural selection occurs when the reproductive success of an individual is improved or reduced by an inherited characteristic, and reproductive success is measured by the number of an individual's surviving offspring. The organism view has been challenged by a variety of biologists as well as philosophers. Evolutionary biologist Richard Dawkins proposes that much insight can be gained if we look at evolution from the gene's point of view; that is, that natural selection operates as an evolutionary mechanism on genes as well as organisms. In his 1976 book, The Selfish Gene, he explains:
Others view selection working on many levels, not just at a single level of organism or gene; for example, Stephen Jay Gould called for a hierarchical perspective on selection.
See also
Abiogenesis
Creation–evolution controversy
Evidence of common descent
Evolution as fact and theory
Level of support for evolution
Misconceptions about evolution
References
Further reading
External links
Biology theories
Microevolution

Microevolution is the change in allele frequencies that occurs over time within a population. It is due to four different processes: mutation, selection (natural and artificial), gene flow and genetic drift. It happens over a relatively short (in evolutionary terms) amount of time compared with the changes termed macroevolution.
Population genetics is the branch of biology that provides the mathematical structure for the study of the process of microevolution. Ecological genetics concerns itself with observing microevolution in the wild. Typically, observable instances of evolution are examples of microevolution; for example, bacterial strains that have antibiotic resistance.
Microevolution provides the raw material for macroevolution.
Difference from macroevolution
Macroevolution is guided by sorting of interspecific variation ("species selection"), as opposed to sorting of intraspecific variation in microevolution. Species selection may occur as (a) effect-macroevolution, where organism-level traits (aggregate traits) affect speciation and extinction rates, and (b) strict-sense species selection, where species-level traits (e.g. geographical range) affect speciation and extinction rates. Macroevolution does not produce evolutionary novelties, but it determines their proliferation within the clades in which they evolved, and it adds species-level traits as non-organismic factors of sorting to this process.
Four processes
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication. Errors are introduced particularly often in the process of DNA replication, in the polymerization of the second strand. These errors can also be induced by the organism itself, by cellular processes such as hypermutation. Mutations can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the proofreading ability of DNA polymerases. (Without proofreading, error rates are a thousandfold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence.
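Those per-base error rates imply a modest number of new errors each time a large genome is copied. A back-of-the-envelope calculation, assuming the standard figure of roughly 3.2 billion base pairs for the human genome (the genome size is a textbook value, not taken from the text above):

```python
# Expected replication errors per genome copy, before repair, at the
# per-base error rates quoted above (1 error per 10-100 million bases).
genome_size = 3.2e9                               # approx. human genome, bp
for rate in (1 / 10_000_000, 1 / 100_000_000):
    print(f"rate {rate:.0e}: ~{genome_size * rate:.0f} errors per copy")
# Prints roughly 320 and 32 errors per copy, respectively.
```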
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment making some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation).
Mutation can result in several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations. Therefore, the optimal mutation rate for a species is a trade-off between costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system.
Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger families of genes of shared ancestry. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions.
In the latter process, protein domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA.
Selection
Selection is the process by which heritable traits that make it more likely for an organism to survive and successfully reproduce become more common in a population over successive generations.
It is sometimes valuable to distinguish between naturally occurring selection, natural selection, and selection that is a manifestation of choices made by humans, artificial selection. The distinction is rather blurred; natural selection is nevertheless by far the dominant form of selection.
The natural genetic variation within a population of organisms means that some individuals will survive more successfully than others in their current environment. Factors which affect reproductive success are also important, an issue which Charles Darwin developed in his ideas on sexual selection.
Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype which gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in speciation (the emergence of new species).
Natural selection is one of the cornerstones of modern biology. The term was introduced by Darwin in his groundbreaking 1859 book On the Origin of Species, in which natural selection was described by analogy to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favored for reproduction. The concept of natural selection was originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, nothing was known of modern genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical and molecular genetics is termed the modern evolutionary synthesis. Natural selection remains the primary explanation for adaptive evolution.
Genetic drift
Genetic drift is the change in the relative frequency in which a gene variant (allele) occurs in a population due to random sampling. That is, the alleles in the offspring are a random sample of those in the parents, and chance has a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction or percentage of its gene copies that share a particular form.
Genetic drift is an evolutionary process which leads to changes in allele frequencies over time. It may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and may be beneficial, neutral, or detrimental to reproductive success.
The effect of genetic drift is larger in small populations, and smaller in large populations. Vigorous debates rage among scientists over the relative importance of genetic drift compared with natural selection. Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the changes in the genetic material are caused by genetic drift. The predictions of neutral theory, based on genetic drift, do not fit recent data on whole genomes well: these data suggest that the frequencies of neutral alleles change primarily due to selection at linked sites, rather than due to genetic drift by means of sampling error.
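The dependence on population size can be quantified with a standard population-genetics result (not derived in the text above): in an idealized population of N diploid individuals, expected heterozygosity H decays each generation by a factor of 1 − 1/(2N).

```latex
% Expected heterozygosity after t generations of drift in an idealized
% Wright-Fisher population of N diploid individuals:
H_{t} = H_{0}\left(1 - \frac{1}{2N}\right)^{t}
% For N = 10 the per-generation factor is 0.95, so variation erodes
% quickly; for N = 10^6 it is 0.9999995, and drift is almost negligible.
```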
Gene flow
Gene flow is the exchange of genes between populations, which are usually of the same species. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer.
Migration into or out of a population can change allele frequencies as well as introduce genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading genetic differences between the populations. Gene flow is hindered by mountain ranges, oceans and deserts, or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes.
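A simple way to see how migration moves allele frequencies is the textbook continent–island model, sketched here as an assumed illustration rather than anything stated in the text: if a fraction m of a population is replaced by migrants each generation, the local frequency p of an allele is pulled toward the migrant frequency p_m.

```latex
% One-generation allele-frequency update in the continent-island model,
% where a fraction m of the population is replaced by migrants carrying
% the allele at frequency p_m:
p' = (1 - m)\,p + m\,p_{m}
% Iterating drives p geometrically toward p_m, which is why sustained
% gene flow homogenizes diverging populations.
```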
Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridization in developing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploid hybrids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations.
Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as a bacterium that acquires resistance genes can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria.
There are a number of factors that affect the rate of gene flow between different populations. One of the most significant factors is mobility, as greater mobility of an individual tends to give it greater migratory potential. Animals tend to be more mobile than plants, although pollen and seeds may be carried great distances by animals or wind.
Maintained gene flow between two populations can also lead to a combination of the two gene pools, reducing the genetic variation between the two groups. It is for this reason that gene flow strongly acts against speciation: by recombining the gene pools of the groups, it repairs the developing differences in genetic variation that would otherwise have led to full speciation and the creation of daughter species.
For example, if a species of grass grows on both sides of a highway, pollen is likely to be transported from one side to the other and vice versa. If this pollen is able to fertilise the plant where it ends up and produce viable offspring, then the alleles in the pollen have effectively been able to move from the population on one side of the highway to the other.
Origin and extended use of the term
Origin
The term microevolution was first used by botanist Robert Greenleaf Leavitt in the journal Botanical Gazette in 1909, addressing what he called the "mystery" of how formlessness gives rise to form.
...The production of form from formlessness in the egg-derived individual, the multiplication of parts and the orderly creation of diversity among them, in an actual evolution, of which anyone may ascertain the facts, but of which no one has dissipated the mystery in any significant measure. This microevolution forms an integral part of the grand evolution problem and lies at the base of it, so that we shall have to understand the minor process before we can thoroughly comprehend the more general one...
However, Leavitt was using the term to describe what we would now call developmental biology; it was not until the Russian entomologist Yuri Filipchenko used the terms "macroevolution" and "microevolution" in 1927 in his German-language work, Variabilität und Variation, that it attained its modern usage. The term was later brought into the English-speaking world by Filipchenko's student Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937).
Use in creationism
In young Earth creationism and baraminology a central tenet is that evolution can explain diversity in a limited number of created kinds which can interbreed (which they call "microevolution") while the formation of new "kinds" (which they call "macroevolution") is impossible. This acceptance of "microevolution" only within a "kind" is also typical of old Earth creationism.
Scientific organizations such as the American Association for the Advancement of Science describe microevolution as small scale change within species, and macroevolution as the formation of new species, but otherwise not being different from microevolution. In macroevolution, an accumulation of microevolutionary changes leads to speciation. The main difference between the two processes is that one occurs within a few generations, whilst the other takes place over thousands of years (i.e. a quantitative difference). Essentially they describe the same process; although evolution beyond the species level results in beginning and ending generations which could not interbreed, the intermediate generations could.
Opponents of creationism argue that changes in the number of chromosomes can be accounted for by intermediate stages in which a single chromosome divides in generational stages, or multiple chromosomes fuse, and cite the chromosome difference between humans and the other great apes as an example. Creationists insist that since the actual divergence between the other great apes and humans was not observed, the evidence is circumstantial.
Describing the fundamental similarity between macro and microevolution in his authoritative textbook "Evolutionary Biology," biologist Douglas Futuyma writes,
Contrary to the claims of some antievolution proponents, evolution of life forms beyond the species level (i.e. speciation) has indeed been observed and documented by scientists on numerous occasions. In creation science, creationists accepted speciation as occurring within a "created kind" or "baramin", but objected to what they called "third level-macroevolution" of a new genus or higher rank in taxonomy. There is ambiguity in the ideas as to where to draw a line on "species", "created kinds", and what events and lineages fall within the rubric of microevolution or macroevolution.
See also
Punctuated equilibrium - due to gene flow, major evolutionary changes may be rare
References
External links
Microevolution (UC Berkeley)
Microevolution vs Macroevolution
Evolutionary biology concepts
Population genetics
Tinbergen's four questions

Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause.
Second question: Phylogeny (evolution)
Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve.
Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot.
It corresponds to Aristotle's formal cause.
Proximate explanations
Third question: Mechanism (causation)
Some prominent classes of proximate causal mechanisms include:
The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability.
Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species.
Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates.
In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect.
However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity."
It corresponds to Aristotle's efficient cause.
Fourth question: Ontogeny (development)
Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form.
In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture).
An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism).
Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact.
A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87).
See developmental biology and developmental psychology.
It corresponds to Aristotle's material cause.
Causal relationships
The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels.
Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour.
Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment, or more technically the environment of evolutionary adaptedness (EEA), may result in evolution as measured by a change in its genes.
In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods.
Examples
Vision
Four ways of explaining visual perception:
Function: To find food and avoid danger.
Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot.
Mechanism: The lens of the eye focuses light on the retina.
Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99).
Westermarck effect
Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196):
Function: To discourage inbreeding, which decreases the number of viable offspring.
Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago.
Mechanism: Little is known about the neuromechanism.
Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzim.
Romantic love
Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021):
Function: Mate choice, courtship, sex, pair-bonding.
Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans.
Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love.
Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan.
Sleep
Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021):
Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger.
Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds.
Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm.
Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences.
Use of the four-question schema as "periodic table"
Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research, 1–4, and the levels of inquiry, a–g); the tabulation was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). One advantage of this organizational system, which might be called the "periodic table of life sciences," is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry.
This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF).
References
Sources
Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, 7th edition. Sinauer.
Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html
Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, 2nd edition. Pearson Education.
Cartwright, John (2000) Evolution and Human Behaviour, MIT Press.
Krebs, John R., Davies, N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing.
Lorenz, Konrad (1937) "Biologische Fragestellungen in der Tierpsychologie" (i.e. Biological Questions in Animal Psychology), Zeitschrift für Tierpsychologie, 1:24–32.
Mayr, Ernst (2001) What Evolution Is, Basic Books.
Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB.
Moore, David S. (2001) The Dependent Gene: The Fallacy of "Nature vs. Nurture", Henry Holt.
Nesse, Randolph M. (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681–682.
Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial.
Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20:410–433.
Wilson, Edward O. (1998) Consilience: The Unity of Knowledge, Vintage Books.
External links
Diagrams
The Four Areas of Biology pdf
The Four Areas and Levels of Inquiry pdf
Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt
Tinbergen's Four Questions, organized pdf
Derivative works
On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff.
Behavioral ecology
Ethology
Evolutionary psychology
Sociobiology
Anabolism
Anabolism is the set of metabolic pathways that construct macromolecules, such as DNA and RNA, from smaller units. These reactions require energy and are known as endergonic processes. Anabolism is the building-up aspect of metabolism, whereas catabolism is the breaking-down aspect. Anabolism is usually synonymous with biosynthesis.
Pathway
Polymerization, an anabolic pathway used to build macromolecules such as nucleic acids, proteins, and polysaccharides, uses condensation reactions to join monomers. Macromolecules are created from smaller molecules using enzymes and cofactors.
Energy source
Anabolism is powered by catabolism, where large molecules are broken down into smaller parts and then used up in cellular respiration. Many anabolic processes are powered by the cleavage of adenosine triphosphate (ATP). Anabolism usually involves reduction and decreases entropy, making it unfavorable without energy input. The starting materials, called the precursor molecules, are joined using the chemical energy made available from hydrolyzing ATP, reducing the cofactors NAD+, NADP+, and FAD, or performing other favorable side reactions. Occasionally it can also be driven by entropy without energy input, in cases like the formation of the phospholipid bilayer of a cell, where hydrophobic interactions aggregate the molecules.
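To make the coupling concrete, a standard textbook illustration is the synthesis of glutamine, whose unfavorable free-energy change is overcome by summing it with ATP hydrolysis (standard free-energy changes of coupled reactions are additive; the values below are conventional standard-state figures and vary slightly between sources):

```latex
\begin{align*}
\text{glutamate} + \mathrm{NH_4^+} &\longrightarrow \text{glutamine} + \mathrm{H_2O}
  & \Delta G'^{\circ} &\approx +14.2\ \mathrm{kJ/mol} \\
\mathrm{ATP} + \mathrm{H_2O} &\longrightarrow \mathrm{ADP} + \mathrm{P_i}
  & \Delta G'^{\circ} &\approx -30.5\ \mathrm{kJ/mol} \\
\text{glutamate} + \mathrm{NH_4^+} + \mathrm{ATP} &\longrightarrow \text{glutamine} + \mathrm{ADP} + \mathrm{P_i}
  & \Delta G'^{\circ} &\approx -16.3\ \mathrm{kJ/mol}
\end{align*}
```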
Cofactors
The reducing agents NADH, NADPH, and FADH2, as well as metal ions, act as cofactors at various steps in anabolic pathways. NADH, NADPH, and FADH2 act as electron carriers, while charged metal ions within enzymes stabilize charged functional groups on substrates.
Substrates
Substrates for anabolism are mostly intermediates taken from catabolic pathways during periods of high energy charge in the cell.
Functions
Anabolic processes build organs and tissues. These processes produce growth and differentiation of cells and increase in body size, a process that involves synthesis of complex molecules. Examples of anabolic processes include the growth and mineralization of bone and increases in muscle mass.
Anabolic hormones
Endocrinologists have traditionally classified hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The classic anabolic hormones are the anabolic steroids, which stimulate protein synthesis and muscle growth, and insulin.
Photosynthetic carbohydrate synthesis
Photosynthetic carbohydrate synthesis in plants and certain bacteria is an anabolic process that produces glucose, cellulose, starch, lipids, and proteins from CO2. It uses the energy produced from the light-driven reactions of photosynthesis, and creates the precursors to these large molecules via carbon assimilation in the photosynthetic carbon reduction cycle, a.k.a. the Calvin cycle.
Amino acid biosynthesis
All amino acids are formed from intermediates in the catabolic processes of glycolysis, the citric acid cycle, or the pentose phosphate pathway. From glycolysis, glucose 6-phosphate is a precursor for histidine; 3-phosphoglycerate is a precursor for glycine and cysteine; phosphoenolpyruvate, combined with the 3-phosphoglycerate-derivative erythrose 4-phosphate, forms tryptophan, phenylalanine, and tyrosine; and pyruvate is a precursor for alanine, valine, leucine, and isoleucine. From the citric acid cycle, α-ketoglutarate is converted into glutamate and subsequently glutamine, proline, and arginine; and oxaloacetate is converted into aspartate and subsequently asparagine, methionine, threonine, and lysine.
Glycogen storage
During periods of high blood sugar, glucose 6-phosphate from glycolysis is diverted to the glycogen-storing pathway. It is changed to glucose-1-phosphate by phosphoglucomutase and then to UDP-glucose by UTP--glucose-1-phosphate uridylyltransferase. Glycogen synthase adds this UDP-glucose to a glycogen chain.
Gluconeogenesis
Glucagon is traditionally a catabolic hormone, but also stimulates the anabolic process of gluconeogenesis by the liver, and to a lesser extent the kidney cortex and intestines, during starvation to prevent low blood sugar. It is the process of converting pyruvate into glucose. Pyruvate can come from the breakdown of glucose, lactate, amino acids, or glycerol. The gluconeogenesis pathway has many reversible enzymatic processes in common with glycolysis, but it is not the process of glycolysis in reverse. It uses different irreversible enzymes to ensure the overall pathway runs in one direction only.
Regulation
Anabolism operates with enzymes separate from those of catabolism, and its pathways include irreversible steps at some point. This allows the cell to regulate the rate of production and prevent an infinite loop, also known as a futile cycle, from forming with catabolism.
The balance between anabolism and catabolism is sensitive to ADP and ATP, otherwise known as the energy charge of the cell. High amounts of ATP cause cells to favor the anabolic pathway and slow catabolic activity, while excess ADP slows anabolism and favors catabolism. These pathways are also regulated by circadian rhythms, with processes such as glycolysis fluctuating to match an animal's normal periods of activity throughout the day.
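The energy charge mentioned here is commonly quantified by Atkinson's adenylate energy charge, which also accounts for AMP; it runs from 0 (all AMP) to 1 (all ATP), and is typically held near 0.9 in healthy cells:

```latex
\text{energy charge} \;=\; \frac{[\mathrm{ATP}] + \tfrac{1}{2}[\mathrm{ADP}]}{[\mathrm{ATP}] + [\mathrm{ADP}] + [\mathrm{AMP}]}
```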
Etymology
The word anabolism is from Neo-Latin, with roots from the Greek ana, "upward", and ballein, "to throw".
References
Metabolism
Forest ecology
Forest ecology is the scientific study of the interrelated patterns, processes, flora, fauna, funga, and ecosystems in forests. The management of forests falls under forestry, silviculture, and forest management. A forest ecosystem is a natural woodland unit consisting of all plants, animals, and micro-organisms (biotic components) in that area functioning together with all of the non-living physical (abiotic) factors of the environment.
Importance
Forests have an enormously important role to play in the global ecosystem. Forests produce approximately 28% of the Earth's oxygen (most of the remainder is produced by oceanic plankton); they also serve as homes for millions of people, and billions more depend on forests in some way. Likewise, a large proportion of the world's animal species live in forests. Forests are also used for economic purposes, such as fuel and wood products. Forest ecology therefore has a great impact upon the whole biosphere and the human activities that are sustained by it.
Approaches
Forests are studied at a number of organisational levels, from the individual organism to the ecosystem. However, as the term forest connotes an area inhabited by more than one organism, forest ecology most often concentrates on the level of the population, community or ecosystem. Logically, trees are an important component of forest research, but the wide variety of other life forms and abiotic components in most forests means that other elements, such as wildlife or soil nutrients, are also crucial components.
Forest ecology shares characteristics and methodological approaches with other areas of terrestrial plant ecology, however, the presence of trees makes forest ecosystems and their study unique in numerous ways due to the potential for a wide variety of forest structures created by the uniquely large size and height of trees compared with other terrestrial plants.
Forest pathology
Community diversity and complexity
Since trees can grow larger than other plant life-forms, there is the potential for a wide variety of forest structures (or physiognomies). The infinite number of possible spatial arrangements of trees of varying size and species makes for a highly intricate and diverse micro-environment in which environmental variables such as solar radiation, temperature, relative humidity, and wind speed can vary considerably over large and small distances. In addition, an important proportion of a forest ecosystem's biomass is often underground, where soil structure, water quality and quantity, and levels of various soil nutrients can vary greatly. Thus, forests are often highly heterogeneous environments compared to other terrestrial plant communities. This heterogeneity in turn can enable great biodiversity of species of both plants and animals. Some structures, such as tree ferns may be keystone species for a diverse range of other species.
A number of factors within the forest affect biodiversity; the primary factors enhancing wildlife abundance and biodiversity are the presence of diverse tree species within the forest and the absence of even-aged timber management. For example, the wild turkey thrives where uneven heights and canopy variations exist, and its numbers are diminished by even-aged timber management.
Forest management techniques that mimic natural disturbance events (variable retention forestry) can allow community diversity to recover rapidly for a variety of groups including beetles.
Types of Forest Ecosystems
Temperate Forests
Tropical Forests
Tropical forests are some of the most diverse ecosystems in the world. Although there are many different tree species present per acre of forest, many share similar appearances due to similar environmental pressures. Shared traits possessed by many tropical trees include thick, leathery leaves that are elongated and ovular with mid-ribs and drip-tips. These adaptations help to quickly drain water from the leaves, likely preventing the growth of algae and lichens and keeping water from reflecting the sunlight or restricting transpiration. Larger tropical trees commonly have buttress roots, and mid-sized trees have stilt roots, which help support their tall, vertical structures in the shallow, moist soil. Tropical forests grow very densely due to the heavy rainfall and year-round growing season. This creates competition for light, which causes many trees to grow very tall and block most or all of the light from reaching the forest floor. Because of this, the canopy exhibits distinct stratified layers, from the tallest trees to the tightly packed midstory trees below. Due to the low light on the forest floor, there is a diverse population of epiphytes, plants that grow on the canopy trees rather than in soil to access better light. Many vines use a similar tactic; however, they root in the ground and grow up the trees to reach light. The fauna in tropical forests also show many unique adaptations to fill various niches, and these adaptations are possessed by different species depending on location. For example, similar-looking animals in the rainforests of South America and Africa share ecological niches, yet the mammals filling them in South America are rodents while the African ones are ungulates. This demonstrates the convergent evolution between species found in tropical forest environments.
Coniferous Forests
Conifers have unique traits that make them especially adapted to harsh conditions, including cold, drought, wind, and snow. Their leaves have a wax coating and are filled with resin to help prevent moisture loss; this makes them unpalatable to animals and slow to decompose. This leaf litter creates an acidic forest floor that is distinctive of coniferous forests. Because of the types of leaves possessed by conifers, they face the problem of soil nutrient loss; this problem is solved through mycorrhizal symbiosis with fungi that help transport the limited nutrients to the trees in exchange for sugars. Some conifers are incapable of surviving without mycorrhizal fungi. The majority of conifers are also evergreen, allowing them to take advantage of the short growing seasons of their respective environments. Their thin, tapered structure helps them withstand strong winds without being blown over, and the stereotypical cone shape of conifers helps prevent large quantities of snow from building up on their branches and breaking them. Due to the harsh environments in which coniferous forests are commonly found, diversity is limited in both plant and animal species. The colder climates limit the number of reptilian and amphibian species that can survive. The species more commonly found in coniferous forests are mammals, including large herbivores such as moose and elk, predators like bears and wolves, and a few smaller species like rabbits, foxes, and mink. There are also a variety of migratory bird species and some birds of prey, such as owls and hawks. Coniferous forests contain a variety of valuable pulp and lumber trees, making them some of the most economically important ecosystems. They have also historically been exploited for the fur trade, owing to the animal species that inhabit them.
Island Forests
Ecological Interactions
Plant-Plant Interactions
In forests, trees and shrubs often serve as nurse plants that facilitate the establishment and seedling growth of understory plants. The forest canopy protects young understory plants from extremes of temperature and dry conditions.
Mycorrhizal Symbiosis
An important interaction in forest ecosystems is the mycorrhizal network, which consists of fungi and plants in symbiotic relationships. Mycorrhizal networks have been shown to increase the uptake of important nutrients, especially those, like phosphorus, that disperse slowly through the soil. The fine hyphae of the mycelium reach farther into the soil than the roots of the plant, allowing the fungus better access to phosphorus and water. The mycorrhizal network can also transport water and nutrients between plants. These interactions can confer drought resistance on the symbiotic plants, helping to protect them as climate change progresses. However, the benefit of mycorrhizal networks has been shown to vary greatly depending on the species of plant and on nutrient availability. A plant's benefit from mycorrhizal fungi decreases as nutrient density increases, because the sugars the plant gives up cost more than the benefit it receives. And while many plants rely on mycorrhizal symbiosis, not all possess this ability, and those without it can be negatively affected by the presence of mycorrhizal fungi.
Ecological potential of forest species
The ecological potential of a particular species is a measure of its capacity to effectively compete in a given geographical area, ahead of other species, as they all try to occupy a natural space. For some areas it has been quantified, as for instance by Hans-Jürgen Otto, for central Europe. He takes three groups of parameters:
Related to site requirements: Tolerance to low temperatures, tolerance to dry climate, frugality.
Specific qualities: Shade tolerance, height growth, stability, longevity, regeneration capacity.
Specific risks: Resistance to late freezing, resistance to wind/ice storm, resistance to fire, resistance to biotic agents.
Every parameter is scored between 0 and 5 for each considered species, and a global mean value is then calculated. A value above 3.5 is considered high, below 3.0 low, and intermediate for those in between. In this study Fagus sylvatica scores 3.82, Fraxinus excelsior 3.08, and Juglans regia 2.92, exemplifying the high, intermediate, and low categories respectively.
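A minimal Python sketch of this scoring scheme follows; the classification thresholds and the three species means are taken from the text above, but the function names and the per-parameter example values are invented for illustration, not Otto's published data:

```python
def ecological_potential(scores):
    """Global mean of per-parameter scores, each on a 0-5 scale."""
    return sum(scores.values()) / len(scores)

def category(value):
    """Classify the mean: above 3.5 high, below 3.0 low, otherwise intermediate."""
    if value > 3.5:
        return "high"
    if value < 3.0:
        return "low"
    return "intermediate"

# Hypothetical per-parameter scores for a fictitious species, using
# Otto's three groups of parameters as listed above:
example = {
    "tolerance to low temperatures": 4, "tolerance to dry climate": 3,
    "frugality": 4,
    "shade tolerance": 5, "height growth": 4, "stability": 3,
    "longevity": 4, "regeneration capacity": 4,
    "resistance to late freezing": 3, "resistance to wind/ice storm": 3,
    "resistance to fire": 2, "resistance to biotic agents": 3,
}

value = ecological_potential(example)
print(f"{value:.2f} -> {category(value)}")   # 3.50 -> intermediate
```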
Matter and energy flows
Energy flux
Forests accumulate large amounts of standing biomass, and many are capable of accumulating it at high rates, i.e. they are highly productive. Such high levels of biomass and tall vertical structures represent large stores of potential energy that can be converted to kinetic energy under the right circumstances.
The world’s forests contain about 606 gigatonnes of living biomass (above- and below-ground) and 59 gigatonnes of dead wood.
Two such conversions of great importance are fires and treefalls, both of which radically alter the biota and the physical environment where they occur. Also, in forests of high productivity, the rapid growth of the trees themselves induces biotic and environmental changes, although at a slower rate and lower intensity than relatively instantaneous disturbances such as fires.
Water
Forest trees store large amounts of water because of their large size and anatomical/physiological characteristics. They are therefore important regulators of hydrological processes, especially those involving groundwater hydrology and local evaporation and rainfall/snowfall patterns.
An estimated 399 million ha of forest is designated primarily for the protection of soil and water, an increase of 119 million ha since 1990.
Thus, forest ecological studies are sometimes closely aligned with meteorological and hydrological studies in regional ecosystem or resource planning studies. Perhaps more importantly the duff or leaf litter can form a major repository of water storage. When this litter is removed or compacted (through grazing or human overuse), erosion and flooding are exacerbated as well as deprivation of dry season water for forest organisms.
Death and regeneration
Woody material, often referred to as coarse woody debris, decays relatively slowly in many forests in comparison to most other organic materials, due to a combination of environmental factors and wood chemistry (see lignin). Trees growing in arid and/or cold environments do so especially slowly. Thus, tree trunks and branches can remain on the forest floor for long periods, affecting such things as wildlife habitat, fire behaviour, and tree regeneration processes.
Some trees leave behind eerie skeletons after death, but these conspicuous deaths are few compared with the number of tree deaths that go unnoticed. Thousands of seedlings can be produced by a single tree, yet only a few grow to maturity. Most of those deaths are caused by competition for light, water, or soil nutrients, a process called natural thinning. Individual deaths from natural thinning go unnoticed, but collectively many deaths help shape forest ecosystems. Forest regrowth after a disturbance proceeds through four stages: the establishment phase, a rapid increase in seedlings; the thinning phase, which follows canopy closure, when the seedlings shaded beneath the canopy die; the transition phase, when the death of a single canopy tree creates a pocket of light that gives new seedlings the opportunity to grow; and the steady-state phase, when the forest contains trees of different sizes and ages.
See also
Clear cutting
Close to nature forestry
Deforestation and climate change
Forest Ecology and Management (journal)
Forest Principles
Intact forest landscapes
Mountain ecology
Old-growth forest
Plant ecology
Regeneration (ecology)
References
Bibliography
Philip Joseph Burton. 2003. Towards sustainable management of the boreal forest 1039 pages
Robert W. Christopherson. 1996. Geosystems: An Introduction to Physical Geography. Prentice Hall Inc.
C. Michael Hogan. 2008. Wild turkey: Meleagris gallopavo, GlobalTwitcher.com, ed. N. Stromberg
James P. Kimmins. 2004. Forest Ecology: a foundation for sustainable forest management and environmental ethics in forestry, 3rd edition. Prentice Hall, Upper Saddle River, NJ, USA. 611 pages
Phylogenetics
In biology, phylogenetics is the study of the evolutionary history of life using genetic data; drawing such conclusions is known as phylogenetic inference. It establishes relationships between organisms from empirical data and observed heritable traits: DNA sequences, protein amino acid sequences, and morphology. The result is a phylogenetic tree, a diagram setting out the hypothesized relationships between organisms and their evolutionary history.
The tips of a phylogenetic tree can be living taxa or fossils, which represent the present time or "end" of an evolutionary lineage, respectively. A phylogenetic diagram can be rooted or unrooted. A rooted tree diagram indicates the hypothetical common ancestor of the tree. An unrooted tree diagram (a network) makes no assumption about the ancestral line, and does not show the origin or "root" of the taxa in question or the direction of inferred evolutionary transformations.
In addition to their use for inferring phylogenetic patterns among taxa, phylogenetic analyses are often employed to represent relationships among genes or individual organisms. Such uses have become central to understanding biodiversity, evolution, ecology, and genomes.
Phylogenetics is a component of systematics that uses similarities and differences of the characteristics of species to interpret their evolutionary relationships and origins. Phylogenetics focuses on whether the characteristics of a species reinforce a phylogenetic inference that it diverged from the most recent common ancestor of a taxonomic group.
In the field of cancer research, phylogenetics can be used to study the clonal evolution of tumors and molecular chronology, predicting and showing how cell populations vary throughout the progression of the disease and during treatment, using whole genome sequencing techniques. The evolutionary processes behind cancer progression are quite different from those in most species and are important to phylogenetic inference; these differences manifest in several areas: the types of aberrations that occur, the rates of mutation, the high heterogeneity (variability) of tumor cell subclones, and the absence of genetic recombination.
Phylogenetics can also aid in drug design and discovery. Phylogenetics allows scientists to organize species and can show which species are likely to have inherited particular traits that are medically useful, such as producing biologically active compounds, i.e., those that have effects on the human body. For example, venom-producing animals are particularly useful in drug discovery: venoms from these animals have yielded several important drugs, e.g., ACE inhibitors and Prialt (ziconotide). To find new venoms, scientists use phylogenetics to screen for close relatives of species already known to have the useful trait. A phylogenetic tree can show in which fish lineages venom originated, and hence which related fish may also carry the trait; using this approach in studying venomous fish, biologists are able to identify the fish species that may be venomous. Biologists have used the same approach in many groups, such as snakes and lizards.
In forensic science, phylogenetic tools are useful for assessing DNA evidence in court cases. A simple phylogenetic tree of viruses A–E, for example, shows the relationships between the viruses, e.g., that all of them are descendants of virus A.
HIV forensics uses phylogenetic analysis to track the differences in HIV genes and determine the relatedness of two samples. Phylogenetic analysis has been used in criminal trials to exonerate or convict individuals. HIV forensics does have its limitations: it cannot be the sole proof of transmission between individuals, and phylogenetic analysis showing transmission relatedness does not indicate the direction of transmission.
Taxonomy and classification
Taxonomy is the identification, naming, and classification of organisms. Compared to systematization, classification emphasizes whether a species has characteristics of a taxonomic group. The Linnaean classification system, developed in the 1700s by Carolus Linnaeus, is the foundation for modern classification methods. Linnaean classification relies on an organism's phenotype or physical characteristics to group and organize species. With the emergence of biochemistry, organism classifications are now usually based on phylogenetic data, and many systematists contend that only monophyletic taxa should be recognized as named groups. The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters (synapomorphies); evolutionary taxonomy tries to take into account both the branching pattern and "degree of difference" to find a compromise between them.
Inference of a phylogenetic tree
Usual methods of phylogenetic inference involve computational approaches implementing the optimality criteria and methods of parsimony, maximum likelihood (ML), and MCMC-based Bayesian inference. All these depend upon an implicit or explicit mathematical model describing the evolution of characters observed.
Phenetics, popular in the mid-20th century but now largely obsolete, used distance matrix-based methods to construct trees based on overall similarity in morphology or similar observable traits (i.e. in the phenotype or the overall similarity of DNA, not the DNA sequence), which was often assumed to approximate phylogenetic relationships.
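As an illustration of how a distance-matrix method proceeds, here is a minimal Python sketch of UPGMA (average-linkage clustering), a classic algorithm of this family; the taxon labels and pairwise distances are invented toy data, not real measurements:

```python
# UPGMA: repeatedly merge the two closest clusters, averaging distances
# weighted by cluster size. A sketch only, not a substitute for real software.

def upgma(dist, sizes):
    """dist: {frozenset({x, y}): distance}; sizes: {cluster: leaf count}.
    Returns the final tree as a nested tuple (child order is arbitrary)."""
    while len(sizes) > 1:
        pair = min(dist, key=dist.get)            # closest pair of clusters
        a, b = pair
        na, nb = sizes.pop(a), sizes.pop(b)
        del dist[pair]
        merged = (a, b)
        for c in list(sizes):                     # update distances to the rest
            dac = dist.pop(frozenset({a, c}))
            dbc = dist.pop(frozenset({b, c}))
            dist[frozenset({merged, c})] = (na * dac + nb * dbc) / (na + nb)
        sizes[merged] = na + nb
    return next(iter(sizes))

taxa = ["human", "chimp", "gorilla", "orangutan"]   # illustrative labels
d = {frozenset({"human", "chimp"}): 2,
     frozenset({"human", "gorilla"}): 4,
     frozenset({"chimp", "gorilla"}): 4,
     frozenset({"human", "orangutan"}): 6,
     frozenset({"chimp", "orangutan"}): 6,
     frozenset({"gorilla", "orangutan"}): 6}
print(upgma(d, {t: 1 for t in taxa}))
# e.g. ((('human', 'chimp'), 'gorilla'), 'orangutan'), up to child order
```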
Prior to 1950, phylogenetic inferences were generally presented as narrative scenarios. Such methods are often ambiguous and lack explicit criteria for evaluating alternative hypotheses.
Impacts of taxon sampling
In phylogenetic analysis, taxon sampling selects a small group of taxa to represent the evolutionary history of its broader population. This process is also known as stratified sampling or clade-based sampling. The practice occurs given limited resources to compare and analyze every species within a target population. Based on the representative group selected, the construction and accuracy of phylogenetic trees vary, which impacts derived phylogenetic inferences.
Unavailable datasets, such as an organism's incomplete DNA and protein amino acid sequences in genomic databases, directly restrict taxonomic sampling. Consequently, a significant source of error within phylogenetic analysis occurs due to inadequate taxon samples. Accuracy may be improved by increasing the number of genetic samples within its monophyletic group. Conversely, increasing sampling from outgroups extraneous to the target stratified population may decrease accuracy. Long branch attraction is an attributed theory for this occurrence, where nonrelated branches are incorrectly classified together, insinuating a shared evolutionary history.
There are debates if increasing the number of taxa sampled improves phylogenetic accuracy more than increasing the number of genes sampled per taxon. Differences in each method's sampling impact the number of nucleotide sites utilized in a sequence alignment, which may contribute to disagreements. For example, phylogenetic trees constructed utilizing a more significant number of total nucleotides are generally more accurate, as supported by phylogenetic trees' bootstrapping replicability from random sampling.
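The bootstrap procedure referred to here resamples alignment columns with replacement to build pseudo-replicate alignments; each replicate is fed to the tree-building method in use, and a clade's support is the fraction of replicate trees recovering it. A minimal sketch of the resampling step, with an invented toy alignment:

```python
import random

alignment = {                    # taxon -> aligned sequence (invented toy data)
    "taxon_A": "ACGTACGTAC",
    "taxon_B": "ACGTACGAAC",
    "taxon_C": "ACGAACGAAT",
}

def bootstrap_replicate(alignment, rng):
    """Draw columns with replacement, keeping all taxa aligned."""
    length = len(next(iter(alignment.values())))
    cols = [rng.randrange(length) for _ in range(length)]
    return {taxon: "".join(seq[i] for i in cols)
            for taxon, seq in alignment.items()}

rng = random.Random(42)
replicates = [bootstrap_replicate(alignment, rng) for _ in range(100)]
# Each replicate would now be passed to the chosen tree-building method;
# clade support = fraction of the 100 resulting trees containing the clade.
print(replicates[0])
```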
The graphic presented in Taxon Sampling, Bioinformatics, and Phylogenomics compares the correctness of phylogenetic trees generated using fewer taxa and more sites per taxon on the x-axis with more taxa and fewer sites per taxon on the y-axis. With fewer taxa, more genes are sampled within the taxonomic group; with more taxa added to the taxonomic sampling group, fewer genes are sampled. Each method has the same total number of nucleotide sites sampled. Furthermore, the dotted line represents a 1:1 accuracy between the two sampling methods. As seen in the graphic, most of the plotted points are located below the dotted line, which indicates gravitation toward increased accuracy when sampling fewer taxa with more sites per taxon. The research performed utilizes four different phylogenetic tree construction models to verify the theory: neighbor-joining (NJ), minimum evolution (ME), unweighted maximum parsimony (MP), and maximum likelihood (ML). In the majority of models, sampling fewer taxa with more sites per taxon demonstrated higher accuracy.
Generally, with a relatively equal number of total nucleotide sites aligned, sampling more genes per taxon has higher bootstrapping replicability than sampling more taxa. However, unbalanced datasets within genomic databases make it increasingly difficult to sample more genes per taxon for uncommonly sampled organisms.
History
Overview
The term "phylogeny" derives from the German , introduced by Haeckel in 1866, and the Darwinian approach to classification became known as the "phyletic" approach. It can be traced back to Aristotle, who wrote in his Posterior Analytics, "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses."
Ernst Haeckel's recapitulation theory
The modern concept of phylogenetics evolved primarily as a disproof of a previously widely accepted theory. During the late 19th century, Ernst Haeckel's recapitulation theory, or "biogenetic fundamental law", was widely popular. It was often expressed as "ontogeny recapitulates phylogeny", i.e. the development of a single organism during its lifetime, from germ to adult, successively mirrors the adult stages of successive ancestors of the species to which it belongs. But this theory has long been rejected. Instead, ontogeny evolves – the phylogenetic history of a species cannot be read directly from its ontogeny, as Haeckel thought would be possible, but characters from ontogeny can be (and have been) used as data for phylogenetic analyses; the more closely related two species are, the more apomorphies their embryos share.
Timeline of key points
14th century, lex parsimoniae (parsimony principle), William of Ockham, English philosopher, theologian, and Franciscan friar; the idea actually goes back to Aristotle as a precursor concept. Ockham introduced the concept of Occam's razor, the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. Though he did not use these exact words, the principle can be summarized as "Entities must not be multiplied beyond necessity." The principle advocates that, when presented with competing hypotheses about the same prediction, one should prefer the one that requires the fewest assumptions.
1763, Bayesian probability, Rev. Thomas Bayes, a precursor concept. Bayesian probability began a resurgence in the 1950s, allowing scientists in the computing field to pair traditional Bayesian statistics with other more modern techniques. It is now used as a blanket term for several related interpretations of probability as an amount of epistemic confidence.
18th century, Pierre Simon (Marquis de Laplace), perhaps first to use ML (maximum likelihood), precursor concept. His work gave way to the Laplace distribution, which can be directly linked to least absolute deviations.
1809, evolutionary theory, Philosophie Zoologique, Jean-Baptiste de Lamarck, precursor concept. It was foreshadowed in the 17th and 18th centuries by Voltaire, Descartes, and Leibniz, with Leibniz even proposing evolutionary changes to account for observed gaps, suggesting that many species had become extinct, that others had transformed, and that different species sharing common traits may at one time have been a single race. It was also foreshadowed by some early Greek philosophers, such as Anaximander in the 6th century BC and the atomists of the 5th century BC, who proposed rudimentary theories of evolution.
1837, Darwin's notebooks show an evolutionary tree
1840, American Geologist Edward Hitchcock published what is considered to be the first paleontological "Tree of Life". Many critiques, modifications, and explanations would follow.
1843, distinction between homology and analogy (the latter now referred to as homoplasy), Richard Owen, precursor concept. Homology is the term used to characterize the similarity of features that can be parsimoniously explained by common ancestry. Homoplasy is the term used to describe a feature that has been gained or lost independently in separate lineages over the course of evolution.
1858, Paleontologist Heinrich Georg Bronn (1800–1862) published a hypothetical tree illustrating the paleontological "arrival" of new, similar species following the extinction of an older species. Bronn did not propose a mechanism responsible for such phenomena, precursor concept.
1858, elaboration of evolutionary theory, Darwin and Wallace, also in Origin of Species by Darwin the following year, precursor concept.
1866, Ernst Haeckel, first publishes his phylogeny-based evolutionary tree, precursor concept. Haeckel introduces the now-disproved recapitulation theory. He introduced the term "Cladus" as a taxonomic category just below subphylum.
1893, Dollo's Law of Character State Irreversibility, precursor concept. Dollo's Law of Irreversibility states that "an organism never comes back exactly to its previous state due to the indestructible nature of the past, it always retains some trace of the transitional stages through which it has passed."
1912, ML (maximum likelihood) recommended, analyzed, and popularized by Ronald Fisher, precursor concept. Fisher is one of the main contributors to the early 20th-century revival of Darwinism, and has been called the "greatest of Darwin's successors" for his contributions to the revision of the theory of evolution and his use of mathematics to combine Mendelian genetics and natural selection in the 20th century "modern synthesis".
1921, Tillyard uses term "phylogenetic" and distinguishes between archaic and specialized characters in his classification system.
1940, Lucien Cuénot coined the term "clade": "terme nouveau de clade (du grec κλάδος, branche) [A new term clade (from the Greek word klados, meaning branch)]". He used it for evolutionary branching.
1947, Bernhard Rensch introduced the term Kladogenesis in his German book Neuere Probleme der Abstammungslehre Die transspezifische Evolution, translated into English in 1959 as Evolution Above the Species Level (still using the same spelling).
1949, Jackknife resampling, Maurice Quenouille (foreshadowed in '46 by Mahalanobis and extended in '58 by Tukey), precursor concept.
1950, Willi Hennig's classic formalization. Hennig is considered the founder of phylogenetic systematics, and published his first works in German in this year. He also asserted a version of the parsimony principle, stating that the presence of apomorphous characters in different species "is always reason for suspecting kinship, and that their origin by convergence should not be presumed a priori". This has been considered a foundational view of phylogenetic inference.
1952, William Wagner's ground plan divergence method.
1957, Julian Huxley adopted Rensch's terminology as "cladogenesis" with a full definition: "Cladogenesis I have taken over directly from Rensch, to denote all splitting, from subspeciation through adaptive radiation to the divergence of phyla and kingdoms." With it he introduced the word "clades", defining it as: "Cladogenesis results in the formation of delimitable monophyletic units, which may be called clades."
1960, Arthur Cain and Geoffrey Ainsworth Harrison coined "cladistic" to mean evolutionary relationship.
1963, first attempt to use ML (maximum likelihood) for phylogenetics, Edwards and Cavalli-Sforza.
1965
Camin-Sokal parsimony, first parsimony (optimization) criterion and first computer program/algorithm for cladistic analysis both by Camin and Sokal.
Character compatibility method, also called clique analysis, introduced independently by Camin and Sokal (loc. cit.) and E. O. Wilson.
1966
English translation of Hennig.
"Cladistics" and "cladogram" coined (Webster's, loc. cit.)
1969
Dynamic and successive weighting, James Farris.
Wagner parsimony, Kluge and Farris.
CI (consistency index), Kluge and Farris.
Introduction of pairwise compatibility for clique analysis, Le Quesne.
1970, Wagner parsimony generalized by Farris.
1971
First successful application of ML (maximum likelihood) to phylogenetics (for protein sequences), Neyman.
Fitch parsimony, Walter M. Fitch (a minimal sketch of Fitch's small-parsimony count appears after this timeline). These gave way to the most basic ideas of maximum parsimony. Fitch is known for his work on reconstructing phylogenetic trees from protein and DNA sequences. His definition of orthologous sequences has been referenced in many research publications.
NNI (nearest neighbour interchange), first branch-swapping search strategy, developed independently by Robinson and Moore et al.
ME (minimum evolution), Kidd and Sgaramella-Zonta (it is unclear if this is the pairwise distance method or related to ML as Edwards and Cavalli-Sforza call ML "minimum evolution").
1972, Adams consensus, Adams.
1976, prefix system for ranks, Farris.
1977, Dollo parsimony, Farris.
1979
Nelson consensus, Nelson.
MAST (maximum agreement subtree)((GAS) greatest agreement subtree), a consensus method, Gordon.
Bootstrap, Bradley Efron, precursor concept.
1980, PHYLIP, first software package for phylogenetic analysis, Joseph Felsenstein. A free computational phylogenetics package of programs for inferring evolutionary trees (phylogenies). One such example tree created by PHYLIP, called a "drawgram", generates rooted trees. This image shown in the figure below shows the evolution of phylogenetic trees over time.
1981
Majority consensus, Margush and MacMorris.
Strict consensus, Sokal and Rohlf; first computationally efficient ML (maximum likelihood) algorithm, Felsenstein. Felsenstein's maximum likelihood method, used for the inference of phylogeny, evaluates a hypothesis about evolutionary history in terms of the probability that the proposed model and the hypothesized history would give rise to the observed data set.
1982
PHYSYS, Mickevich and Farris
Branch and bound, Hendy and Penny
1985
First cladistic analysis of eukaryotes based on combined phenotypic and genotypic evidence Diana Lipscomb.
First issue of Cladistics.
First phylogenetic application of bootstrap, Felsenstein.
First phylogenetic application of jackknife, Scott Lanyon.
1986, MacClade, Maddison and Maddison.
1987, neighbor-joining method Saitou and Nei
1988, Hennig86 (version 1.5), Farris
Bremer support (decay index), Bremer.
1989
RI (retention index), RCI (rescaled consistency index), Farris.
HER (homoplasy excess ratio), Archie.
1990
combinable components (semi-strict) consensus, Bremer.
SPR (subtree pruning and regrafting), TBR (tree bisection and reconnection), Swofford and Olsen.
1991
DDI (data decisiveness index), Goloboff.
First cladistic analysis of eukaryotes based only on phenotypic evidence, Lipscomb.
1993, implied weighting Goloboff.
1994, reduced consensus: RCC (reduced cladistic consensus) for rooted trees, Wilkinson.
1995, reduced consensus RPC (reduced partition consensus) for unrooted trees, Wilkinson.
1996, first working methods for BI (Bayesian Inference) independently developed by Li, Mau, and Rannala and Yang and all using MCMC (Markov chain-Monte Carlo).
1998, TNT (Tree Analysis Using New Technology), Goloboff, Farris, and Nixon.
1999, Winclada, Nixon.
2003, symmetrical resampling, Goloboff.
2004, 2005, similarity metric (using an approximation to Kolmogorov complexity) or NCD (normalized compression distance), Li et al., Cilibrasi and Vitanyi.
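To make the parsimony entries above concrete, here is a minimal sketch of Fitch's (1971) small-parsimony count for a single character on a fixed binary tree: at each internal node, take the intersection of the children's state sets if it is non-empty, otherwise take their union and count one change. The tree and character states are invented examples:

```python
def fitch(tree, states):
    """Return (state set, minimum number of changes) for one character.
    Trees are nested tuples; leaves are taxon-name strings."""
    if isinstance(tree, str):                      # leaf: its observed state
        return {states[tree]}, 0
    (ls, lc), (rs, rc) = fitch(tree[0], states), fitch(tree[1], states)
    common = ls & rs
    if common:                                     # children agree: no change
        return common, lc + rc
    return ls | rs, lc + rc + 1                    # disagreement costs one

tree = ((("human", "chimp"), "gorilla"), "orangutan")
states = {"human": "A", "chimp": "A", "gorilla": "G", "orangutan": "G"}
print(fitch(tree, states)[1])                      # 1 change on this tree
```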
Uses of phylogenetic analysis
Pharmacology
One use of phylogenetic analysis involves the pharmacological examination of closely related groups of organisms. Advances in cladistic analysis through faster computer programs and improved molecular techniques have increased the precision of phylogenetic determination, allowing for the identification of species with pharmacological potential.
Historically, phylogenetic screens for pharmacological purposes were used in a basic manner, such as studying the Apocynaceae family of plants, which includes alkaloid-producing species like Catharanthus, known for producing vincristine, an antileukemia drug. Modern techniques now enable researchers to study close relatives of a species to uncover either a higher abundance of important bioactive compounds (e.g., species of Taxus for taxol) or natural variants of known pharmaceuticals (e.g., species of Catharanthus for different forms of vincristine or vinblastine).
Biodiversity
Phylogenetic analysis has also been applied to biodiversity studies of fungi. Phylogenetic analysis helps understand the evolutionary history of various groups of organisms, identify relationships between different species, and predict future evolutionary changes. Emerging imagery systems and new analysis techniques allow for the discovery of more genetic relationships in biodiverse fields, which can aid in conservation efforts by identifying rare species that could benefit ecosystems globally.
Infectious disease epidemiology
Whole-genome sequence data from outbreaks or epidemics of infectious diseases can provide important insights into transmission dynamics and inform public health strategies. Traditionally, studies have combined genomic and epidemiological data to reconstruct transmission events. However, recent research has explored deducing transmission patterns solely from genomic data using phylodynamics, which involves analyzing the properties of pathogen phylogenies. Phylodynamics uses theoretical models to compare predicted branch lengths with actual branch lengths in phylogenies to infer transmission patterns. Additionally, coalescent theory, which describes probability distributions on trees based on population size, has been adapted for epidemiological purposes. Another source of information within phylogenies that has been explored is "tree shape." These approaches, while computationally intensive, have the potential to provide valuable insights into pathogen transmission dynamics.
The structure of the host contact network significantly impacts the dynamics of outbreaks, and management strategies rely on understanding these transmission patterns. Pathogen genomes spreading through different contact network structures, such as chains, homogeneous networks, or networks with super-spreaders, accumulate mutations in distinct patterns, resulting in noticeable differences in the shape of phylogenetic trees, as illustrated in Fig. 1. Researchers have analyzed the structural characteristics of phylogenetic trees generated from simulated bacterial genome evolution across multiple types of contact networks. By examining simple topological properties of these trees, researchers can classify them into chain-like, homogeneous, or super-spreading dynamics, revealing transmission patterns. These properties form the basis of a computational classifier used to analyze real-world outbreaks. Computational predictions of transmission dynamics for each outbreak often align with known epidemiological data.
Different transmission networks result in quantitatively different tree shapes. To determine whether tree shapes captured information about underlying disease transmission patterns, researchers simulated the evolution of a bacterial genome over three types of outbreak contact networks—homogeneous, super-spreading, and chain-like. They summarized the resulting phylogenies with five metrics describing tree shape. Figures 2 and 3 illustrate the distributions of these metrics across the three types of outbreaks, revealing clear differences in tree topology depending on the underlying host contact network.
Super-spreader networks give rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw, and deeper trees than those from homogeneous contact networks. Trees from chain-like networks are less variable, deeper, more imbalanced, and narrower than those from other networks.
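Of the tree-shape metrics named above, the Colless imbalance is the simplest to state: over all internal nodes, sum the difference in leaf counts between the two daughter subtrees. A minimal sketch with trees as nested tuples and two invented eight-leaf topologies:

```python
def leaves(tree):
    """Count the leaves of a nested-tuple tree."""
    return 1 if isinstance(tree, str) else leaves(tree[0]) + leaves(tree[1])

def colless(tree):
    """Sum of |left leaves - right leaves| over all internal nodes."""
    if isinstance(tree, str):
        return 0
    left, right = tree
    return abs(leaves(left) - leaves(right)) + colless(left) + colless(right)

# A fully unbalanced "ladder" (caterpillar) tree of eight leaves...
ladder = ((((((("a", "b"), "c"), "d"), "e"), "f"), "g"), "h")
# ...and a fully balanced tree of eight leaves.
balanced = ((("a", "b"), ("c", "d")), (("e", "f"), ("g", "h")))

print(colless(ladder))    # 21 = 6+5+4+3+2+1+0, the maximum for 8 leaves
print(colless(balanced))  # 0, the minimum
```

High-Colless, ladder-like trees are the shapes the text associates with super-spreading and chain-like transmission.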
Scatter plots can be used to visualize the relationship between two variables in pathogen transmission analysis, such as the number of infected individuals and the time since infection. These plots can help identify trends and patterns, such as whether the spread of the pathogen is increasing or decreasing over time, and can highlight potential transmission routes or super-spreader events. Box plots displaying the range, median, quartiles, and potential outliers of datasets can also be valuable for analyzing pathogen transmission data, helping to identify important features in the data distribution. They may be used to quickly identify differences or similarities in the transmission data.
Disciplines other than biology
Phylogenetic tools and representations (trees and networks) can also be applied to philology, the study of the evolution of oral languages and written text and manuscripts, such as in the field of quantitative comparative linguistics.
Computational phylogenetics can be used to investigate a language as an evolutionary system. The evolution of human language closely corresponds with human biological evolution, which allows phylogenetic methods to be applied. The concept of a "tree" serves as an efficient way to represent relationships between languages and language splits. It also serves as a way of testing hypotheses about the connections and ages of language families. For example, relationships among languages can be shown by using cognates as characters. The phylogenetic tree of Indo-European languages shows the relationships between several of the languages in a timeline, as well as the similarity between words and word order.
There are three main criticisms of applying phylogenetics in philology: first, that languages and species are different entities, so the same methods cannot be used to study both; second, concerns about how phylogenetic methods are applied to linguistic data; and third, questions about the types of data used to construct the trees.
Bayesian phylogenetic methods, which are sensitive to how treelike the data are, allow for the reconstruction of relationships among languages, locally and globally. The two main reasons for the use of Bayesian phylogenetics are that (1) diverse scenarios can be included in calculations and (2) the output is a sample of trees rather than a single tree claimed to be true.
The same process can be applied to texts and manuscripts. In Paleography, the study of historical writings and manuscripts, texts were replicated by scribes who copied from their source and alterations - i.e., 'mutations' - occurred when the scribe did not precisely copy the source.
Phylogenetics has been applied to archaeological artefacts such as the early hominin hand-axes, late Palaeolithic figurines, Neolithic stone arrowheads, Bronze Age ceramics, and historical-period houses. Bayesian methods have also been employed by archaeologists in an attempt to quantify uncertainty in the tree topology and divergence times of stone projectile point shapes in the European Final Palaeolithic and earliest Mesolithic.
See also
Angiosperm Phylogeny Group
Bauplan
Bioinformatics
Biomathematics
Coalescent theory
EDGE of Existence programme
Evolutionary taxonomy
Language family
Maximum parsimony
Microbial phylogenetics
Molecular phylogeny
Ontogeny
PhyloCode
Phylodynamics
Phylogenesis
Phylogenetic comparative methods
Phylogenetic network
Phylogenetic nomenclature
Phylogenetic tree viewers
Phylogenetics software
Phylogenomics
Phylogeny (psychoanalysis)
Phylogeography
Systematics
References
Bibliography
External links
Biological organisation
Biological organisation is the organisation of complex biological structures and systems that define life using a reductionistic approach. The traditional hierarchy, as detailed below, extends from atoms to biospheres. The higher levels of this scheme are often referred to as an ecological organisation concept, or as the field of hierarchical ecology.
Each level in the hierarchy represents an increase in organisational complexity, with each "object" being primarily composed of the previous level's basic unit. The basic principle behind the organisation is the concept of emergence: the properties and functions found at a given hierarchical level are not present, and are irrelevant, at the lower levels.
The biological organisation of life is a fundamental premise for numerous areas of scientific research, particularly in the medical sciences. Without this necessary degree of organisation, it would be much more difficult—and likely impossible—to apply the study of the effects of various physical and chemical phenomena to diseases and physiology (body function). For example, fields such as cognitive and behavioral neuroscience could not exist if the brain was not composed of specific types of cells, and the basic concepts of pharmacology could not exist if it was not known that a change at the cellular level can affect an entire organism. These applications extend into the ecological levels as well. For example, DDT's direct insecticidal effect occurs at the subcellular level, but affects higher levels up to and including multiple ecosystems. Theoretically, a change in one atom could change the entire biosphere.
Levels
The simple standard biological organisation scheme, from the lowest level to the highest level, is as follows: atom, molecule, organelle, cell, tissue, organ, organ system, organism, population, community, ecosystem, biome, and biosphere.
More complex schemes incorporate many more levels. For example, a molecule can be viewed as a grouping of elements, and an atom can be further divided into subatomic particles (these levels are outside the scope of biological organisation). Each level can also be broken down into its own hierarchy, and specific types of these biological objects can have their own hierarchical scheme. For example, genomes can be further subdivided into a hierarchy of genes.
Each level in the hierarchy can be described by its lower levels. For example, the organism may be described at any of its component levels, including the atomic, molecular, cellular, histological (tissue), organ and organ system levels. Furthermore, at every level of the hierarchy, new functions necessary for the control of life appear. These new roles are not functions that the lower level components are capable of and are thus referred to as emergent properties.
Every organism is organised, though not necessarily to the same degree. An organism can not be organised at the histological (tissue) level if it is not composed of tissues in the first place.
Emergence of biological organisation
Biological organisation is thought to have emerged in the early RNA world when RNA chains began to express the basic conditions necessary for natural selection to operate as conceived by Darwin: heritability, variation of type, and competition for limited resources. Fitness of an RNA replicator (its per capita rate of increase) would likely have been a function of adaptive capacities that were intrinsic (in the sense that they were determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities may have been (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type); (2) the capacity to avoid decay; and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations of the RNA replicators (see "Ribozyme") that, in turn, would be encoded in their individual nucleotide sequences. Competitive success among different RNA replicators would have depended on the relative values of these adaptive capacities. Subsequently, among more recent organisms competitive success at successive levels of biological organisation, presumably continued to depend, in a broad sense, on the relative values of these adaptive capacities.
Fundamentals
Empirically, a large proportion of the (complex) biological systems we observe in nature exhibit hierarchical structure. On theoretical grounds, we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity. System hierarchies analysis performed in the 1950s laid the empirical foundations for a field that would become, from the 1980s, hierarchical ecology.
The theoretical foundations are summarized by thermodynamics.
When biological systems are modeled as physical systems, in its most general abstraction, they are thermodynamic open systems that exhibit self-organised behavior, and the set/subset relations between dissipative structures can be characterized in a hierarchy.
A simpler and more direct way to explain the fundamentals of the "hierarchical organisation of life", was introduced in Ecology by Odum and others as the "Simon's hierarchical principle"; Simon emphasized that hierarchy "emerges almost inevitably through a wide variety of evolutionary processes, for the simple reason that hierarchical structures are stable".
To motivate this deep idea, he offered his "parable" about imaginary watchmakers.
Parable of the Watchmakers
There once were two watchmakers, named Hora and Tempus, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Hora prospered while Tempus became poorer and poorer. In the end, Tempus lost his shop. What was the reason behind this?
The watches consisted of about 1000 parts each. The watches that Tempus made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be reassembled from the basic elements.
Hora had designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger subassembly. Finally, ten of the larger subassemblies constituted the whole watch. Each subassembly could be put down without falling apart.
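The parable's arithmetic can be sketched directly. Assuming each assembly step is interrupted independently with probability p, and an interruption scraps only the unit currently in hand, the expected number of steps needed to complete k consecutive uninterrupted steps follows the standard formula for runs of Bernoulli successes. The figures below are illustrative; Simon's own accounting differs in detail but reaches the same conclusion:

```python
def expected_steps(k, p):
    """Expected assembly steps to finish one k-part unit, restarting the
    unit whenever a step is interrupted (probability p per step)."""
    q = 1.0 - p                          # probability a single step succeeds
    return (1.0 - q**k) / (q**k * p)

p = 0.01                                 # one interruption per 100 steps
tempus = expected_steps(1000, p)         # the whole watch in one go
# Hora: 100 ten-part units + 10 ten-unit modules + 1 final assembly = 111 units
hora = 111 * expected_steps(10, p)

print(f"Tempus: {tempus:,.0f} steps per watch")   # ~2.3 million
print(f"Hora:   {hora:,.0f} steps per watch")     # ~1.2 thousand
print(f"ratio:  {tempus / hora:,.0f}x")           # stable subassemblies win
```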
See also
Abiogenesis
Cell theory
Cellular differentiation
Composition of the human body
Evolution of biological complexity
Evolutionary biology
Gaia hypothesis
Hierarchy theory
Holon (philosophy)
Human ecology
Level of analysis
Living systems
Self-organization
Spontaneous order
Structuralism (biology)
Timeline of the evolutionary history of life
Notes
References
External links
2011's theoretical/mathematical discussion.
Life
Articles containing video clips
Hierarchy
Emergence
Levels of organization (Biology)
Biology
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments.
Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
History
The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scientific study of plants. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz (781–869), Al-Dīnawarī (828–896), who wrote on botany, and Rhazes (865–925) who wrote on anatomy and physiology. Medicine was especially well studied by Islamic scholars working in the Greek philosophical tradition, while natural history drew heavily on Aristotelian thought.
Biology began to develop quickly with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop techniques of microscopic dissection and staining. Advances in microscopy had a profound impact on biological thinking. In the early 19th century, biologists pointed to the central importance of the cell. In 1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, although they opposed the idea that (3) all cells come from the division of other cells, continuing to support spontaneous generation. However, Robert Remak and Rudolf Virchow were able to establish the third tenet, and by the 1860s most biologists accepted all three tenets, which consolidated into cell theory.
Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735, and in the 1750s introduced scientific names for all his species. Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent.
Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, who presented a coherent theory of evolution. The British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Malthus's writings on population growth, and his own morphological expertise and extensive natural observations, forged a more successful evolutionary theory based on natural selection; similar reasoning and evidence led Alfred Russel Wallace to independently reach the same conclusions.
The basis for modern genetics began with the work of Gregor Mendel in 1865, which outlined the principles of biological inheritance. However, the significance of his work was not realized until the early 20th century when evolution became a unified theory as the modern synthesis reconciled Darwinian evolution with classical genetics. In the 1940s and early 1950s, a series of experiments by Alfred Hershey and Martha Chase pointed to DNA as the component of chromosomes that held the trait-carrying units that had become known as genes. A focus on new kinds of model organisms such as viruses and bacteria, along with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, marked the transition to the era of molecular genetics. From the 1950s onwards, biology has been vastly extended in the molecular domain. The genetic code was cracked by Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg after DNA was understood to contain codons. The Human Genome Project was launched in 1990 to map the human genome.
Chemical basis
Atoms and molecules
All organisms are made up of chemical elements; oxygen, carbon, hydrogen, and nitrogen account for most (96%) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. Different elements can combine to form compounds such as water, which is fundamental to life. Biochemistry is the study of chemical processes within and relating to living organisms. Molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions.
Water
Life arose from the Earth's first ocean, which formed some 3.8 billion years ago. Since then, water continues to be the most abundant molecule in every organism. Water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. Once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. In terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen (H) atoms to one oxygen (O) atom (H2O). Because the O–H bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. This polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. Surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. Water is also adhesive as it is able to adhere to the surface of any polar or charged non-water molecules. Water is denser as a liquid than it is as a solid (or ice). This unique property allows ice to float on top of liquid water in ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. Water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. Thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. As a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. In pure water, the number of hydrogen ions balances (or equals) the number of hydroxyl ions, resulting in a pH that is neutral.
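In quantitative terms, this balance can be made precise. At 25 °C the ion product of water is fixed, and because pure water contains equal concentrations of the two ions, a neutral pH of 7 follows directly:
\[
K_w = [\mathrm{H^+}]\,[\mathrm{OH^-}] = 1.0 \times 10^{-14}, \qquad [\mathrm{H^+}] = [\mathrm{OH^-}] = 1.0 \times 10^{-7}\,\mathrm{M} \;\Rightarrow\; \mathrm{pH} = -\log_{10}\bigl(1.0 \times 10^{-7}\bigr) = 7.
\]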
Organic compounds
Organic compounds are molecules that contain carbon bonded to another element such as hydrogen. With the exception of water, nearly all the molecules that make up each organism contain carbon. Carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. For example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide, or a triple covalent bond such as in carbon monoxide (CO). Moreover, carbon can form very long chains of interconnecting carbon–carbon bonds such as octane or ring-like structures such as glucose.
The simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other elements such as oxygen (O), hydrogen (H), phosphorus (P), and sulfur (S), which can change the chemical behavior of that compound. Groups of atoms that contain these elements (O-, H-, P-, and S-) and are bonded to a central carbon atom or skeleton are called functional groups. There are six prominent functional groups that can be found in organisms: amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group.
In 1953, the Miller–Urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early Earth, thus suggesting that complex organic molecules could have arisen spontaneously in early Earth (see abiogenesis).
Macromolecules
Macromolecules are large molecules made up of smaller subunits or monomers. Monomers include sugars, amino acids, and nucleotides. Carbohydrates include monomers and polymers of sugars.
Lipids are the only class of macromolecules that are not made up of polymers. They include steroids, phospholipids, and fats, largely nonpolar and hydrophobic (water-repelling) substances.
Proteins are the most diverse of the macromolecules. They include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. The basic unit (or monomer) of a protein is an amino acid. Twenty amino acids are used in proteins.
Nucleic acids are polymers of nucleotides. Their function is to store, transmit, and express hereditary information.
Cells
Cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. Most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. There are generally two types of cells: eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be single-celled or multicellular. In multicellular organisms, every cell in the organism's body is derived ultimately from a single cell in a fertilized egg.
Cell structure
Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. Cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. Cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton.
Within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. In addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units. These organelles include the cell nucleus, which contains most of the cell's DNA, or mitochondria, which generate adenosine triphosphate (ATP) to power cellular processes. Other organelles such as the endoplasmic reticulum and Golgi apparatus play a role in the synthesis and packaging of proteins, respectively. Biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. Plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. Eukaryotic cells also have a cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. In terms of their structural composition, the microtubules are made up of tubulin (e.g., α-tubulin and β-tubulin) whereas intermediate filaments are made up of fibrous proteins. Microfilaments are made up of actin molecules that interact with other strands of proteins.
Metabolism
All cells require energy to sustain cellular processes. Metabolism is the set of chemical reactions in an organism. The three main purposes of metabolism are: the conversion of food to energy to run cellular processes; the conversion of food/fuel to monomer building blocks; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, the breaking down of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy that will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly without being consumed by it—by reducing the amount of activation energy needed to convert reactants into products. Enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
Cellular respiration
Cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions.
Sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. Cellular respiration involving oxygen is called aerobic respiration, which has four stages: glycolysis, citric acid cycle (or Krebs cycle), electron transport chain, and oxidative phosphorylation. Glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of ATP being produced at the same time. Each pyruvate is then oxidized into acetyl-CoA by the pyruvate dehydrogenase complex, which also generates NADH and carbon dioxide. Acetyl-CoA enters the citric acid cycle, which takes place inside the mitochondrial matrix. At the end of the cycle, the total yield from 1 glucose (or 2 pyruvates) is 6 NADH, 2 FADH2, and 2 ATP molecules. The final stage is oxidative phosphorylation, which in eukaryotes occurs in the mitochondrial cristae. Oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from NADH and FADH2 that is coupled to the pumping of protons (hydrogen ions) across the inner mitochondrial membrane (chemiosmosis), which generates a proton motive force. Energy from the proton motive force drives the enzyme ATP synthase to synthesize more ATP by phosphorylating ADP. The transfer of electrons terminates with molecular oxygen being the final electron acceptor.
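The bookkeeping behind this yield can be sketched in a few lines of code. The figures below are common textbook estimates (about 2.5 ATP per NADH and 1.5 ATP per FADH2); actual yields vary with the electron shuttle used and proton leak, so treat the total as approximate:
```python
# Approximate ATP accounting for aerobic respiration of one glucose molecule.
# Conversion factors are common textbook estimates, not exact stoichiometry.
ATP_PER_NADH = 2.5
ATP_PER_FADH2 = 1.5

# (substrate-level ATP, NADH, FADH2) produced per glucose at each stage
stages = {
    "glycolysis":         (2, 2, 0),
    "pyruvate oxidation": (0, 2, 0),
    "citric acid cycle":  (2, 6, 2),
}

total = 0.0
for name, (atp, nadh, fadh2) in stages.items():
    stage_yield = atp + nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
    total += stage_yield
    print(f"{name:18s} ~{stage_yield:4.1f} ATP equivalents")
print(f"{'total':18s} ~{total:4.1f} ATP per glucose")  # ~32
```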
If oxygen is not present, pyruvate is not metabolized by cellular respiration but instead undergoes fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again and of removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis, preventing the buildup of NADH in the cytoplasm in the absence of oxygen. The waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.
Photosynthesis
Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism's metabolic activities via cellular respiration. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. In most cases, oxygen is released as a waste product. Most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth.
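The net chemistry is conventionally summarized by the overall equation for oxygenic photosynthesis, with glucose standing in for the carbohydrate product:
\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light energy}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
Note that this is the reverse of the overall reaction for aerobic respiration.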
Photosynthesis has four stages: Light absorption, electron transport, ATP synthesis, and carbon fixation. Light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. The absorbed light energy is used to transfer electrons from a donor (water) to a primary electron acceptor, a quinone designated as Q. In the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, usually NADP+, which is reduced to NADPH, a process that takes place in a protein complex called photosystem I (PSI). The transport of electrons is coupled to the movement of protons (or hydrogen) from the stroma to the thylakoid membrane, which forms a pH gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. This is analogous to the proton-motive force generated across the inner mitochondrial membrane in aerobic respiration.
During the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the ATP synthase is coupled to the synthesis of ATP by that same ATP synthase. The NADPH and ATPs generated by the light-dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate (RuBP) in a sequence of light-independent (or dark) reactions called the Calvin cycle.
Cell signaling
Cell signaling (or communication) is the ability of cells to receive, process, and transmit signals with their environment and with themselves. Signals can be non-chemical such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones. In autocrine signaling, the ligand affects the same cell that releases it. Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction.
Cell cycle
The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis. Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells. The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begins the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. Both are believed to be present in the last eukaryotic common ancestor.
Prokaryotes (i.e., archaea and bacteria) can also undergo cell division (or binary fission). Unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. Before binary fission, DNA in the bacterium is tightly coiled. After it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases in size to prepare for splitting. Growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation). The new cell wall (septum) fully develops, resulting in the complete split of the bacterium. The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids.
Sexual reproduction and meiosis
Meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. Two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic DNA damage and genetic complementation which masks the expression of deleterious recessive mutations.
The beneficial effect of genetic complementation, derived from outcrossing (cross-fertilization) is also referred to as hybrid vigor or heterosis. Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted “The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented.” Genetic variation, often produced as a byproduct of sexual reproduction, may provide long-term advantages to those sexual lineages that engage in outcrossing.
Genetics
Inheritance
Genetics is the scientific study of inheritance. Mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. It has several principles. The first is that genetic characteristics, alleles, are discrete and have alternate forms (e.g., purple vs. white or tall vs. dwarf), each inherited from one of two parents. Based on the law of dominance and uniformity, some alleles are dominant while others are recessive; an organism with at least one dominant allele will display the phenotype of that dominant allele. During gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. Heterozygous individuals produce gametes with an equal frequency of the two alleles. Finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i.e., the genes are unlinked. An exception to this rule would include traits that are sex-linked. Test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. A Punnett square can be used to predict the results of a test cross. The chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by Thomas Morgan's experiments with fruit flies, which established the sex linkage of eye color in these insects.
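How a Punnett square predicts the outcome of such crosses can be illustrated with a short sketch (the allele symbols are hypothetical, and uppercase is taken to be dominant):
```python
from itertools import product

def punnett(parent1, parent2):
    """Count offspring genotypes of a monohybrid cross.

    Each parent is a two-character string of alleles, e.g. 'Aa'.
    """
    counts = {}
    for g1, g2 in product(parent1, parent2):
        genotype = "".join(sorted(g1 + g2))  # normalize 'aA' to 'Aa'
        counts[genotype] = counts.get(genotype, 0) + 1
    return counts

# Test cross: suspected heterozygote (Aa) x homozygous recessive (aa).
print(punnett("Aa", "aa"))  # {'Aa': 2, 'aa': 2} — a 1:1 ratio
# Cross of two heterozygotes gives the classic 1:2:1 genotype ratio.
print(punnett("Aa", "Aa"))  # {'AA': 1, 'Aa': 2, 'aa': 1}
```
In the test cross, a 1:1 ratio of dominant to recessive phenotypes among the offspring indicates that the dominant-phenotype parent was heterozygous; uniformly dominant offspring would indicate it was homozygous.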
Genes and DNA
A gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid (DNA) that carries genetic information that controls form or function of an organism. DNA is composed of two polynucleotide chains that coil around each other to form a double helix. It is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell is collectively known as its genome. In eukaryotes, DNA is mainly in the cell nucleus. In prokaryotes, the DNA is held within the nucleoid. The genetic information is held within genes, and the complete assemblage in an organism is called its genotype.
DNA replication is a semiconservative process whereby each strand serves as a template for a new strand of DNA. Mutations are heritable changes in DNA. They can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical (e.g., nitrous acid, benzopyrene) or radiation (e.g., x-ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes). Mutations can lead to phenotypic effects such as loss-of-function, gain-of-function, and conditional mutations.
Some mutations are beneficial, as they are a source of genetic variation for evolution. Others are harmful if they result in a loss of function of genes needed for survival.
Gene expression
Gene expression is the molecular process by which a genotype encoded in DNA gives rise to an observable phenotype in the proteins of an organism's body. This process is summarized by the central dogma of molecular biology, which was formulated by Francis Crick in 1958. According to the Central Dogma, genetic information flows from DNA to RNA to protein. There are two gene expression processes: transcription (DNA to RNA) and translation (RNA to protein).
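A minimal sketch of the two steps, using the coding-strand convention for transcription and a deliberately tiny subset of the 64-codon genetic code (the sequences are illustrative only):
```python
def transcribe(dna):
    """Transcribe a DNA coding strand into mRNA (T -> U)."""
    return dna.replace("T", "U")

# A small subset of the 64-codon table, for illustration only.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate mRNA codons into amino acids, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return "-".join(peptide)

mrna = transcribe("ATGTTTGGCTAA")  # DNA -> RNA
print(mrna)                        # AUGUUUGGCUAA
print(translate(mrna))             # Met-Phe-Gly
```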
Gene regulation
The regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, RNA splicing, translation, and post-translational modification of a protein. Gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the DNA sequence close to or at a promoter. A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans). In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. Negative regulation occurs when another transcription factor called a repressor binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. Repressors can be inhibited by compounds called inducers (e.g., allolactose), thereby allowing transcription to occur. Specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. In contrast to both, structural genes encode proteins that are not involved in gene regulation. In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells.
Genes, development, and evolution
Development is the process by which a multicellular organism (plant or animal) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. There are four key processes that underlie development: Determination, differentiation, morphogenesis, and growth. Determination sets the developmental fate of a cell, which becomes more restrictive during development. Differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. Stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. Cellular differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Thus, different cells can have very different physical characteristics despite having the same genome. Morphogenesis, or the development of body form, is the result of spatial differences in gene expression. A small fraction of the genes in an organism's genome called the developmental-genetic toolkit control the development of that organism. These toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Among the most important toolkit genes are the Hox genes. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva.
Evolution
Evolutionary processes
Evolution is a central organizing concept in biology. It is the change in heritable characteristics of populations over successive generations. Darwin drew an analogy with artificial selection, in which animals are selectively bred for specific traits.
Given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, Darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. Darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. He further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment.
Speciation
A species is a group of organisms that mate with one another, and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently from each other. For speciation to occur, there has to be reproductive isolation. Reproductive isolation can result from incompatibilities between genes, as described by the Bateson–Dobzhansky–Muller model. Reproductive isolation also tends to increase with genetic divergence. Speciation can occur when there are physical barriers that divide an ancestral species, a process known as allopatric speciation.
Phylogeny
A phylogeny is an evolutionary history of a specific group of organisms or their genes. It can be represented using a phylogenetic tree, a diagram showing lines of descent among organisms or their genes. Each line drawn on the time axis of a tree represents a lineage of descendants of a particular species or population. When a lineage divides into two, it is represented as a fork or split on the phylogenetic tree. Phylogenetic trees are the basis for comparing and grouping different species. Different species that share a feature inherited from a common ancestor are described as having homologous features (or synapomorphy). Phylogeny provides the basis of biological classification. This classification system is rank-based, with the highest rank being the domain followed by kingdom, phylum, class, order, family, genus, and species. All organisms can be classified as belonging to one of three domains: Archaea (originally Archaebacteria), Bacteria (originally Eubacteria), or Eukarya (which includes the fungi, plant, and animal kingdoms).
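Phylogenetic trees are commonly serialized in Newick notation, in which nesting encodes descent. A minimal sketch (the topology below is a simplified, illustrative subset of vertebrates):
```python
# A rooted tree as nested tuples: internal nodes are tuples of child clades,
# leaves are taxon names. Human and chimp are shown as sister taxa.
tree = ((("human", "chimp"), "mouse"), "frog")

def to_newick(node):
    """Serialize the nested-tuple tree into Newick notation."""
    if isinstance(node, str):
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

def leaves(node):
    """List all taxa descended from a node, i.e. the members of its clade."""
    if isinstance(node, str):
        return [node]
    return [leaf for child in node for leaf in leaves(child)]

print(to_newick(tree) + ";")  # (((human,chimp),mouse),frog);
print(leaves(tree[0]))        # ['human', 'chimp', 'mouse'] — the mammal clade
```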
History of life
The history of life on Earth traces how organisms have evolved from the earliest emergence of life to present day. Earth formed about 4.5 billion years ago and all life on Earth, both living and extinct, descended from a last universal common ancestor that lived about 3.5 billion years ago. Geologists have developed a geologic time scale that divides the history of the Earth into major divisions, starting with four eons (Hadean, Archean, Proterozoic, and Phanerozoic), the first three of which are collectively known as the Precambrian, which lasted approximately 4 billion years. Each eon can be divided into eras, with the Phanerozoic eon that began 539 million years ago being subdivided into Paleozoic, Mesozoic, and Cenozoic eras. These three eras together comprise eleven periods (Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Tertiary, and Quaternary).
The similarities among all known present-day species indicate that they have diverged through the process of evolution from their common ancestor. Biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon and many of the major steps in early evolution are thought to have taken place in this environment. The earliest evidence of eukaryotes dates from 1.85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions.
Algae-like multicellular land plants date back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2.7 billion years ago. Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event.
Ediacara biota appear during the Ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the Permian–Triassic extinction event 252 million years ago. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event 66 million years ago killed off the non-avian dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Diversity
Bacteria and Archaea
Bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory.
Archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), a term that has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more energy sources than eukaryotes: these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. Salt-tolerant archaea (the Haloarchaea) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores.
The first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet.
Archaea are a major part of Earth's life. They are part of the microbiota of all organisms. In the human microbiome, they are important in the gut, mouth, and on the skin. Their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles: carbon fixation; nitrogen cycling; organic compound turnover; and maintaining microbial symbiotic and syntrophic communities, for example.
Eukaryotes
Eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria (or symbiogenesis) that gave rise to mitochondria and chloroplasts, both of which are now part of modern-day eukaryotic cells. The major lineages of eukaryotes diversified in the Precambrian about 1.5 billion years ago and can be classified into eight major clades: alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. Five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. While it is likely that protists share a common ancestor (the last eukaryotic common ancestor), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. Like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. Most protists are unicellular; these are called microbial eukaryotes.
Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae, which would exclude fungi and some algae. Plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. The first several clades that emerged following primary endosymbiosis were aquatic, and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, a term of convenience as not all algae are closely related. Algae comprise several distinct clades such as the glaucophytes, microscopic freshwater algae that may resemble the early unicellular ancestor of Plantae in form. Unlike glaucophytes, the other algal clades such as red and green algae are multicellular. Green algae comprise three major clades: chlorophytes, coleochaetophytes, and stoneworts.
Fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. Many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems.
Animals are multicellular eukaryotes. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. They have complex interactions with each other and their environments, forming intricate food webs.
Viruses
Viruses are submicroscopic infectious agents that replicate inside the cells of organisms. Viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. More than 6,000 virus species have been described in detail. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity.
The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Because viruses possess some but not all characteristics of life, they have been described as "organisms at the edge of life", and as self-replicators.
Ecology
Ecology is the study of the distribution and abundance of life and of the interactions between organisms and their environment.
Ecosystems
The community of living (biotic) organisms in conjunction with the nonliving (abiotic) components (e.g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil) of their environment is called an ecosystem. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals move matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes.
Populations
A population is a group of organisms of the same species that occupies an area and reproduces from generation to generation. Population size can be estimated by multiplying population density by the area or volume. The carrying capacity of an environment is the maximum population size of a species that can be sustained by that specific environment, given the food, habitat, water, and other resources that are available. The carrying capacity of a population can be affected by changing environmental conditions such as changes in the availability of resources and the cost of maintaining them. In human populations, new technologies such as the Green Revolution have helped increase the Earth's carrying capacity for humans over time, defying past predictions of impending decline, the most famous of which was made by Thomas Malthus in the 18th century.
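Carrying capacity is commonly formalized by the logistic growth model, in which the growth of a population of size N slows as N approaches the carrying capacity K (with r the intrinsic per-capita growth rate):
\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
\]
When N is much smaller than K, growth is nearly exponential; as N approaches K, the term in parentheses approaches zero and growth stops.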
Communities
A community is a group of populations of species occupying the same geographical area at the same time. A biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, like pollination and predation, or long-term; both often strongly influence the evolution of the species involved. A long-term interaction is called a symbiosis. Symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. Every species participates as a consumer, resource, or both in consumer–resource interactions, which form the core of food chains or food webs. There are different trophic levels within any food web, with the lowest level being the primary producers (or autotrophs) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. At the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. Heterotrophs that consume plants are primary consumers (or herbivores) whereas heterotrophs that consume herbivores are secondary consumers (or carnivores). And those that eat secondary consumers are tertiary consumers and so on. Omnivorous heterotrophs are able to consume at multiple levels. Finally, there are decomposers that feed on the waste products or dead bodies of organisms.
On average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one-tenth of the energy of the trophic level that it consumes. Waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level.
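This "ten percent rule" is easy to see numerically. In the sketch below, both the starting figure and the 10% transfer efficiency are illustrative values; measured efficiencies typically fall somewhere between about 5% and 20%:
```python
# Energy remaining at each trophic level under an assumed 10% transfer efficiency.
TRANSFER_EFFICIENCY = 0.10
levels = ["primary producers", "herbivores", "carnivores", "top carnivores"]

energy = 10_000.0  # illustrative primary production, in kcal per m^2 per year
for level in levels:
    print(f"{level:18s} {energy:10.1f} kcal/m^2/yr")
    energy *= TRANSFER_EFFICIENCY  # ~90% lost to heat, waste, and dead matter
```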
Biosphere
In the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. For example, matter from terrestrial autotrophs is both biotic and accessible to other organisms, whereas the matter in rocks and minerals is abiotic and inaccessible. A biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic (biosphere) and the abiotic (lithosphere, atmosphere, and hydrosphere) compartments of Earth. There are biogeochemical cycles for nitrogen, carbon, and water.
Conservation
Conservation biology is the study of the conservation of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is concerned with factors that influence the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, a loss that would contribute to poverty and starvation and would reset the course of evolution on this planet. Biodiversity affects the functioning of ecosystems, which provide a variety of services upon which people depend. Conservation biologists research and educate on the trends of biodiversity loss, species extinctions, and the negative effect these are having on our capabilities to sustain the well-being of human society. Organizations and citizens are responding to the current biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that engage concerns at local through global scales.
See also
Biology in fiction
Glossary of biology
Idiobiology
List of biological websites
List of biologists
List of biology journals
List of biology topics
List of life sciences
List of omics topics in biology
National Association of Biology Teachers
Outline of biology
Periodic table of life sciences in Tinbergen's four questions
Science tourism
Terminology of biology
External links
OSU's Phylocode
Biology Online – Wiki Dictionary
MIT video lecture series on biology
OneZoom Tree of Life
Journal of the History of Biology (springer.com)
Journal links
PLOS ONE
PLOS Biology: A peer-reviewed, open-access journal published by the Public Library of Science
Current Biology: General journal publishing original research from all areas of biology
Biology Letters: A high-impact Royal Society journal publishing peer-reviewed biology papers of general interest
Science: Internationally renowned AAAS science journal – see sections of the life sciences
International Journal of Biological Sciences: A biological journal publishing significant peer-reviewed scientific papers
Perspectives in Biology and Medicine: An interdisciplinary scholarly journal publishing essays of broad relevance
Bioinformatics
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is often referred to as computational biology, though the distinction between the two terms is often disputed.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reusable, specialized analysis "pipelines", particularly in the field of genomics, such as those for the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were the proof of the concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
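As an illustrative sketch of what such comparison involves (not the method used by BLAST itself), the following fragment computes the optimal global alignment score of two sequences with the Needleman–Wunsch dynamic-programming algorithm; the match, mismatch, and gap scores are arbitrary assumed values.

```python
# Minimal Needleman-Wunsch global alignment score (scoring values are
# illustrative assumptions, not a standard substitution matrix).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # F[i][j] holds the best score for aligning a[:i] with b[:j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align the two residues
                          F[i - 1][j] + gap,     # gap in sequence b
                          F[i][j - 1] + gap)     # gap in sequence a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # prints the optimal score
```

Tools such as BLAST trade the guaranteed optimality of this dynamic programme for heuristics fast enough to search databases of billions of nucleotides.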
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
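As a sketch of such retrieval, the snippet below fetches a record from GenBank through NCBI's public E-utilities REST interface (efetch); the accession used is that of the ΦX174 genome mentioned above, and a production pipeline would add an API key, rate limiting, and error handling.

```python
import urllib.parse
import urllib.request

# Fetch the phiX174 genome (GenBank accession NC_001422) in FASTA format
# from NCBI's documented efetch endpoint.
params = urllib.parse.urlencode({
    "db": "nuccore",
    "id": "NC_001422",
    "rettype": "fasta",
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    fasta = response.read().decode()
print(fasta[:200])  # the FASTA header plus the start of the sequence
```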
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
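A toy illustration of the overlap idea appears below: a greedy sketch that repeatedly merges the pair of fragments sharing the longest exact suffix–prefix overlap. The fragment strings are invented; real assemblers use overlap or de Bruijn graphs and must cope with sequencing errors, repeats, and reverse complements, none of which this sketch attempts.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Pick the ordered pair with the largest overlap and merge it.
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(frags)
                      for j, b in enumerate(frags) if i != j)
        if k == 0:
            break  # nothing overlaps any more
        merged = frags[i] + frags[j][k:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)]
        frags.append(merged)
    return "".join(frags)

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))  # reconstructs ATTAGACCTGCCGGAATAC
```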
Genome annotation
In genomics, annotation is the process of marking the locations of genes and other biological features, including their start and stop regions, in a sequenced genome. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
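As a minimal illustration of the ab initio side, the sketch below scans one strand of DNA for open reading frames: stretches beginning with a start codon and ending at a stop codon in the same reading frame. Real gene finders also model both strands, splice sites, and codon-usage statistics; the minimum-length cutoff here is an arbitrary assumption.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=30):
    """Yield (start, end) for forward-strand ORFs, checking all three frames."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOP_CODONS:
                if (i + 3 - start) // 3 >= min_codons:
                    yield (start, i + 3)
                start = None

dna = "CCATGGCTTGAAAAAA"  # toy sequence with ATG...TGA in the third frame
print(list(find_orfs(dna, min_codons=2)))  # [(2, 11)]
```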
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs, and ribosomal RNAs, and makes initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly being refined and improved.
To pursue the goals that the Human Genome Project left unachieved after its closure in 2003, the National Human Genome Research Institute developed the ENCODE project. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to generate large amounts of data automatically at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
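For instance, the hydrophobicity signal just mentioned can be sketched as a sliding-window average over the Kyte–Doolittle hydropathy scale; the scale values below are the published ones, while the window length and threshold are conventional but assumed choices, and the example sequence is invented.

```python
# Kyte-Doolittle hydropathy values (Kyte & Doolittle, 1982).
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def transmembrane_candidates(protein, window=19, threshold=1.6):
    """Yield start positions of windows hydrophobic enough to suggest a
    transmembrane helix."""
    scores = [KD[aa] for aa in protein]
    for i in range(len(scores) - window + 1):
        if sum(scores[i:i + window]) / window > threshold:
            yield i

toy = "M" + "L" * 20 + "DDKKEE"  # invented sequence with a hydrophobic stretch
print(list(transmembrane_candidates(toy)))
```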
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time, and
track and share information on an increasingly large number of species and organisms.
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable or flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
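Once genes have been clustered into families, the core/dispensable split reduces to set operations over each genome's gene content, as in this sketch (the strain names and gene sets are invented; tools such as BPGA also handle the orthology clustering that is taken as given here).

```python
# Toy gene content for three hypothetical strains.
genomes = {
    "strain_A": {"dnaA", "gyrB", "rpoB", "blaZ"},
    "strain_B": {"dnaA", "gyrB", "rpoB", "mecA"},
    "strain_C": {"dnaA", "gyrB", "rpoB"},
}

core = set.intersection(*genomes.values())  # genes present in every strain
pan = set.union(*genomes.values())          # the complete gene repertoire
dispensable = pan - core                    # genes absent from some strain

print("core:", sorted(core))                # ['dnaA', 'gyrB', 'rpoB']
print("dispensable:", sorted(dispensable))  # ['blaZ', 'mecA']
```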
Genetics of disease
As of 2013, efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that are each weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as not knowing which genes are important, and the instability of the choices that an algorithm provides.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare variant association analysis in whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
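As a deliberately simplified stand-in for such methods, the sketch below scans a noisy copy-number signal for the single split point that maximises the difference in segment means; the hidden Markov models and segmentation methods used in practice generalise this idea to many segments with proper statistics.

```python
import random

def best_changepoint(values):
    """Index splitting the series into the two segments whose means differ most."""
    best_i, best_gap = None, 0.0
    for i in range(1, len(values)):
        left = sum(values[:i]) / i
        right = sum(values[i:]) / (len(values) - i)
        if abs(left - right) > best_gap:
            best_i, best_gap = i, abs(left - right)
    return best_i

random.seed(0)
# Simulated log-ratio signal: 50 copy-neutral probes, then 50 in a gained region.
signal = ([random.gauss(0.0, 0.2) for _ in range(50)] +
          [random.gauss(0.6, 0.2) for _ in range(50)])
print(best_changepoint(signal))  # expected to land near probe 50
```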
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passenger mutations.
Further improvements in bioinformatics could allow types of cancer to be classified by analysis of cancer-driving mutations in the genome. Furthermore, tracking patients as the disease progresses may become possible in the future through sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
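A minimal sketch of such a comparison: a per-gene two-sample t-test between cancer and normal expression values using SciPy. The gene names and values are invented, and a real analysis would add normalisation, variance moderation, and multiple-testing correction.

```python
from scipy import stats

# Toy data: gene -> (expression in cancer samples, expression in normal samples).
expression = {
    "GENE1": ([8.1, 8.3, 8.0, 8.4], [5.1, 5.0, 5.3, 4.9]),  # looks up-regulated
    "GENE2": ([6.0, 6.2, 5.9, 6.1], [6.1, 6.0, 6.2, 5.9]),  # looks unchanged
}

for gene, (cancer, normal) in expression.items():
    t, p = stats.ttest_ind(cancer, normal)
    direction = "up" if t > 0 else "down"
    print(f"{gene}: t={t:.2f}, p={p:.4f} ({direction}-regulated in cancer)")
```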
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces problems similar to those of microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples where multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process where a signal, such as an extracellular hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
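A hedged sketch of the k-means step using scikit-learn, applied to invented expression profiles (rows are genes, columns are conditions); the gene names and the choice of two clusters are assumptions made for the toy data.

```python
import numpy as np
from sklearn.cluster import KMeans

genes = ["geneA", "geneB", "geneC", "geneD"]  # hypothetical gene names
profiles = np.array([
    [0.1, 0.9, 0.8, 0.2],   # geneA
    [0.2, 1.0, 0.7, 0.1],   # geneB: co-expressed with geneA
    [0.9, 0.1, 0.2, 0.8],   # geneC
    [1.0, 0.2, 0.1, 0.9],   # geneD: co-expressed with geneC
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
for gene, label in zip(genes, labels):
    print(gene, "-> cluster", label)  # co-expressed genes share a cluster
```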
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopy images make it possible to locate organelles as well as molecules that may be the source of abnormalities in disease.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C (experiment) and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups from around the world submit predicted models for proteins whose experimentally determined structures have not yet been released, allowing the predictions to be evaluated blind.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure-activity relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding studies and in silico mutagenesis studies.
AlphaFold, a deep-learning-based program developed by Google's DeepMind and released in 2021, greatly outperforms all other prediction software methods; predicted structures for hundreds of millions of proteins have been released in the AlphaFold protein structure database.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
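A deliberately naive sketch of named-entity recognition follows: a regular expression matching one common gene-symbol convention. The pattern is an assumption that misses many real names and wrongly matches some ordinary abbreviations, which is precisely why production systems use curated dictionaries and machine-learned models.

```python
import re

# Gene-symbol-like tokens: either capitals followed by a digit (e.g. BRCA1,
# TP53) or short all-capital words; crude on purpose.
GENE_LIKE = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\d\b|\b[A-Z]{3,6}\b")

sentence = ("Mutations in BRCA1 and TP53 are frequently reported, "
            "and DNA repair pathways are often discussed.")
print(GENE_LIKE.findall(sentence))
# ['BRCA1', 'TP53', 'DNA'] - note the false positive on 'DNA'
```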
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.
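Because an ontology is a directed acyclic graph, a query such as "all ancestors of a term" is a graph traversal, as in this sketch over a tiny hand-made fragment of is-a relations (the identifiers mimic the Gene Ontology's format, but the terms and edges are invented).

```python
# Child term -> set of parent terms (is-a edges); made-up fragment.
PARENTS = {
    "GO:0000004": {"GO:0000002", "GO:0000001"},
    "GO:0000003": {"GO:0000002"},
    "GO:0000002": {"GO:0000001"},
    "GO:0000001": set(),
}

def ancestors(term):
    """All terms reachable by following is-a edges upward through the DAG."""
    seen, stack = set(), [term]
    while stack:
        for parent in PARENTS.get(stack.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("GO:0000004")))  # ['GO:0000001', 'GO:0000002']
```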
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below:
Used in biological sequence analysis: Genbank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding Protein Families and Motif Finding: InterPro, Pfam
Used for Next Generation Sequencing: Sequence Read Archive
Used in Network Analysis: Metabolic Pathway Databases (KEGG, BioCyc), Interaction Analysis Databases, Functional Networks
Used in design of synthetic genetic circuits: GenoCAD
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems.
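As a sketch of how such a service is consumed, the snippet below queries one well-known public REST endpoint, NCBI's E-utilities esearch (chosen as a familiar example rather than a specific EBI service), and parses the JSON reply; the search term is arbitrary, and production clients would add an API key and respect rate limits.

```python
import json
import urllib.parse
import urllib.request

# Ask a remote REST service for protein records matching a query; no local
# software or database maintenance is needed on the client side.
params = urllib.parse.urlencode({
    "db": "protein",
    "term": "insulin AND Homo sapiens[Organism]",
    "retmode": "json",
    "retmax": "5",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)
print(result["esearchresult"]["idlist"])  # UIDs of the top matching records
```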
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a Bioinformatics application. Such systems are designed to
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was published as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-formatted record to be shared among employees, collaborators, and regulators.
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities; its computational nature also lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (or 4273pi) project also offers open source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University.
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
Life | Life is a quality that distinguishes matter that has biological processes, such as signaling and self-sustaining processes, from matter that does not. It is defined descriptively by the capacity for homeostasis, organisation, metabolism, growth, adaptation, response to stimuli, and reproduction. All life over time eventually reaches a state of death, and none is immortal. Many philosophical definitions of living systems have been proposed, such as self-organizing systems. Viruses in particular make definition difficult as they replicate only in host cells. Life exists all over the Earth in air, water, and soil, with many ecosystems forming the biosphere. Some of these are harsh environments occupied only by extremophiles.
Life has been studied since ancient times, with theories such as Empedocles's materialism asserting that it was composed of four eternal elements, and Aristotle's hylomorphism asserting that living things have souls and embody both form and matter. Life originated at least 3.5 billion years ago, resulting in a universal common ancestor. This evolved into all the species that exist now, by way of many extinct species, some of which have left traces as fossils. Attempts to classify living things, too, began with Aristotle. Modern classification began with Carl Linnaeus's system of binomial nomenclature in the 1740s.
Living things are composed of biochemical molecules, formed mainly from a few core chemical elements. All living things contain two types of large molecule, proteins and nucleic acids, the latter usually both DNA and RNA: these carry the information needed by each species, including the instructions to make each type of protein. The proteins, in turn, serve as the machinery which carries out the many chemical processes of life. The cell is the structural and functional unit of life. Smaller organisms, including prokaryotes (bacteria and archaea), consist of small single cells. Larger organisms, mainly eukaryotes, can consist of single cells or may be multicellular with more complex structure. Life is only known to exist on Earth but extraterrestrial life is thought probable. Artificial life is being simulated and explored by scientists and engineers.
Definitions
Challenge
The definition of life has long been a challenge for scientists and philosophers. This is partially because life is a process, not a substance. This is complicated by a lack of knowledge of the characteristics of living entities, if any, that may have developed outside Earth. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. Legal definitions of life have been debated, though these generally focus on the decision to declare a human dead, and the legal ramifications of this decision. At least 123 definitions of life have been compiled.
Descriptive
Since there is no consensus for a definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This implies all or most of the following traits:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
Organisation: being structurally composed of one or more cells – the basic units of life.
Metabolism: transformation of energy, used to convert chemicals into cellular components (anabolism) and to decompose organic matter (catabolism). Living things require energy for homeostasis and other activities.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size and structure.
Adaptation: the evolutionary process whereby an organism becomes better able to live in its habitat.
Response to stimuli: such as the contraction of a unicellular organism away from external chemicals, the complex reactions involving all the senses of multicellular organisms, the motion of the leaves of a plant turning toward the sun (phototropism), or chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Physics
From a physics perspective, an organism is a thermodynamic system with an organised molecular structure that can reproduce itself and evolve as survival dictates. Thermodynamically, life has been described as an open system which makes use of gradients in its surroundings to create imperfect copies of itself. Another way of putting this is to define life as "a self-sustained chemical system capable of undergoing Darwinian evolution", a definition adopted by a NASA committee attempting to define life for the purposes of exobiology, based on a suggestion by Carl Sagan. This definition, however, has been widely criticised because according to it, a single sexually reproducing individual is not alive as it is incapable of evolving on its own.
Living systems
Others take a living systems theory viewpoint that does not necessarily depend on molecular chemistry. One systemic definition of life is that living things are self-organizing and autopoietic (self-producing). Variations of this include Stuart Kauffman's definition as an autonomous agent or a multi-agent system capable of reproducing itself, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
Death
Death is the termination of all vital functions or life processes in an organism or cell.
One of the challenges in defining death is in distinguishing it from life. Death would seem to refer to either the moment life ends, or when the state that follows life begins. However, determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing conceptual lines between life and death. This is problematic because there is little consensus over how to define life. The nature of death has for millennia been a central concern of the world's religious traditions and of philosophical inquiry. Many religions maintain faith in either a kind of afterlife or reincarnation for the soul, or resurrection of the body at a later date.
Viruses
Whether or not viruses should be considered as alive is controversial. They are most often considered as just gene coding replicators rather than forms of life. They have been described as "organisms at the edge of life" because they possess genes, evolve by natural selection, and replicate by making multiple copies of themselves through self-assembly. However, viruses do not metabolise and they require a host cell to make new products. Virus self-assembly within host cells has implications for the study of the origin of life, as it may support the hypothesis that life could have started as self-assembling organic molecules.
History of study
Materialism
Some of the earliest theories of life were materialist, holding that all that exists is matter, and that life is merely a complex form or arrangement of matter. Empedocles (430 BC) argued that everything in the universe is made up of a combination of four eternal "elements" or "roots of all": earth, water, air, and fire. All change is explained by the arrangement and rearrangement of these four elements. The various forms of life are caused by an appropriate mixture of elements.
Democritus (460 BC) was an atomist; he thought that the essential characteristic of life was having a soul (psyche), and that the soul, like everything else, was composed of fiery atoms. He elaborated on fire because of the apparent connection between life and heat, and because fire moves.
Plato, in contrast, held that the world was organised by permanent forms, reflected imperfectly in matter; forms provided direction or intelligence, explaining the regularities observed in the world. The mechanistic materialism that originated in ancient Greece was revived and revised by the French philosopher René Descartes (1596–1650), who held that animals and humans were assemblages of parts that together functioned as a machine. This idea was developed further by Julien Offray de La Mettrie (1709–1750) in his book L'Homme Machine. In the 19th century the advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection. At the beginning of the 20th century Stéphane Leduc (1853–1939) promoted the idea that biological processes could be understood in terms of physics and chemistry, and that their growth resembled that of inorganic crystals immersed in solutions of sodium silicate. His ideas, set out in his book La biologie synthétique, were widely dismissed during his lifetime, but have since seen a resurgence of interest through the work of Russell, Barge and colleagues.
Hylomorphism
Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima). There are three kinds of souls: the vegetative soul of plants, which causes them to grow and decay and nourish themselves, but does not cause motion and sensation; the animal soul, which causes animals to move and feel; and the rational soul, which is the source of consciousness and reasoning, which (Aristotle believed) is found only in man. Each higher soul has all of the attributes of the lower ones. Aristotle believed that while matter can exist without form, form cannot exist without matter, and that therefore the soul cannot exist without the body.
This account is consistent with teleological explanations of life, which account for phenomena in terms of purpose or goal-directedness. Thus, the whiteness of the polar bear's coat is explained by its purpose of camouflage. The direction of causality (from the future to the past) is in contradiction with the scientific evidence for natural selection, which explains the consequence in terms of a prior cause. Biological features are explained not by looking at future optimal results, but by looking at the past evolutionary history of a species, which led to the natural selection of the features in question.
Spontaneous generation
Spontaneous generation was the belief that living organisms can form without descent from similar organisms. Typically, the idea was that certain forms such as fleas could arise from inanimate matter such as dust or the supposed seasonal generation of mice and insects from mud or garbage.
The theory of spontaneous generation was proposed by Aristotle, who compiled and expanded the work of prior natural philosophers and the various ancient explanations of the appearance of organisms; it was considered the best explanation for two millennia. It was decisively dispelled by the experiments of Louis Pasteur in 1859, who expanded upon the investigations of predecessors such as Francesco Redi. Disproof of the traditional ideas of spontaneous generation is no longer controversial among biologists.
Vitalism
Vitalism is the belief that there is a non-material life-principle. This originated with Georg Ernst Stahl (17th century), and remained popular until the middle of the 19th century. It appealed to philosophers such as Henri Bergson, Friedrich Nietzsche, and Wilhelm Dilthey, anatomists like Xavier Bichat, and chemists like Justus von Liebig. Vitalism included the idea that there was a fundamental difference between organic and inorganic material, and the belief that organic material can only be derived from living things. This was disproved in 1828, when Friedrich Wöhler prepared urea from inorganic materials. This Wöhler synthesis is considered the starting point of modern organic chemistry. It is of historical significance because for the first time an organic compound was produced in inorganic reactions.
During the 1850s Hermann von Helmholtz, anticipated by Julius Robert von Mayer, demonstrated that no energy is lost in muscle movement, suggesting that there were no "vital forces" necessary to move a muscle. These results led to the abandonment of scientific interest in vitalistic theories, especially after Eduard Buchner's demonstration that alcoholic fermentation could occur in cell-free extracts of yeast. Nonetheless, belief still exists in pseudoscientific theories such as homoeopathy, which interprets diseases and sickness as caused by disturbances in a hypothetical vital force or life force.
Development
Origin of life
The age of Earth is about 4.54 billion years. Life on Earth has existed for at least 3.5 billion years, with the oldest physical traces of life dating back 3.7 billion years. Estimates from molecular clocks, as summarised in the TimeTree public database, place the origin of life around 4.0 billion years ago. Hypotheses on the origin of life attempt to explain the formation of a universal common ancestor from simple organic molecules via pre-cellular life to protocells and metabolism. In 2016, a set of 355 genes from the last universal common ancestor was tentatively identified.
The biosphere is postulated to have developed, from the origin of life onwards, at least some 3.5 billion years ago. The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilised microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Evolution
Evolution is the change in heritable characteristics of biological populations over successive generations. It results in the appearance of new species and often the disappearance of old ones. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on genetic variation, resulting in certain characteristics increasing or decreasing in frequency within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
Fossils
Fossils are the preserved remains or traces of organisms from the remote past. The totality of fossils, both discovered and undiscovered, and their placement in layers (strata) of sedimentary rock is known as the fossil record. A preserved specimen is called a fossil if it is older than the arbitrary date of 10,000 years ago. Hence, fossils range in age from the youngest at the start of the Holocene Epoch to the oldest from the Archaean Eon, up to 3.4 billion years old.
Extinction
Extinction is the process by which a species dies out. The moment of extinction is the death of the last individual of that species. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively after a period of apparent absence. Species become extinct when they are no longer able to survive in changing habitat or against superior competition. Over 99% of all the species that have ever lived are now extinct. Mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Environmental conditions
The diversity of life on Earth is a result of the dynamic interplay between genetic opportunity, metabolic capability, environmental challenges, and symbiosis. For most of its existence, Earth's habitable environment has been dominated by microorganisms and subjected to their metabolism and evolution. As a consequence of these microbial activities, the physical-chemical environment on Earth has been changing on a geologic time scale, thereby affecting the path of evolution of subsequent life. For example, the release of molecular oxygen by cyanobacteria as a by-product of photosynthesis induced global changes in the Earth's environment. Because oxygen was toxic to most life on Earth at the time, this posed novel evolutionary challenges, and ultimately resulted in the formation of Earth's major animal and plant species. This interplay between organisms and their environment is an inherent feature of living systems.
Biosphere
The biosphere is the global sum of all ecosystems. It can also be termed the zone of life on Earth, a closed system (apart from solar and cosmic radiation and heat from the interior of the Earth), and largely self-regulating. Organisms exist in every part of the biosphere, including soil, hot springs, rocks deep underground, the deepest parts of the ocean, and high in the atmosphere. For example, spores of Aspergillus niger have been detected in the mesosphere at an altitude of 48 to 77 km. Under test conditions, life forms have been observed to survive in the vacuum of space. Life forms thrive in the deep Mariana Trench, inside rocks far below the sea floor off the coast of the northwestern United States, and beneath the seabed off Japan. In 2014, life forms were found living below the ice of Antarctica. Expeditions of the International Ocean Discovery Program found unicellular life in 120 °C sediment 1.2 km below the seafloor in the Nankai Trough subduction zone. According to one researcher, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are."
Range of tolerance
The inert components of an ecosystem are the physical and chemical factors necessary for life—energy (sunlight or chemical energy), water, heat, atmosphere, gravity, nutrients, and ultraviolet solar radiation protection. In most ecosystems, the conditions vary during the day and from one season to the next. To live in most ecosystems, then, organisms must be able to survive a range of conditions, called the "range of tolerance". Outside that are the "zones of physiological stress", where the survival and reproduction are possible but not optimal. Beyond these zones are the "zones of intolerance", where survival and reproduction of that organism is unlikely or impossible. Organisms that have a wide range of tolerance are more widely distributed than organisms with a narrow range of tolerance.
Extremophiles
To survive, some microorganisms have evolved to withstand freezing, complete desiccation, starvation, high levels of radiation exposure, and other physical or chemical challenges. These extremophile microorganisms may survive exposure to such conditions for long periods. They excel at exploiting uncommon sources of energy. Characterization of the structure and metabolic diversity of microbial communities in such extreme environments is ongoing.
Classification
Antiquity
The first classification of organisms was made by the Greek philosopher Aristotle (384–322 BC), who grouped living things as either plants or animals, based mainly on their ability to move. He distinguished animals with blood from animals without blood, which can be compared with the concepts of vertebrates and invertebrates respectively, and divided the blooded animals into five groups: viviparous quadrupeds (mammals), oviparous quadrupeds (reptiles and amphibians), birds, fishes and whales. The bloodless animals were divided into five groups: cephalopods, crustaceans, insects (which included the spiders, scorpions, and centipedes), shelled animals (such as most molluscs and echinoderms), and "zoophytes" (animals that resemble plants). This theory remained dominant for more than a thousand years.
Linnaean
In the late 1740s, Carl Linnaeus introduced his system of binomial nomenclature for the classification of species. Linnaeus attempted to improve the composition and reduce the length of the previously used many-worded names by abolishing unnecessary rhetoric, introducing new descriptive terms and precisely defining their meaning.
The fungi were originally treated as plants. For a short period Linnaeus had classified them in the taxon Vermes in Animalia, but later placed them back in Plantae. Herbert Copeland classified the Fungi in his Protoctista, including them with single-celled organisms and thus partially avoiding the problem but acknowledging their special status. The problem was eventually solved by Whittaker, when he gave them their own kingdom in his five-kingdom system. Evolutionary history shows that the fungi are more closely related to animals than to plants.
As advances in microscopy enabled detailed study of cells and microorganisms, new groups of life were revealed, and the fields of cell biology and microbiology were created. These new organisms were originally described separately in protozoa as animals and protophyta/thallophyta as plants, but were united by Ernst Haeckel in the kingdom Protista; later, the prokaryotes were split off in the kingdom Monera, which would eventually be divided into two separate groups, the Bacteria and the Archaea. This led to the six-kingdom system and eventually to the current three-domain system, which is based on evolutionary relationships. However, the classification of eukaryotes, especially of protists, is still controversial.
As microbiology developed, viruses, which are non-cellular, were discovered. Whether these are considered alive has been a matter of debate; viruses lack characteristics of life such as cell membranes, metabolism and the ability to grow or respond to their environments. Viruses have been classed into "species" based on their genetics, but many aspects of such a classification remain controversial.
The original Linnaean system has been modified many times. For example, the attempt to organise the Eukaryotes into a small number of kingdoms has been challenged: the Protozoa do not form a clade or natural grouping, and nor do the Chromista (Chromalveolata).
Metagenomic
The ability to sequence large numbers of complete genomes has allowed biologists to take a metagenomic view of the phylogeny of the whole tree of life. This has led to the realisation that the majority of living things are bacteria, and that all have a common origin.
Composition
Chemical elements
All life forms require certain core chemical elements for their biochemical functioning. These include carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur—the elemental macronutrients for all organisms. Together these make up nucleic acids, proteins and lipids, the bulk of living matter. Five of these six elements comprise the chemical components of DNA, the exception being sulfur. The latter is a component of the amino acids cysteine and methionine. The most abundant of these elements in organisms is carbon, which has the desirable attribute of forming multiple, stable covalent bonds. This allows carbon-based (organic) molecules to form the immense variety of chemical arrangements described in organic chemistry.
Alternative hypothetical types of biochemistry have been proposed that eliminate one or more of these elements, swap out an element for one not on the list, or change required chiralities or other chemical properties.
DNA
Deoxyribonucleic acid or DNA is a molecule that carries most of the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of macromolecule that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides since they are composed of simpler units called nucleotides. Each nucleotide is composed of a nitrogen-containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar called deoxyribose and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. According to base pairing rules (A with T, and C with G), hydrogen bonds bind the nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA. This has the key property that each strand contains all the information needed to recreate the other strand, enabling the information to be preserved during reproduction and cell division. Within cells, DNA is organised into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotes store most of their DNA inside the cell nucleus.
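The complementarity described above means that either strand determines the other. The short sketch below is an illustrative example, not from the source; it computes the complement of a strand, reversed because the paired strand runs in the opposite direction (hence "reverse complement"):

```python
# A minimal sketch of base-pairing complementarity: given one strand,
# the other is fully determined by the rules A-T and C-G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    """Return the complementary strand, read in its own 5' to 3' direction."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(reverse_complement("ATGCCGTA"))  # TACGGCAT
```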
Cells
Cells are the basic unit of structure in every living thing, and all cells arise from pre-existing cells by division. Cell theory was formulated by Henri Dutrochet, Theodor Schwann, Rudolf Virchow and others during the early nineteenth century, and subsequently became widely accepted. The activity of an organism depends on the total activity of its cells, with energy flow occurring within and between them. Cells contain hereditary information that is carried forward as a genetic code during cell division.
There are two primary types of cells, reflecting their evolutionary origins. Prokaryote cells lack a nucleus and other membrane-bound organelles, although they have circular DNA and ribosomes. Bacteria and Archaea are the two domains of prokaryotes. The other primary type is the eukaryote cell, which has a distinct nucleus bound by a nuclear membrane and membrane-bound organelles, including mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic reticulum, and vacuoles. In addition, their DNA is organised into chromosomes. All species of large complex organisms are eukaryotes, including animals, plants and fungi, though most eukaryotic species are protist microorganisms. The conventional model is that eukaryotes evolved from prokaryotes, with the main organelles of the eukaryotes forming through endosymbiosis between bacteria and the progenitor eukaryotic cell.
The molecular mechanisms of cell biology are based on proteins. Most of these are synthesised by the ribosomes through an enzyme-catalyzed process called protein biosynthesis. A sequence of amino acids is assembled and joined based upon gene expression of the cell's nucleic acid. In eukaryotic cells, these proteins may then be transported and processed through the Golgi apparatus in preparation for dispatch to their destination.
Cells reproduce through a process of cell division in which the parent cell divides into two or more daughter cells. For prokaryotes, cell division occurs through a process of fission in which the DNA is replicated, then the two copies are attached to parts of the cell membrane. In eukaryotes, a more complex process of mitosis is followed. However, the result is the same; the resulting cell copies are identical to each other and to the original cell (except for mutations), and both are capable of further division following an interphase period.
Multicellular structure
Multicellular organisms may have first evolved through the formation of colonies of identical cells. These cells can form group organisms through cell adhesion. The individual members of a colony are capable of surviving on their own, whereas the members of a true multicellular organism have developed specialisations, making them dependent on the remainder of the organism for survival. Such organisms are formed clonally or from a single germ cell that is capable of forming the various specialised cells that form the adult organism. This specialisation allows multicellular organisms to exploit resources more efficiently than single cells. About 800 million years ago, a minor genetic change in a single molecule, the enzyme GK-PID, may have allowed organisms to go from being single-celled to having many cells.
Cells have evolved methods to perceive and respond to their microenvironment, thereby enhancing their adaptability. Cell signalling coordinates cellular activities, and hence governs the basic functions of multicellular organisms. Signaling between cells can occur through direct cell contact using juxtacrine signalling, or indirectly through the exchange of agents as in the endocrine system. In more complex organisms, coordination of activities can occur through a dedicated nervous system.
In the universe
Though life is confirmed only on Earth, many think that extraterrestrial life is not only plausible, but probable or inevitable, possibly resulting in a biophysical cosmology instead of a mere physical cosmology. Other planets and moons in the Solar System and other planetary systems are being examined for evidence of having once supported simple life, and projects such as SETI are trying to detect radio transmissions from possible alien civilisations. Other locations within the Solar System that may host microbial life include the subsurface of Mars, the upper atmosphere of Venus, and subsurface oceans on some of the moons of the giant planets.
Investigation of the tenacity and versatility of life on Earth, as well as an understanding of the molecular systems that some organisms utilise to survive such extremes, is important for the search for extraterrestrial life. For example, lichen could survive for a month in a simulated Martian environment.
Beyond the Solar System, the region around another main-sequence star that could support Earth-like life on an Earth-like planet is known as the habitable zone. The inner and outer radii of this zone vary with the luminosity of the star, as does the time interval during which the zone survives. Stars more massive than the Sun have a larger habitable zone, but remain on the Sun-like "main sequence" of stellar evolution for a shorter time interval. Small red dwarfs have the opposite problem, with a smaller habitable zone that is subject to higher levels of magnetic activity and the effects of tidal locking from close orbits. Hence, stars in the intermediate mass range such as the Sun may have a greater likelihood for Earth-like life to develop. The location of the star within a galaxy may also affect the likelihood of life forming. Stars in regions with a greater abundance of heavier elements that can form planets, in combination with a low rate of potentially habitat-damaging supernova events, are predicted to have a higher probability of hosting planets with complex life. The variables of the Drake equation are used to discuss the conditions in planetary systems where civilisation is most likely to exist, within wide bounds of uncertainty. A "Confidence of Life Detection" scale (CoLD) for reporting evidence of life beyond Earth has been proposed.
Artificial
Artificial life is the simulation of any aspect of life, as through computers, robotics, or biochemistry. Synthetic biology is a new area of biotechnology that combines science and biological engineering. The common goal is the design and construction of new biological functions and systems not found in nature. Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health and the environment.
See also
Biology, the study of life
Biosignature
Carbon-based life
Central dogma of molecular biology
History of life
Lists of organisms by population
Viable system theory
Notes
References
External links
Vitae (BioLib)
Wikispecies – a free directory of life
Biota (Taxonomicon) (archived 15 July 2014)
Entry on the Stanford Encyclopedia of Philosophy
What Is Life? – by Jaime Green, The Atlantic (archived 5 December 2023)
Main topic articles
Energy flow (ecology)
Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. To show the quantity of organisms at each trophic level more clearly, these food chains are organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein carbon dioxide from the air and water are taken in along with energy from the sun and converted into glucose and oxygen. Cellular respiration is the reverse reaction, wherein oxygen and glucose are taken in, releasing energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
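In summary form, the two reactions (written for glucose) are:

    6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2    (photosynthesis)
    C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + released energy    (cellular respiration)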
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency typically ranges from about 5% to 20%, depending on the ecosystem. This decrease in efficiency occurs because organisms need to perform cellular respiration to survive, and energy is lost as heat when cellular respiration is performed. That is also why there are fewer tertiary consumers than there are producers.
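As a minimal sketch of this compounding loss, the following illustrative calculation (hypothetical numbers, not from the source) applies a fixed 10% transfer efficiency at each trophic step:

```python
# A minimal sketch of the "ten percent rule": at each trophic transfer,
# only a fixed fraction of the energy (here 10%) reaches the next level.
def trophic_energies(npp_kcal, efficiency=0.10, levels=4):
    """Energy available at each trophic level, starting from producers."""
    energies = [npp_kcal]
    for _ in range(levels - 1):
        energies.append(energies[-1] * efficiency)
    return energies

levels = ["producers", "primary consumers",
          "secondary consumers", "tertiary consumers"]
for name, e in zip(levels, trophic_energies(10_000)):
    print(f"{name}: {e:,.0f} kcal")
# producers: 10,000 kcal ... tertiary consumers: 10 kcal
```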
Primary production
A producer is any organism that performs photosynthesis. Producers are important because they convert energy from the sun into a storable and usable chemical form, glucose, while releasing oxygen. The producers themselves can use the energy stored in glucose to perform cellular respiration. Or, if the producer is consumed by herbivores in the next trophic level, some of the energy is passed on up the pyramid. The glucose stored within producers serves as food for consumers, and so it is only through producers that consumers are able to access the sun’s energy. Some examples of primary producers are algae, mosses, and other plants such as grasses, trees, and shrubs.
Chemosynthetic bacteria perform a process similar to photosynthesis, but instead of energy from the sun they use energy stored in chemicals like hydrogen sulfide. This process, referred to as chemosynthesis, usually occurs deep in the ocean at hydrothermal vents that produce heat and chemicals such as hydrogen, hydrogen sulfide and methane. Chemosynthetic bacteria can use the energy released by oxidizing hydrogen sulfide with oxygen to convert carbon dioxide to glucose, releasing water and sulfur in the process. Organisms that consume the chemosynthetic bacteria can take in the glucose and use oxygen to perform cellular respiration, similar to herbivores consuming producers.
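A commonly cited overall reaction for sulfide-based chemosynthesis, matching the description above (CH2O stands for carbohydrate), is:

    CO2 + 4 H2S + O2 → CH2O + 4 S + 3 H2O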
One of the factors that controls primary production is the amount of energy that enters the producer(s), which can be measured using productivity. Only about one percent of solar energy reaching a producer is captured; the rest bounces off or passes through. Gross primary productivity is the total amount of energy the producer actually captures. Generally, about 60% of the energy that enters the producer goes to the producer’s own respiration. The net primary productivity is the amount that the plant retains after the amount that it used for cellular respiration is subtracted. Another factor controlling primary production is organic/inorganic nutrient levels in the water or soil that the producer is living in.
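These general figures combine into a simple, illustrative energy budget (the percentages are the rough values quoted above, not measurements):

```python
# A minimal sketch of a producer's energy budget: ~1% of incident solar
# energy is captured (gross primary productivity, GPP), and ~60% of GPP
# is spent on the producer's own respiration, leaving net primary
# productivity (NPP) for growth and for consumers.
def primary_production(solar_kcal, capture=0.01, respiration_fraction=0.60):
    gpp = solar_kcal * capture               # gross primary productivity
    npp = gpp * (1 - respiration_fraction)   # net primary productivity
    return gpp, npp

gpp, npp = primary_production(1_000_000)  # 1,000,000 kcal of sunlight
print(gpp, npp)  # 10000.0 4000.0
```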
Secondary production
Secondary production is the conversion of energy stored in plants into biomass by consumers. Different ecosystems have different numbers of consumer levels, but all end with a top consumer. Most energy is stored in the organic matter of plants, and as consumers eat these plants they take up this energy. This energy in the herbivores and omnivores is then consumed by carnivores. There is also a large amount of energy that is in primary production and ends up being waste or litter, referred to as detritus. The detrital food chain includes a large number of microbes, macroinvertebrates, meiofauna, fungi, and bacteria. These organisms are consumed by omnivores and carnivores and account for a large amount of secondary production. Secondary consumers can vary widely in how efficiently they consume. The efficiency of energy being passed on to consumers is estimated to be around 10%. Energy flow through consumers differs in aquatic and terrestrial environments.
In aquatic environments
Heterotrophs contribute to secondary production, which depends on primary productivity and the net primary products. Secondary production is the energy that herbivores and decomposers use, and thus depends on primary productivity. Herbivores and decomposers consume carbon from two main organic sources in aquatic ecosystems: autochthonous and allochthonous. Autochthonous carbon comes from within the ecosystem and includes aquatic plants, algae and phytoplankton. Allochthonous carbon from outside the ecosystem is mostly dead organic matter from the terrestrial ecosystem entering the water. In stream ecosystems, approximately 66% of annual energy input can be washed downstream. The remaining amount is consumed and lost as heat.
In terrestrial environments
Secondary production is often described in terms of trophic levels, and while this can be useful in explaining relationships, it overemphasizes the rarer interactions. Consumers often feed at multiple trophic levels. Energy transferred above the third trophic level is relatively unimportant. The assimilation efficiency can be expressed by the amount of food the consumer has eaten, how much the consumer assimilates, and what is expelled as feces or urine. While a portion of the energy is used for respiration, another portion goes towards biomass in the consumer. There are two major food chains: the primary food chain is the energy coming from autotrophs and passed on to consumers; the second major food chain is when carnivores eat the herbivores or decomposers that consume the autotrophic energy. Consumers are broken down into primary consumers, secondary consumers and tertiary consumers. Carnivores have a much higher assimilation efficiency, about 80%, while herbivores have a much lower efficiency of approximately 20 to 50%.

Energy in a system can be affected by animal emigration and immigration. The movements of organisms are significant in terrestrial ecosystems. Energetic consumption by herbivores in terrestrial ecosystems is low, roughly 3–7% of net primary production, and the fluctuation in the amount of net primary product consumed by herbivores is generally low; the flow of energy is similar in many terrestrial environments. This contrasts strongly with aquatic environments of lakes and ponds, where grazers consume around 33%. Ectotherms and endotherms have very different assimilation efficiencies.
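A minimal sketch of such an assimilation budget is shown below. The assimilation efficiencies follow the ranges quoted above, while the split of assimilated energy between respiration and new biomass (production_eff) is an illustrative assumption:

```python
# A minimal sketch of a consumer's assimilation budget. Ingested energy
# is either assimilated or egested (feces/urine); assimilated energy is
# split between respiration and new biomass (secondary production).
def consumer_budget(ingested_kcal, assimilation_eff, production_eff=0.3):
    assimilated = ingested_kcal * assimilation_eff
    egested = ingested_kcal - assimilated
    production = assimilated * production_eff      # new consumer biomass
    respiration = assimilated - production
    return assimilated, egested, production, respiration

print(consumer_budget(1000, 0.80))  # carnivore: (800.0, 200.0, 240.0, 560.0)
print(consumer_budget(1000, 0.35))  # herbivore: (350.0, 650.0, 105.0, 245.0)
```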
Detritivores
Detritivores consume organic material that is decomposing and are in turn consumed by carnivores. Predator productivity is correlated with prey productivity, confirming that the primary productivity of an ecosystem affects all of the productivity that follows.
Detritus is a large portion of organic material in ecosystems. Organic material in temperate forests is mostly made up of dead plants, approximately 62%.
In an aquatic ecosystem, leaf matter that falls into streams gets wet and begins to leach organic material. This happens rather quickly and attracts microbes and invertebrates. The leaves can be broken down into large pieces called coarse particulate organic matter (CPOM). The CPOM is rapidly colonized by microbes. Meiofauna is extremely important to secondary production in stream ecosystems. The microbes breaking down and colonizing this leaf matter are very important to the detritivores. The detritivores make the leaf matter more edible by releasing compounds from the tissues; this ultimately helps soften them. As leaves decay, nitrogen will decrease, since the cellulose and lignin in the leaves are difficult to break down. Thus the colonizing microbes bring in nitrogen in order to aid in the decomposition. Leaf breakdown can depend on initial nitrogen content, season, and species of tree. Because different tree species shed their leaves at different times, leaf breakdown occurs at different times, producing a mosaic of microbial populations.
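Leaf breakdown is often summarized with a first-order (exponential) decay model; the model and the parameter values below are illustrative assumptions, not figures from the text:

```python
import math

# A minimal sketch of first-order litter decay: mass remaining
# M(t) = M0 * exp(-k * t), where k is a litter- and site-specific
# decay constant (nitrogen-rich, easily decomposed litter has higher k).
def mass_remaining(m0_grams, k_per_year, years):
    return m0_grams * math.exp(-k_per_year * years)

# Illustrative values: 100 g of leaf litter with k = 0.5 per year.
for t in range(4):
    print(t, round(mass_remaining(100, 0.5, t), 1))
# 0 100.0 / 1 60.7 / 2 36.8 / 3 22.3
```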
The effects of species and of diversity in an ecosystem can be analyzed through their performance and efficiency. In addition, secondary production in streams can be influenced heavily by detritus that falls into the streams; in one study of litter removal and exclusion, production of benthic fauna biomass and abundance decreased by a further 47–50%.
Energy flow across ecosystems
Research has demonstrated that primary producers fix carbon at similar rates across ecosystems. Once carbon has been introduced into a system as a viable source of energy, the mechanisms that govern the flow of energy to higher trophic levels vary across ecosystems. Among aquatic and terrestrial ecosystems, patterns have been identified that can account for this variation and have been divided into two main pathways of control: top-down and bottom-up. The acting mechanisms within each pathway ultimately regulate community and trophic level structure within an ecosystem to varying degrees. Bottom-up controls involve mechanisms that are based on resource quality and availability, which control primary productivity and the subsequent flow of energy and biomass to higher trophic levels. Top-down controls involve mechanisms that are based on consumption by consumers. These mechanisms control the rate of energy transfer from one trophic level to another as herbivores or predators feed on lower trophic levels.
Aquatic vs terrestrial ecosystems
Much variation in the flow of energy is found within each type of ecosystem, creating a challenge in identifying variation between ecosystem types. In a general sense, the flow of energy is a function of primary productivity, which itself varies with temperature, water availability, and light availability. For example, among aquatic ecosystems, higher rates of production are usually found in large rivers and shallow lakes than in deep lakes and clear headwater streams. Among terrestrial ecosystems, marshes, swamps, and tropical rainforests have the highest primary production rates, whereas tundra and alpine ecosystems have the lowest. The relationships between primary production and environmental conditions have helped account for variation within ecosystem types, allowing ecologists to demonstrate that energy flows more efficiently through aquatic ecosystems than terrestrial ecosystems due to the various bottom-up and top-down controls in play.
Bottom-up
The strength of bottom-up controls on energy flow is determined by the nutritional quality, size, and growth rates of primary producers in an ecosystem. Photosynthetic material is typically rich in nitrogen (N) and phosphorus (P) and supplements the high herbivore demand for N and P across all ecosystems. Aquatic primary production is dominated by small, single-celled phytoplankton that are mostly composed of photosynthetic material, providing an efficient source of these nutrients for herbivores. In contrast, multicellular terrestrial plants contain many large supporting cellulose structures of high carbon but low nutrient value. Because of this structural difference, aquatic ecosystems store less biomass per unit of photosynthetic tissue than the forests and grasslands of terrestrial ecosystems. This low biomass relative to photosynthetic material in aquatic ecosystems allows for a faster turnover rate compared to terrestrial ecosystems. As phytoplankton are consumed by herbivores, their enhanced growth and reproduction rates sufficiently replace lost biomass and, in conjunction with their nutrient-dense quality, support greater secondary production.
Additional factors impacting primary production include inputs of N and P, which occur at a greater magnitude in aquatic ecosystems. These nutrients are important in stimulating plant growth and, when passed to higher trophic levels, stimulate consumer biomass and growth rate. If either of these nutrients is in short supply, it can limit overall primary production. Within lakes, P tends to be the greater limiting nutrient, while both N and P limit primary production in rivers. Due to these limiting effects, nutrient inputs can potentially alleviate the limitations on net primary production of an aquatic ecosystem. Allochthonous material washed into an aquatic ecosystem introduces N and P as well as energy in the form of carbon molecules that are readily taken up by primary producers. Greater inputs and increased nutrient concentrations support greater net primary production rates, which in turn support greater secondary production.
Top-down
Top-down mechanisms exert greater control on aquatic primary producers due to the role of consumers within an aquatic food web. Among consumers, herbivores can mediate the impacts of trophic cascades by bridging the flow of energy from primary producers to predators in higher trophic levels. Across ecosystems, there is a consistent association between herbivore growth and producer nutritional quality. However, in aquatic ecosystems, primary producers are consumed by herbivores at a rate four times greater than in terrestrial ecosystems. Although this topic is highly debated, researchers have attributed the distinction in herbivore control to several theories, including producer to consumer size ratios and herbivore selectivity.
Modeling of top-down controls on primary producers suggests that the greatest control on the flow of energy occurs when the size ratio of consumer to primary producer is the highest. The size distribution of organisms found within a single trophic level in aquatic systems is much narrower than that of terrestrial systems. On land, the consumer size ranges from smaller than the plant it consumes, such as an insect, to significantly larger, such as an ungulate, while in aquatic systems, consumer body size within a trophic level varies much less and is strongly correlated with trophic position. As a result, the size difference between producers and consumers is consistently larger in aquatic environments than on land, resulting in stronger herbivore control over aquatic primary producers.
Herbivores can potentially control the fate of organic matter as it is cycled through the food web. Herbivores tend to select nutritious plants while avoiding plants with structural defense mechanisms. Like support structures, defense structures are composed of nutrient poor, high carbon cellulose. Access to nutritious food sources enhances herbivore metabolism and energy demands, leading to greater removal of primary producers. In aquatic ecosystems, phytoplankton are highly nutritious and generally lack defense mechanisms. This results in greater top-down control because consumed plant matter is quickly released back into the system as labile organic waste. In terrestrial ecosystems, primary producers are less nutritionally dense and are more likely to contain defense structures. Because herbivores prefer nutritionally dense plants and avoid plants or plant parts with defense structures, a greater amount of plant matter is left unconsumed within the ecosystem. Herbivore avoidance of low-quality plant matter may be why terrestrial systems exhibit weaker top-down control on the flow of energy.
See also
References
Further reading
Ecology terminology
Energy
Environmental science
Ecological economics
Homogeneity and heterogeneity
Homogeneity and heterogeneity are concepts relating to the uniformity of a substance, process or image. A homogeneous feature is uniform in composition or character (i.e. color, shape, size, weight, height, distribution, texture, language, income, disease, temperature, radioactivity, architectural design, etc.); one that is heterogeneous is distinctly nonuniform in at least one of these qualities.
Etymology and spelling
The words homogeneous and heterogeneous come from Medieval Latin homogeneus and heterogeneus, from Ancient Greek ὁμογενής (homogenēs) and ἑτερογενής (heterogenēs), from ὁμός (homos, "same") and ἕτερος (heteros, "other, another, different") respectively, followed by γένος (genos, "kind"); -ous is an adjectival suffix.
Alternate spellings omitting the last -e- (and the associated pronunciations) are common, but mistaken: homogenous is strictly a biological/pathological term which has largely been replaced by homologous. But use of homogenous to mean homogeneous has seen a rise since 2000, enough for it to now be considered an "established variant". Similarly, heterogenous is a spelling traditionally reserved to biology and pathology, referring to the property of an object in the body having its origin outside the body.
Scaling
The concepts apply at every level of complexity, from atoms to plants, animals, humans, and other living organisms, up to galaxies; at each level an entity may be uniform or distinctly nonuniform in its composition.
Hence, a substance may be homogeneous on a larger scale while being heterogeneous on a smaller scale. Treating such a substance as uniform is known as an effective medium approximation.
Examples
Various disciplines understand heterogeneity, or being heterogeneous, in different ways.
Biology
Environmental heterogeneity
Environmental heterogeneity (EH) is a hypernym for the different environmental factors that contribute to the diversity of species, such as climate, topography, and land cover. Biodiversity is correlated with geodiversity on a global scale. Heterogeneity in geodiversity features and environmental variables is an indicator of environmental heterogeneity; these factors drive biodiversity at local and regional scales.
The scientific literature in ecology contains a large number of different terms for environmental heterogeneity, often undefined or conflicting in their meaning; several of them are used as synonyms of environmental heterogeneity.
Chemistry
Homogeneous and heterogeneous mixtures
In chemistry, a heterogeneous mixture consists of either or both of 1) multiple states of matter or 2) hydrophilic and hydrophobic substances in one mixture; an example of the latter would be a mixture of water, octane, and silicone grease. Heterogeneous solids, liquids, and gases may be made homogeneous by melting, stirring, or by allowing time to pass for diffusion to distribute the molecules evenly. For example, adding dye to water creates a heterogeneous mixture at first, which becomes homogeneous over time. Entropy allows heterogeneous substances to become homogeneous over time.
A heterogeneous mixture is a mixture of two or more compounds. Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix, and concrete (not cement). A mixture can be judged homogeneous when it has settled into a uniform state: the liquid, gas, or solid is a single color or has the same form throughout. Various models have been proposed for the concentrations of components in the different phases; the phenomena to be considered are mass transfer rates and reaction.
Homogeneous and heterogeneous reactions
Homogeneous reactions are chemical reactions in which the reactants and products are in the same phase, while heterogeneous reactions have reactants in two or more phases. Reactions that take place on the surface of a catalyst of a different phase are also heterogeneous. A reaction between two gases or two miscible liquids is homogeneous. A reaction between a gas and a liquid, a gas and a solid or a liquid and a solid is heterogeneous.
Geology
Earth is a heterogeneous substance in many respects; for instance, rocks are inherently heterogeneous, with heterogeneity usually occurring at the micro-scale and mini-scale.
Linguistics
In formal semantics, homogeneity is the phenomenon in which plural expressions imply "all" when asserted but "none" when negated. For example, the English sentence "Robin read the books" means that Robin read all the books, while "Robin didn't read the books" means that she read none of them. Neither sentence can be asserted if Robin read exactly half of the books. This is a puzzle because the negative sentence does not appear to be the classical negation of the sentence. A variety of explanations have been proposed including that natural language operates on a trivalent logic.
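A minimal sketch of the trivalent idea (an illustrative toy model of one of the proposed analyses, with hypothetical names):

```python
# Trivalent evaluation of a plural sentence like "Robin read the books":
# true if Robin read all of the books, false if she read none,
# undefined otherwise (neither the sentence nor its negation holds).
def read_the_books(read, books):
    read_books = set(read) & set(books)
    if read_books == set(books):
        return "true"
    if not read_books:
        return "false"
    return "undefined"

books = ["A", "B", "C", "D"]
print(read_the_books(["A", "B", "C", "D"], books))  # true
print(read_the_books([], books))                    # false
print(read_the_books(["A", "B"], books))            # undefined (read half)
```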
Information technology
With information technology, heterogeneous computing occurs in a network comprising different types of computers, potentially with vastly differing memory sizes, processing power and even basic underlying architecture.
Mathematics and statistics
In algebra, a homogeneous polynomial is one whose nonzero terms all have the same total degree, that is, the same number of variable factors (for example, x^2 + 2xy + y^2).
In the study of binary relations, a homogeneous relation R is on a single set (R ⊆ X × X), while a heterogeneous relation concerns possibly distinct sets (R ⊆ X × Y, where Y may or may not equal X).
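A small illustrative example of the distinction (hypothetical sets and relations):

```python
# Homogeneous relation: pairs drawn from one set (people x people).
people = {"Ann", "Ben"}
knows = {("Ann", "Ben"), ("Ben", "Ann")}
assert knows <= {(x, y) for x in people for y in people}

# Heterogeneous relation: pairs drawn from two different sets
# (people x books).
books = {"Dune", "Emma"}
has_read = {("Ann", "Dune"), ("Ben", "Emma")}
assert has_read <= {(x, y) for x in people for y in books}
```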
In statistical meta-analysis, study heterogeneity is when multiple studies on an effect are measuring somewhat different effects due to differences in subject population, intervention, choice of analysis, experimental design, etc.; this can cause problems in attempts to summarize the meaning of the studies.
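Study heterogeneity is commonly quantified with Cochran's Q and the I² statistic; the sketch below uses illustrative effect sizes and variances, not data from any real meta-analysis:

```python
# A minimal sketch of heterogeneity statistics for a fixed-effect
# meta-analysis: Cochran's Q and I^2 (the percentage of variation
# across studies attributable to heterogeneity rather than chance).
def heterogeneity(effects, variances):
    weights = [1.0 / v for v in variances]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

effects = [0.30, 0.10, 0.55, 0.40]       # per-study effect estimates
variances = [0.01, 0.02, 0.015, 0.01]    # per-study variances
q, i2 = heterogeneity(effects, variances)
print(round(q, 2), round(i2, 1))  # roughly Q = 6.3, I^2 = 52%
```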
Medicine
In medicine and genetics, a genetic or allelic heterogeneous condition is one where the same disease or condition can be caused, or contributed to, by several factors, or in genetic terms, by varying or different genes or alleles.
In cancer research, cancer cell heterogeneity is thought to be one of the underlying reasons that make treatment of cancer difficult.
Physics
In physics, "heterogeneous" is understood to mean "having physical properties that vary within the medium".
Sociology
In sociology, "heterogeneous" may refer to a society or group that includes individuals of differing ethnicities, cultural backgrounds, sexes, or ages. "Diverse" is the more common synonym in this context.
See also
Complete spatial randomness
Heterologous
Epidemiology
Spatial analysis
Statistical hypothesis testing
Homogeneity blockmodeling
References
External links
Chemical reactions
Scientific terminology
Ecosystem
An ecosystem (or ecological system) is a system formed by organisms interacting with their environment. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors such as climate, parent material which forms the soil and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of an ecosystem are living things, such as plants, animals, and bacteria; abiotic factors are non-living components, such as water, soil and atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend, and of which people may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
Definition
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
"Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.
Origin and development of the term
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope".
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.
Processes
External and internal factors
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, an ecosystem situated in a small depression on the landscape can be quite different from one present on an adjacent steep hillside.
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function.
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.
Primary production
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Energy flow
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
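A minimal sketch of this carbon bookkeeping, with illustrative flux values (grams of carbon per square metre per year, not measurements from any particular site):

```python
# Illustrative ecosystem carbon budget following the definitions above.
gpp = 1200.0                      # gross primary production
plant_respiration = 600.0         # autotrophic respiration (~half of GPP)
npp = gpp - plant_respiration     # net primary production

heterotroph_respiration = 550.0   # animals and decomposers
ecosystem_respiration = plant_respiration + heterotroph_respiration
nep = gpp - ecosystem_respiration # net ecosystem production

print(npp, nep)  # 600.0 50.0 -> net carbon accumulation absent disturbance
```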
Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains, and these webs present a number of common, non-random properties in the topology of their networks.
Decomposition
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.
The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.
Decomposition rates
Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.
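The temperature sensitivity of microbial respiration is often expressed with a Q10 coefficient; this formulation and the values below are illustrative assumptions, not figures from the text:

```python
# A minimal sketch of a Q10 temperature response: the decomposition rate
# increases by a factor of q10 for every 10 degree C rise in temperature.
def decomposition_rate(rate_at_ref, temp_c, ref_temp_c=10.0, q10=2.0):
    return rate_at_ref * q10 ** ((temp_c - ref_temp_c) / 10.0)

base_rate = 1.0  # relative rate at the 10 degree C reference temperature
for t in (0, 10, 20, 30):
    print(t, decomposition_rate(base_rate, t))  # 0.5, 1.0, 2.0, 4.0
```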
Decomposition rates are low under very wet or very dry conditions and are highest in moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.
Dynamics and resilience
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range of timescales, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.
Nutrient cycling
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.
Macronutrients, which are required by all plants in large quantities, include the primary nutrients (which are most limiting, as they are used in the largest amounts): nitrogen, phosphorus, and potassium. Secondary major nutrients (less often limiting) include calcium, magnesium, and sulfur. Micronutrients, required by all plants in small quantities, include boron, chloride, copper, iron, manganese, molybdenum, and zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, and vanadium.
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.
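In schematic form, the transformations described above are:

    mineralization:   organic N → NH4+ (ammonium)
    nitrification:    NH4+ → NO2− (nitrite) → NO3− (nitrate)
    denitrification:  NO3− → NO2− → NO → N2O → N2 (nitrogen gas)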
Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.
Function and biodiversity
Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case, for example, for exotic species.
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.
An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.
Study approaches
Ecosystem ecology
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.
Ecosystems can be studied through a variety of approaches: theoretical studies, long-term monitoring of specific ecosystems, comparisons between ecosystems to elucidate how they work, and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.
Classifications
Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. The ecosystem types that make up a biome can be subdivided into any number of subcategories, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems, such as by their level of human impact (see anthropogenic biome), by their integration with social or technological processes, or by their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.
Human interactions with ecosystems
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.
Ecosystem goods and services
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even things like beauty, inspiration and opportunities for research. While material from the ecosystem has traditionally been recognized as the basis for things of economic value, ecosystem services tend to be taken for granted.
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.
Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of assigning economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.
Degradation and decline
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
Management
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).
Restoration and sustainable development
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.
See also
Complex system
Earth science
Ecoregion
Ecosystem-based adaptation
Types
The following articles are types of ecosystems for particular types of regions or zones:
Aquatic ecosystem
Freshwater ecosystem
Lake ecosystem (lentic ecosystem)
River ecosystem (lotic ecosystem)
Marine ecosystem
Large marine ecosystem
Tropical salt pond ecosystem
Terrestrial ecosystem
Boreal ecosystem
Groundwater-dependent ecosystems
Montane ecosystem
Urban ecosystem
Ecosystems grouped by condition
Agroecosystem
Closed ecosystem
Depauperate ecosystem
Novel ecosystem
Reference ecosystem
Instances
Ecosystem instances in specific regions of the world:
Greater Yellowstone Ecosystem
Leuser Ecosystem
Longleaf pine Ecosystem
Tarangire Ecosystem
References
External links
Biochemistry
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry became successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th-century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively, by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the carbonyl at carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring (septanoses) are rare.
Two monosaccharides can be joined by a glycosidic bond into a disaccharide through a dehydration reaction, during which a molecule of water is released. The reverse reaction, in which the glycosidic bond of a disaccharide is broken into two monosaccharides, is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose, found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined together form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant cell walls and glycogen is used as a form of energy storage in animals.
Sugars can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose (sucrose) does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules; the term is to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol).
In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules; they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains: two heavy chains are linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
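The arithmetic behind that figure can be checked directly. The sketch below uses only the numbers quoted above (a 3,000-year spontaneous reaction and a 10^11-fold acceleration), which are illustrative rather than measured values.

```python
# Sanity check of the enzymatic rate acceleration quoted above.
# Illustrative numbers only: a reaction with a spontaneous completion time
# of ~3,000 years, accelerated by a factor of 10**11.

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 seconds

uncatalyzed_s = 3000 * SECONDS_PER_YEAR    # ~9.5e10 seconds
rate_enhancement = 1e11

catalyzed_s = uncatalyzed_s / rate_enhancement
print(f"uncatalyzed: {uncatalyzed_s:.2e} s, catalyzed: {catalyzed_s:.2f} s")
# -> uncatalyzed: 9.47e+10 s, catalyzed: 0.95 s (i.e. under a second)
```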
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids; in fact, a single change can change the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins: they are first hydrolyzed into their component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
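As a toy illustration of sequence comparison, the sketch below computes the percent identity of two short, already-aligned sequences. The sequences are invented, and real homology searches use substitution matrices and gap penalties (as in alignment tools such as BLAST) rather than raw identity.

```python
# Minimal sketch of sequence comparison: percent identity between two
# pre-aligned sequences of equal length. The sequences are invented;
# real methods score substitutions and gaps rather than exact matches.

def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("AGWSENGK", "AGWTENGK"))  # -> 87.5
```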
Nucleic acids
Nucleic acids, so called because of their prevalence in cellular nuclei, are a family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. Adenine and thymine (or uracil) pair via two hydrogen bonds, while cytosine and guanine pair via three.
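These pairing rules are simple enough to state as a short function. The sketch below computes the reverse complement of a made-up DNA strand (for RNA, T would be replaced by U).

```python
# The Watson-Crick pairing rules above, applied to compute the reverse
# complement of a DNA strand (A<->T, C<->G). The input strand is made up.

DNA_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    # Complement each base, reading the strand backwards so the result
    # is given in the conventional 5'->3' direction.
    return "".join(DNA_COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # -> ACGCAT
```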
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents in the form of NAD+ (nicotinamide adenine dinucleotide, oxidized form) converted to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), NAD+ is regenerated by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per molecule of glucose degraded (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
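The bookkeeping in the preceding paragraph can be tallied explicitly. The sketch below follows the yields assumed in the text (3 ATP per respiratory NADH, 2 per reduced quinone); modern textbooks often use ~2.5 and ~1.5 instead, giving a lower total of about 30 ATP.

```python
# ATP yield per glucose, following the article's own bookkeeping:
# 3 ATP per NADH fed into the respiratory chain and 2 ATP per reduced
# quinone. (Modern estimates of ~2.5 and ~1.5 give ~30 ATP instead.)

substrate_level = 2 + 2        # glycolysis (2) + citric acid cycle (2)
nadh = 2 + 6                   # pyruvate -> acetyl-CoA (2) + citric acid cycle (6)
quinols = 2                    # via FADH2 in the citric acid cycle

oxidative = nadh * 3 + quinols * 2   # 24 + 4 = 28 ATP from the respiratory chain
total = substrate_level + oxidative  # 32 ATP in all

print(oxidative, total)  # -> 28 32
```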
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the formation of glucose from non-carbohydrate sources, such as fats and proteins. It becomes the main route to glucose when glycogen supplies in the liver are worn out. The pathway is a crucial reversal of glycolysis from pyruvate to glucose and can draw on many sources, such as amino acids, glycerol, and intermediates of the Krebs cycle. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose using this process, which is not quite the opposite of glycolysis and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combination of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis, and release of glucose into the bloodstream is called the Cori cycle.
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, and genetics studies their heredity, which happens to be carried by their genome. The following descriptions depict one possible view of the relationships between the fields:
Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. This can often be inferred from the absence of a normal component (e.g. one gene), through the study of "mutants": organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
See also
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Notes
References
Cited literature
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994.
2nd Edition, Garland, 1989.
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Systems biology
Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research.
Particularly from the year 2000 onwards, the concept has been used widely in biology in a variety of contexts. The Human Genome Project is an example of applied systems thinking in biology which has led to new, collaborative ways of working on problems in the biological field of genetics. One of the aims of systems biology is to model and discover emergent properties, properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. These typically involve metabolic networks or cell signaling networks.
Overview
Systems biology can be considered from a number of different aspects.
As a field of study, particularly, the study of the interactions between the components of biological systems, and how these interactions give rise to the function and behavior of that system (for example, the enzymes and metabolites in a metabolic pathway, or the beating of a heart).
As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble)
As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models.
As the application of dynamical systems theory to molecular biology. Indeed, the focus on the dynamics of the studied systems is the main conceptual difference between systems biology and bioinformatics.
As a socioscientific phenomenon defined by the strategy of pursuing integration of complex data about the interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel.
History
Although the concept of a systems view of cellular function has been well understood since at least the 1930s, technological limitations made it difficult to make system-wide measurements. The advent of microarray technology in the 1990s opened up an entire new vista for studying cells at the systems level. In 2000, the Institute for Systems Biology was established in Seattle in an effort to lure "computational" people, who it was felt were not attracted to the academic settings of the university. The institute did not have a clear definition of what the field actually was: roughly, bringing together people from diverse fields to use computers to holistically study biology in new ways. A Department of Systems Biology at Harvard Medical School was launched in 2003. In 2006 it was predicted that the buzz generated by the "very fashionable" new concept would cause all the major universities to need a systems biology department, and thus that there would be careers available for graduates with a modicum of ability in computer programming and biology. In 2006 the National Science Foundation put forward a challenge to build a mathematical model of the whole cell. In 2012 the first whole-cell model of Mycoplasma genitalium was achieved by the Covert Laboratory at Stanford University. The whole-cell model is able to predict the viability of M. genitalium cells in response to genetic mutations.
An earlier precursor of systems biology, as a distinct discipline, may have been the work of systems theorist Mihajlo Mesarovic in 1966, with an international symposium at the Case Institute of Technology in Cleveland, Ohio, titled Systems Theory and Biology. Mesarovic predicted that perhaps in the future there would be such a thing as "systems biology". Other early precursors that focused on the view that biology should be analyzed as a system, rather than a simple collection of parts, were Metabolic Control Analysis, developed by Henrik Kacser and Jim Burns and later thoroughly revised by Reinhart Heinrich and Tom Rapoport, and Biochemical Systems Theory, developed by Michael Savageau.
According to Robert Rosen in the 1960s, holistic biology had become passé by the early 20th century, as more empirical science dominated by molecular chemistry had become popular. Echoing him forty years later, Kling wrote in 2006 that the success of molecular biology throughout the 20th century had suppressed holistic computational methods. By 2011 the National Institutes of Health had made grant money available to support over ten systems biology centers in the United States, but by 2012 Hunter wrote that systems biology still had some way to go to achieve its full potential. Nonetheless, proponents hoped that it might yet prove more useful in the future.
An important milestone in the development of systems biology was the international Physiome Project.
Associated disciplines
According to the interpretation of systems biology as the use of large data sets with interdisciplinary tools, a typical application is metabolomics: the complete set of all the metabolic products, the metabolites, in the system at the organism, cell, or tissue level.
Data types that may be compiled into computer databases include: phenomics, organismal variation in phenotype as it changes during its life span; genomics, organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (i.e., telomere length variation); epigenomics/epigenetics, organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (i.e., DNA methylation, histone acetylation and deacetylation, etc.); transcriptomics, organismal, tissue or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression; interferomics, organismal, tissue, or cell-level transcript correcting factors (i.e., RNA interference); proteomics, organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry), with subdisciplines including phosphoproteomics, glycoproteomics and other methods to detect chemically modified proteins; glycomics, organismal, tissue, or cell-level measurements of carbohydrates; and lipidomics, organismal, tissue, or cell-level measurements of lipids.
The molecular interactions within the cell are also studied, this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules. Neuroelectrodynamics, where the computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms; and fluxomics, measurements of the rates of metabolic reactions in a biological system (cell, tissue, or organism).
There are two main approaches to a systems biology problem: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network, as in the sketch below.
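A minimal sketch of such a bottom-up circuit model: a single constitutively expressed gene described by two ordinary differential equations, one for mRNA and one for protein. All rate constants are invented illustrative values, not measurements.

```python
# Bottom-up sketch: ODE circuit model of one constitutively expressed gene.
#   dm/dt = k_m - g_m * m      (mRNA: synthesis minus degradation)
#   dp/dt = k_p * m - g_p * p  (protein: translation minus degradation)
# Rate constants are illustrative placeholders, integrated by forward Euler.

k_m, g_m = 2.0, 0.2    # transcription and mRNA degradation rates
k_p, g_p = 1.0, 0.05   # translation and protein degradation rates

m = p = 0.0
dt, t_end = 0.01, 200.0
for _ in range(int(t_end / dt)):
    m += (k_m - g_m * m) * dt
    p += (k_p * m - g_p * p) * dt

# Analytic steady states: m* = k_m/g_m = 10, p* = k_p*m*/g_p = 200
print(f"m = {m:.2f}, p = {p:.2f}")
```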
Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Further associated disciplines include mechanobiology, the study of forces and physical properties at all scales and their interplay with other regulatory mechanisms; biosemiotics, the analysis of the system of sign relations of an organism or other biosystems; and physiomics, the systematic study of the physiome in biology.
Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study (tumorigenesis and treatment of cancer). It works with specific data (patient samples, high-throughput data with particular attention to characterizing the cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based high-throughput gene-knockdown screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is the ability to better diagnose cancer, classify it, and better predict the outcome of a suggested treatment, which is the basis for personalized cancer medicine and, in the longer term, a virtual cancer patient. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours.
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis).
Bioinformatics and data analysis
Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; network-based approaches for analyzing high-dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also includes identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed.
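As one concrete example from that list, weighted correlation network analysis builds a gene network from expression correlations. The sketch below follows the WGCNA-style recipe of raising absolute Pearson correlations to a soft-thresholding power; the expression matrix is random placeholder data, and the power of 6 is a commonly used default, chosen here purely for illustration.

```python
# Weighted correlation network sketch in the WGCNA style: an adjacency
# matrix from gene-expression correlations via soft thresholding.
# The expression data are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
expression = rng.normal(size=(6, 20))   # 6 genes x 20 samples (synthetic)

corr = np.corrcoef(expression)          # gene-gene Pearson correlation matrix
beta = 6                                # soft-thresholding power (illustrative)
adjacency = np.abs(corr) ** beta        # weighted, unsigned network
np.fill_diagonal(adjacency, 0.0)        # ignore self-connections

connectivity = adjacency.sum(axis=1)    # per-gene network connectivity
print(np.round(connectivity, 3))
```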
Creating biological models
Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass-action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations; these parameter values will be the various kinetic constants required to fully describe the model. The model determines the behavior of species in biological systems and brings new insight into their specific activities. Sometimes it is not possible to gather all reaction rates of a system; unknown reaction rates can then be estimated by simulating the model with known parameters against the target behavior, which provides possible parameter values. A minimal sketch of this workflow follows.
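The sketch below illustrates the fitting step on the simplest possible mass-action model, a single irreversible reaction A -> B. The "observed" data are simulated with a hidden rate constant, which is then recovered by a least-squares scan; all numbers are invented for illustration.

```python
# Parameter fitting for a toy mass-action model, A -> B:
#   dA/dt = -k*A  (with B = A0 - A by mass conservation)
# "Observed" data are simulated with a hidden k, then k is recovered
# by a simple least-squares scan over candidate values.

import numpy as np

t = np.linspace(0.0, 10.0, 50)

def simulate(k, a0=1.0):
    return a0 * np.exp(-k * t)          # analytic solution of dA/dt = -k*A

true_k = 0.7
observed = simulate(true_k) + np.random.default_rng(1).normal(0.0, 0.01, t.size)

candidates = np.linspace(0.1, 2.0, 200)
errors = [np.sum((simulate(k) - observed) ** 2) for k in candidates]
best_k = candidates[int(np.argmin(errors))]
print(f"estimated k = {best_k:.3f} (true k = {true_k})")
```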
The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict metabolic phenotypes using genome-scale models. One such method is the flux balance analysis (FBA) approach, by which one can study biochemical networks and analyze the flow of metabolites through a particular metabolic network by optimizing an objective function of interest (e.g. maximizing biomass production to predict growth), as in the toy example below.
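To make the FBA idea concrete, the sketch below poses a toy flux balance problem as a linear program: maximize a "biomass" flux subject to steady-state mass balance (S v = 0) and flux bounds. The three-metabolite network and all bounds are invented for illustration.

```python
# Toy flux balance analysis: maximize biomass flux v4 subject to
# steady-state mass balance S @ v = 0 and bounds on each flux.
# Invented network: uptake -> A; A -> B; A -> C; B + C -> biomass.

import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A, B, C. Columns: fluxes v1..v4.
S = np.array([
    [ 1, -1, -1,  0],   # A: produced by uptake, consumed by both branches
    [ 0,  1,  0, -1],   # B: produced from A, consumed by biomass reaction
    [ 0,  0,  1, -1],   # C: produced from A, consumed by biomass reaction
])

bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10
c = [0, 0, 0, -1]       # linprog minimizes, so maximize v4 via -v4

result = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print("optimal fluxes:", np.round(result.x, 3))   # -> [10.  5.  5.  5.]
```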
See also
Biochemical systems equation
Biological computation
BioSystems (journal)
Computational biology
Exposome
Interactome
List of omics topics in biology
List of systems biology modeling software
Living systems
Metabolic Control Analysis
Metabolic network modelling
Modelling biological systems
Molecular pathological epidemiology
Network biology
Network medicine
Synthetic biology
Systems biomedicine
Systems immunology
Systems medicine
TIARA (database)
References
External links
Biological Systems in bio-physics-wiki
Synthetic biology
Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms, and it applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature.
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biochemistry, biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.
It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes.
Additionally, it is the branch of science that focuses on the new abilities of engineering into existing organisms to redesign them for useful purposes.
In order to produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome.
History
1910: First identifiable use of the term synthetic biology in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912.
1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built.
1953: Francis Crick and James Watson publish the structure of the DNA in Nature.
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology.
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene.
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT in the following year.
2003: Researchers engineer an artemisinin precursor pathway in E. coli.
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at MIT.
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.
2011: Functional synthetic chromosome arms are engineered in yeast.
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, which still suffice to encode the 20 amino acids.
2020: Scientists created the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI.
2021: Scientists reported that xenobots are able to self-replicate by gathering loose cells in the environment and then forming new xenobots.
Perspectives
It is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings.
Engineers view biology as technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems and our environment.
Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine.
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications, with an estimated combined worth of $3.9 billion in the global market. Synthetic biology currently has no generally accepted definition. Here are a few examples:
It is the science of genetic and physical engineering to produce new (and, therefore, synthetic) life forms. To develop organisms with novel or enhanced characteristics, this emerging field combines knowledge and techniques from biology, engineering, and related disciplines to design chemically synthesised DNA.
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium.
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level. Optimizing these exogenous pathways in unnatural systems requires iterative fine-tuning of the individual biomolecular components to maximize the concentration of the desired product.
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up; to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.
Categories
Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology.
For the social and ethical assessment of synthetic biology, it is necessary to review the distinctions and analogies between these categories, so as to distinguish issues that affect the whole field from those particular to a specific category.
Bioengineering
The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and currently attracts most of the field's researchers and funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems.
A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. The creation of whole new signalling pathways, containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells), is known as bioengineering as part of synthetic biology.
By utilising simplified and abstracted metabolic and regulatory modules, as well as other standardized parts that may be freely combined to create new pathways or organisms, bioengineering aims to create innovative biological systems. In addition to opening endless opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology.
Synthetic genomics
The formation of organisms with a chemically manufactured (minimal) genome is another facet of synthetic biology that is highlighted by synthetic genomics. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions.
Scientists have previously demonstrated the potential of this approach by recreating infectious viruses through synthesis of their genomes. These significant advances in science and technology triggered the first public concerns about the risks associated with this technology.
A simple genome might also work as a "chassis genome" that could be enlarged quickly by the inclusion of genes created for particular tasks. Such "chassis organisms" would be better suited than wild organisms for the insertion of new functions, since they would have fewer biological pathways that could potentially conflict with the new functionalities, in addition to having specific insertion sites. Like the bioengineering method, synthetic genomics strives to create organisms with novel "architectures", and it adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on essential genes and other required DNA sequences rather than the design of metabolic or regulatory pathways based on abstract criteria.
Protocell synthetic biology
The in vitro generation of synthetic cells is the protocell branch of synthetic biology. Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. Ultimately, such synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. This is the end goal of the protocell approach, though there are intermediary stages that fall short of meeting all the criteria for a living cell. To carry out a specific function, these lipid vesicles contain cell extracts or more specific sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesise a particular protein.
Protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but also every component of the cell in vitro, as opposed to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by the introduced synthetic genome. More than in any of the other approaches, synthetic biologists in this field view their work as basic research into the conditions necessary for life to exist and into its origin. The protocell approach, however, also lends itself well to applications; like other synthetic biology products, protocells could be employed for the production of biopolymers and medicines.
Unconventional molecular biology
The objective of the "unnatural molecular biology" strategy is to create new varieties of life that are based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code. The creation of new types of nucleotides that can be built into unique nucleic acids could be accomplished by changing certain DNA or RNA constituents, such as the bases or the backbone sugars.
The normal genetic code is being altered by inserting quadruplet codons or changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features in protein production. For both approaches, adjusting the enzymatic machinery of the cell is a scientific and technological challenge.
A new sort of life would be formed by organisms with a genome built on synthetic nucleic acids or on a totally new coding system for synthetic amino acids. This new style of life would have some benefits but also some new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped.
On the other hand, if such organisms ultimately were able to survive outside controlled spaces, they might have a particular advantage over natural organisms, because they would be resistant to predatory organisms and natural viruses, which could lead to an unmanaged spread of the synthetic organisms.
In silico technique
In silico synthetic biology is interconnected with the other strategies. The development of complex designs, whether they are metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology methods outlined above. Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms.
The practical application of simulations and models through bioengineering or other fields of synthetic biology is the long-term goal of in silico synthetic biology. Many of the computational simulations of synthetic organisms up to this point possess little to no direct analogy to living things. Due to this, in silico synthetic biology is regarded as a separate group in this article.
It is sensible to integrate the five areas under the umbrella of synthetic biology as a unified field of study. Even though they focus on different facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, they start from different methodological approaches, which accounts for the diversity of synthetic biology.
Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all have the same underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments. Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances, synthetic biology as a whole must be taken into consideration.
Four engineering approaches
Synthetic biology has traditionally been divided into four different engineering approaches: top down, parallel, orthogonal and bottom up.
One approach uses unnatural chemicals to replicate emergent behaviours from natural biology and build artificial life. The other seeks interchangeable parts from biological systems to assemble into systems that do not function naturally. In either case, a synthetic objective compels researchers to venture into new territory to engage and resolve issues that cannot readily be resolved by analysis, driving new paradigms to arise in ways that analysis alone cannot. In addition to devices that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that improve the care of patients with infectious diseases.
Top-down approach
It involves using metabolic and genetic engineering techniques to impart new functions to living cells. By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single origin of cellular life, the so-called Last Universal Common Ancestor, which supports the existence of a universal minimal genome that gave rise to all living things. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell. As a result, the Holy Grail-like pursuit of the "minimal genome" has grown elusive: cutting out too many non-essential functions impairs an organism's fitness and leads to "fragile" genomes.
Bottom-up approach
This approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.
Reproduction, replication, and assembly are three crucial self-organizational principles taken into account to accomplish this. In the definition of reproduction, cells, which are made up of a container and a metabolism, are considered "hardware", whereas replication, in which a system duplicates a perfect copy of itself as DNA does, is considered "software". Assembly occurs when vesicles or containers aggregate, such as Oparin's coacervates, tiny droplets of organic molecules like lipids, or liposomes, membrane-like structures composed of phospholipids.
The study of protocells sits alongside other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins", as well as to mimic physiological functions such as cell division and growth. The in vitro refinement of synthetic pathways has the potential to affect other synthetic biology sectors, including metabolic engineering, even when it is not strictly classified as synthetic biology research. This primarily fundamental research deserves proper recognition as synthetic biology research.
Parallel approach
Parallel engineering is also known as bioengineering. Parallel engineering research builds on the basic genetic code and uses conventional biomolecules, such as nucleic acids and the 20 standard amino acids, to construct biological systems. For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA parts, the engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. Most of these applications rely on one or more vectors (plasmids) to direct the expression of two or more genes and/or proteins. Plasmids, small circular double-stranded DNA units found primarily in prokaryotic cells but occasionally also detected in eukaryotic cells, can replicate independently of chromosomal DNA.
Orthogonal approach
It is also known as perpendicular engineering. This strategy, also referred to as "chemical synthetic biology", principally seeks to alter or enlarge the genetic codes of living systems using artificial DNA bases and/or amino acids. This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. In recent decades, researchers have created compounds that are structurally similar to the canonical DNA bases to see whether those "alien" or xeno nucleic acid (XNA) molecules can be employed as genetic information carriers. Similarly, noncanonical moieties have taken the place of the DNA sugar (deoxyribose). The genetic code can also be altered or enlarged to express information beyond the 20 conventional amino acids of proteins. One method incorporates a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places, using orthogonal enzymes and a transfer RNA adaptor from another organism. Orthogonal enzymes are produced by "directed evolution", which entails repeated cycles of gene mutagenesis (generation of genotypic diversity), screening or selection (of a specific phenotypic trait), and amplification of a better variant for the following iterative round. Numerous XAAs have been effectively incorporated into proteins in bacteria, yeast, and human cell lines, as well as in more complex organisms like worms and flies. Through changes to canonical DNA sequences, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins or to create "mirror life", biological systems that contain biomolecules made up of enantiomers with different chiral orientations.
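The directed-evolution cycle described above (mutagenesis, screening or selection, amplification of the winner) can be summarized in a few lines of code. The sketch below is purely illustrative: the string-similarity fitness function, the mutation rate, and the sequences are stand-ins for a real phenotypic screen, not part of any published protocol.
```python
import random

BASES = "ACGT"

def mutate(seq: str, rate: float = 0.01) -> str:
    """Introduce random point substitutions (genotypic diversity generation)."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def directed_evolution(parent: str, fitness, rounds: int = 10, library_size: int = 200) -> str:
    """Iterate mutagenesis -> screening/selection -> amplification of the winner."""
    for _ in range(rounds):
        library = [parent] + [mutate(parent) for _ in range(library_size)]  # mutagenesis
        parent = max(library, key=fitness)  # screening; the winner seeds the next round
    return parent

# Illustrative fitness: similarity to a hypothetical target sequence.
target = "ATGGCTAGCAAAGGAGAAGAA"
fitness = lambda s: sum(a == b for a, b in zip(s, target))
best = directed_evolution("ATG" + "A" * 18, fitness)
print(best, fitness(best))
```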
Enabling technologies
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include the standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Our ability to comprehend and design biological systems has been transformed by developments over the past few decades in both reading (sequencing) and writing (synthesizing) DNA sequences. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling an ever-increasing degree of control over biological systems and even entire organisms.
Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
DNA and gene synthesis
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the size of DNA constructions from oligos has increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilobase pair) Hepatitis C virus genome from chemically synthesized 60- to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome; the project took two years. In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and worked on getting it functioning in a living cell.
In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.
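As a rough illustration of how a long construct is broken into synthesizable oligos, the sketch below tiles a target sequence into overlapping fragments. It is deliberately simplified: real designs (e.g. for assembly PCR) alternate oligos between the two strands and balance melting temperatures, and both the helper function and the example sequence are invented for this sketch.
```python
def design_oligos(gene: str, oligo_len: int = 60, overlap: int = 20) -> list[str]:
    """Tile a target sequence into overlapping oligos for assembly.
    Simplified: all oligos are written on the forward strand here."""
    step = oligo_len - overlap
    return [gene[i:i + oligo_len] for i in range(0, len(gene) - overlap, step)]

gene = "ATG" + "GCTAGCAAAGGAGAAGAACTTTTCACT" * 8 + "TAA"  # invented example sequence
for n, oligo in enumerate(design_oligos(gene), 1):
    print(f"oligo {n:02d} ({len(oligo)} nt): {oligo}")
```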
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking.
Sequencing
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.
Modularity
This is the ability of a system or component to operate without reference to its context.
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using restriction enzymes EcoRI or XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix).
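Because the standard cuts at the prefix and suffix, a part is only composable if its internal sequence is free of those same recognition sites. A minimal compliance check over the four enzymes named above might look like the following; the helper function is hypothetical, not code from the Registry, though the recognition sequences themselves are standard.
```python
# Recognition sites of the four enzymes named above; a BioBrick part's
# internal sequence must be free of them so the part is cut only at its
# prefix and suffix during assembly.
RFC10_SITES = {"EcoRI": "GAATTC", "XbaI": "TCTAGA", "SpeI": "ACTAGT", "PstI": "CTGCAG"}

def rfc10_violations(part: str) -> dict[str, list[int]]:
    """Return 0-based positions of each illegal restriction site in a part."""
    part = part.upper()
    hits = {}
    for enzyme, site in RFC10_SITES.items():
        positions = [i for i in range(len(part) - len(site) + 1)
                     if part[i:i + len(site)] == site]
        if positions:
            hits[enzyme] = positions
    return hits

print(rfc10_violations("ATGGAATTCGGCACTAGTAA"))  # {'EcoRI': [3], 'SpeI': [12]}
```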
Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation. To increase genome modularity, the practice of genome refactoring, or improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function", has been adopted across synthetic biology disciplines. Notable examples of refactoring include the nitrogen fixation cluster and the type III secretion system, along with bacteriophages T7 and ΦX174.
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools exist to send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) and an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition, it is necessary to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules by chemically induced dimerization.
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module exhibits in isolation.
Modeling
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks.
Only extensive modelling can enable the exploration of dynamic gene expression in a form suitable for research and design, owing to the number of species involved and the intricacy of their interactions. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs. This contrasts with modelling artificial networks a posteriori.
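As a small example of such dynamic modelling, the sketch below integrates the classic two-repressor toggle switch (the kind of synthetic circuit reported in 2000) as a pair of ODEs with Hill-type repression. The rate constants are illustrative, not fitted to any experiment.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Toggle switch: two repressors u and v inhibit each other's synthesis.
alpha, n = 10.0, 2.0  # maximal synthesis rate and Hill coefficient (illustrative)

def toggle(t, y):
    u, v = y
    du = alpha / (1.0 + v**n) - u  # synthesis repressed by v, linear decay
    dv = alpha / (1.0 + u**n) - v  # synthesis repressed by u, linear decay
    return [du, dv]

# Two initial conditions settle into the two opposite stable states,
# demonstrating the bistability that makes the circuit a memory element.
for y0 in ([5.0, 0.1], [0.1, 5.0]):
    sol = solve_ivp(toggle, (0, 50), y0)
    print(y0, "->", np.round(sol.y[:, -1], 2))
```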
Microfluidics
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyze and characterize them. It is widely employed in screening assays.
Synthetic transcription factors
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in terms of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, mimicking the cooperative regulatory mechanisms of eukaryotes.
Applications
Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology include:
Utilizing microorganisms for bioremediation to remove contaminants from our water, soil, and air.
Production of complex natural products that are usually extracted from plants but cannot be obtained in sufficient amounts, e.g. drugs of natural origin, such as artemisinin and paclitaxel.
Modified rice produces beta-carotene, a nutrient typically associated with carrots that prevents vitamin A deficiency. Every year, between 250,000 and 500,000 children lose their vision due to vitamin A deficiency, which also significantly raises their chance of dying from infectious diseases.
As a sustainable and environmentally benign alternative to the fresh roses that perfumers use to create expensive smells, yeast has been created to produce rose oil.
Biosensors
A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the Lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence and can be placed downstream of a responsive promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants: when the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.
Biosensors could also be used to detect pathogenic signatures—such as of SARS-CoV-2—and can be wearable.
For the purpose of detecting and reacting to diverse and transient environmental factors, cells have developed a wide range of regulatory circuits, from transcriptional to post-translational. These circuits consist of carefully designed sensitive parts that bind analytes and set signal-detection thresholds, as well as transducer modules that filter the signals and activate a biological response. Modularity and selectivity are programmed into biosensor circuits at the transcriptional, translational, and post-translational levels, to achieve a delicate balance of the two basic sensing modules.
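The behaviour of such a sensing module is commonly summarized by a Hill-type dose-response curve, in which a threshold constant and a steepness coefficient capture the detection threshold and ultrasensitivity discussed above. A sketch with made-up parameter values:
```python
def reporter_output(analyte: float, k: float = 1.0, n: float = 4.0,
                    basal: float = 0.05, vmax: float = 1.0) -> float:
    """Hill-type dose-response of a sensing promoter driving a reporter.
    k sets the detection threshold; n sets the steepness (ultrasensitivity).
    All parameter values here are illustrative."""
    return basal + vmax * analyte**n / (k**n + analyte**n)

for conc in (0.0, 0.25, 0.5, 1.0, 2.0, 4.0):
    signal = reporter_output(conc)
    print(f"analyte {conc:4.2f} -> signal {signal:.2f} " + "#" * int(20 * signal))
```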
Food and drink
Not all synthetic nutrition products are animal food products; for instance, as of 2021, there are also synthetic coffee products that are reported to be close to commercialization. Related fields of research and production based on synthetic biology that can be used for the production of food and drink are:
Genetically engineered microbial food cultures (e.g. for solar-energy-based protein powder)
Cell-free artificial synthesis (e.g. synthetic starch)
Materials
Photosynthetic microbial cells have been used as a step to synthetic production of spider silk.
Biological computers
A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms and demonstrated that bacteria can be engineered to perform both analog and digital computation in living cells. In 2007, research demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, in 2011, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells. In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems, opening the way for machine learning in these systems.
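A useful mental model for these circuits is the NOR gate, which maps naturally onto a promoter that is active only when neither of two repressors is present, and which automated design tools such as Cello compose into larger digital circuits. The Boolean sketch below shows how NOT and AND fall out of NOR alone; it abstracts away all of the biochemistry.
```python
# A NOR gate is natural to build in cells: a promoter is ON only when
# neither of two input repressors is expressed.
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:
    # An inverter is a NOR gate with its inputs tied together.
    return NOR(a, a)

def AND(a: bool, b: bool) -> bool:
    # Two inverters feeding a NOR gate implement AND.
    return NOR(NOT(a), NOT(b))

for a in (False, True):
    for b in (False, True):
        print(f"{a!r:5} AND {b!r:5} -> {AND(a, b)}")
```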
Cell transformation
Cells use interacting genes and proteins, called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA, and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin.
Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several ways allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.
By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties were genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of extracellular material of biofilms, as a platform for programmable nanomaterial. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.
Designed proteins
Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with similar properties as hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but remained insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer-chain alcohols from sugar.
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl tyrosine; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.
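The arithmetic behind codon reassignment is easy to verify: enumerating all triplets over the four bases and removing the three stop codons leaves the 61 sense codons mentioned above, far more than the 20 amino acids they encode, which is what makes some codons available for reassignment.
```python
from itertools import product

# The 64 codons of the standard code: 3 are stops, leaving 61 sense
# codons for only 20 amino acids, so the code is degenerate and some
# codons can in principle be freed up for noncanonical amino acids.
codons = ["".join(c) for c in product("TCAG", repeat=3)]
stops = {"TAA", "TAG", "TGA"}
sense = [c for c in codons if c not in stops]
print(len(codons), "codons,", len(sense), "sense codons")  # 64 codons, 61 sense codons
```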
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only nine amino acids were used.
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost-effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".
Designed nucleic acid systems
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.
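To make the idea of DNA data storage concrete, the toy codec below packs two bits into each base and recovers the original bytes on decoding. It is only an illustration: Church's actual encoding used one bit per base, and practical schemes add addressing, redundancy, and error correction.
```python
TO_BASE = "ACGT"  # 00->A, 01->C, 10->G, 11->T (arbitrary illustrative mapping)
FROM_BASE = {b: i for i, b in enumerate(TO_BASE)}

def encode(data: bytes) -> str:
    """Map each byte to four bases, two bits per base, most significant first."""
    return "".join(TO_BASE[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    """Invert encode(): rebuild each byte from a group of four bases."""
    chunks = [dna[i:i + 4] for i in range(0, len(dna), 4)]
    return bytes(sum(FROM_BASE[b] << s for b, s in zip(c, (6, 4, 2, 0)))
                 for c in chunks)

dna = encode(b"Shall I compare thee to a summer's day?")
assert decode(dna) == b"Shall I compare thee to a summer's day?"
print(dna[:40], "...")
```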
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including the individual artificial nucleotides in the culture media, they were able to passage the bacteria 24 times; the bacteria did not generate mRNA or proteins able to use the artificial nucleotides.
Space exploration
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of occupied outposts with less dependence on Earth. Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.
Synthetic life
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, to study some of the properties of life, or, more ambitiously, to recreate life from non-living (abiotic) components. Synthetic-life research attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.
A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Nobody has been able to create such a cell.
A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter's team, which introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome.
The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome carrying an expanded genetic code. The added nucleosides were d5SICS and dNaM.
In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, which still suffice to encode the 20 amino acids.
In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.
In 2023, researchers were able to create the first synthetically made human embryos derived from stem cells.
Drug delivery platforms
In therapeutics, synthetic biology has achieved significant advances in altering and simplifying the scope of therapeutics in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and delivery of small molecules, are made possible by rational, model-guided design and construction of biological components.
Synthetic biology devices have been designed to act as therapies. Entire engineered viruses and organisms can be controlled to target particular pathogens and diseased pathways. In two independent studies, researchers used genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving them genetic features that specifically target and hinder bacterial defences against antibiotic activity.
In cancer therapy, since conventional medicines frequently target tumours and normal tissues indiscriminately, artificially created viruses and organisms that can identify pathological signals and couple their therapeutic action to them may be helpful. For example, adenoviruses have been engineered to couple their replication to p53 pathway activity in human cells.
Engineered bacteria-based platform
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that specifically recognize a tumor have been expressed on the surfaces of bacteria; examples include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. Alternatively, bacteria can be made to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into them. The bacteria then release the therapeutic molecules to the tumor only through lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems, as well as other strategies, can be used. The system can be made inducible by external signals; inducers include chemicals and electromagnetic or light waves.
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and is suited to cancer therapy in terms of tissue colonization, interaction with the immune system and ease of application.
Engineered yeast-based platform
Synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When orally delivered, these live yeast act like micro-factories and will make therapeutic molecules directly in the gastrointestinal tract. Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease. A live S. boulardii yeast delivering a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile has been developed. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models. The first in human clinical trial of engineered live yeast for the treatment of Clostridium difficile infection is anticipated in 2024 and will be sponsored by the developer Fzata, Inc.
Cell-based platform
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.
T cell receptors were engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Multiple second-generation CAR-based therapies have been approved by the FDA.
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects, and other mechanisms can control the system more finely, stopping and reactivating it. Since the number of T cells matters for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of therapeutics.
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
Biofuels, pharmaceuticals and biomaterials
The most popular biofuel is ethanol produced from corn or sugar cane, but this way of producing biofuels is troublesome and constrained by the high agricultural cost and the inadequate fuel characteristics of ethanol. A promising alternative source of renewable energy is microbes whose metabolic pathways have been altered to convert biomass into biofuels more efficiently. These techniques can be expected to succeed only if their production costs can be made competitive with those of present fuel production. Relatedly, there are several medicines whose expensive manufacturing procedures prevent them from having a larger therapeutic range. The creation of new materials and the microbiological manufacture of biomaterials would both benefit substantially from novel synthetic biology tools.
CRISPR/Cas9
The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method of genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) recruits the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed recombination and non-homologous end joining, can then be exploited to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead Cas9 or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been utilised to control gene expression in bacteria and, when fused to an activation or repression domain, in yeast.
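Guide design starts from a constraint of this system: Cas9 requires the target protospacer to be immediately followed by an NGG protospacer-adjacent motif (PAM). The sketch below lists forward-strand candidate sites in a sequence; real guide-design tools also search the reverse strand and score off-targets, and both the helper function and the example sequence here are invented.
```python
import re

def find_cas9_targets(genome: str, guide_len: int = 20):
    """Yield (position, protospacer, PAM) for forward-strand sites where a
    guide-length protospacer is immediately followed by an NGG PAM."""
    pattern = rf"(?=([ACGT]{{{guide_len}}})([ACGT]GG))"  # lookahead allows overlapping sites
    for m in re.finditer(pattern, genome.upper()):
        yield m.start(), m.group(1), m.group(2)

seq = "TTGACAGCTAGCTCAGTCCTAGGTATAATGCTAGCTGGATCCAAACTCGAGTAAGG"
for pos, protospacer, pam in find_cas9_targets(seq):
    print(f"site at {pos:2d}: {protospacer} | PAM {pam}")
```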
Regulatory elements
To build and develop biological systems, regulatory components such as promoters, ribosome-binding sites (RBSs), and terminators are crucial. Despite years of study, many varieties of promoters and terminators are available for Escherichia coli, but for the well-researched model organism Saccharomyces cerevisiae, as well as for other organisms of interest, these tools remain scarce. Numerous techniques have been developed to overcome this constraint and to discover and characterize promoters and terminators, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design.
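Genome mining for such elements often begins with a simple consensus scan. The sketch below reports windows within one substitution of the sigma-70 "-10" consensus (TATAAT, the Pribnow box); real promoter discovery additionally weighs the -35 box, spacer length, and position weight matrices. The helper function and example sequence are invented for this illustration.
```python
def scan_for_motif(genome: str, motif: str = "TATAAT", max_mismatch: int = 1):
    """Naive genome-mining sketch: yield windows within max_mismatch
    substitutions of a promoter consensus sequence."""
    genome = genome.upper()
    for i in range(len(genome) - len(motif) + 1):
        window = genome[i:i + len(motif)]
        mismatches = sum(a != b for a, b in zip(window, motif))
        if mismatches <= max_mismatch:
            yield i, window, mismatches

seq = "GGCTTGACAATTAATCATCGGCTCGTATAATGTGTGGA"  # contains a consensus -10 box
for pos, window, mm in scan_for_motif(seq):
    print(f"position {pos:2d}: {window} ({mm} mismatch(es))")
```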
Organoids
Synthetic biology has been used for organoids, which are lab-grown organs with application to medical research and transplantation.
Bioprinted organs, other transplants and induced regeneration
There is ongoing research and development into synthetic biology-based methods for inducing regeneration in humans, as well as the creation of transplantable artificial organs, including bioprinted organs.
Nanoparticles, artificial cells and micro-droplets
Synthetic biology can be used to create nanoparticles for drug delivery as well as for other purposes. Complementary research and development has created synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks. Synthetic micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors, could for example be used to produce hydrogen for a hydrogen economy.
Electrogenetics
Mammalian designer cells are engineered by humans to behave in a specific way, such as an immune cell that expresses a synthetic receptor designed to combat a specific disease. Electrogenetics is an application of synthetic biology that involves using electrical fields to stimulate a response in engineered cells. The designer cells can be controlled with relative ease through common electronic devices, such as smartphones. Additionally, by using microscopic electrodes, electrogenetics allows for devices that are much smaller and more compact than those relying on other stimuli. One example of how electrogenetics can benefit public health is the stimulation of designer cells that produce and deliver therapeutics. This was implemented in ElectroHEK cells, which contain electrosensitive voltage-gated calcium channels, meaning that the ion channel can be controlled by electrical conduction between electrodes and the ElectroHEK cells. The expression level of the artificial gene in these ElectroHEK cells was shown to be controllable by changing the voltage or the length of the electrical pulses. Further studies have expanded on this system, one of which is a beta cell line designed to control the release of insulin based on electrical signals.
Ethics
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed.
Common ethical questions include:
Is it morally right to tamper with nature?
Is one playing God when creating new life?
What happens if a synthetic organism accidentally escapes?
What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?
Who will have control of and access to the products of synthetic biology?
Who will gain from these innovations? Investors? Medical patients? Industrial farmers?
Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?
What if a new creation is deserving of moral or legal status?
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.
Ethical issues surfaced for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."
The "creation" of life
One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at a small scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Many advocates express the great potential value, to agriculture, medicine, and academic knowledge among other fields, of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by outcompeting natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to sense pain, sentience, and self-perception. There is an ongoing debate as to whether such life forms should be granted moral or legal rights, though no consensus exists as to how these rights would be administered or enforced.
Ethical support for synthetic biology
Ethics and moral rationales that support certain applications of synthetic biology include their potential mitigation of substantial global problems of detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health, as well as potential reduction of human labor needs and, via therapies of diseases, reduction of human suffering and prolonged life.
Biosafety and biocontainment
What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety refers not only to biological containment but also to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and incapable of flourishing in the outside world due to their "unnatural" characteristics, as there is yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild.
In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.
Biosecurity and bioterrorism
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.
Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow. Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks."
European Union
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.
The International Association Synthetic Biology has proposed self-regulation, specifying measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".
United States
In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology.
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research, along with new funding for monitoring, the study of emerging ethical issues, and public education.
Synthetic biology, as a major tool for biological advances, carries the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors. These security risks may be reduced by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation have been proposed by the President's Bioethics Commission, which, "in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public".
Opposition
On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome. Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".
Health and safety
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment. Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse.
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.
See also
References
Bibliography
External links
Engineered Pathogens and Unnatural Biological Weapons: The Future Threat of Synthetic Biology. Threats and considerations.
Synthetic biology books. Popular science books and textbooks.
Introductory Summary of Synthetic Biology. Concise overview of synthetic biology concepts, developments and applications.
Collaborative overview article on Synthetic Biology.
Controversial DNA startup wants to let customers create creatures (2015-01-03), San Francisco Chronicle.
It's Alive, But Is It Life: Synthetic Biology and the Future of Creation (28 September 2016), World Science Festival.
Biotechnology
Molecular genetics
Systems biology
Bioinformatics
Biocybernetics
Appropriate technology
Gene expression programming
Bioterrorism | 0.796792 | 0.995606 | 0.793291 |
Metabolism | Metabolism (from the Greek μεταβολή metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above-described set of reactions within the cells is called intermediary (or intermediate) metabolism.
Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.
The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly—and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
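A classic worked example of such coupling, using approximate standard transformed free energies from biochemistry textbooks (assumed values, not figures from this article): glutamine synthesis is endergonic on its own but becomes favorable when coupled to ATP hydrolysis, because the free-energy changes of coupled reactions add:

$$\begin{aligned}
\mathrm{Glutamate} + \mathrm{NH_4^+} &\rightarrow \mathrm{Glutamine} + \mathrm{H_2O} & \Delta G^{\circ\prime} &\approx +14\ \mathrm{kJ/mol} \\
\mathrm{ATP} + \mathrm{H_2O} &\rightarrow \mathrm{ADP} + \mathrm{P_i} & \Delta G^{\circ\prime} &\approx -31\ \mathrm{kJ/mol} \\
\text{coupled sum} & & \Delta G^{\circ\prime} &\approx -17\ \mathrm{kJ/mol}
\end{aligned}$$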
The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions.
A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids that are best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer.
Key biochemicals
Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acids and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down through digestion and using them to obtain energy. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life.
Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress.
Lipids
Lipids are the most diverse group of biochemicals. Their main structural uses are as part of internal and external biological membranes, such as the cell membrane. Their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules that will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterols are another major class of lipids.
Carbohydrates
Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
Nucleotides
The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information, and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic-group-transfer reactions.
Coenzymes
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes. Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled.
One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism. Catabolism breaks down molecules, and anabolism puts them together. Catabolic reactions generate ATP, and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions.
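The "body weight in ATP per day" figure can be checked with a rough order-of-magnitude estimate; the inputs below (about 8,000 kJ of daily energy turnover and roughly 50 kJ per mole of ATP hydrolyzed under cellular conditions) are assumed textbook values, not figures from this article:

$$\frac{8000\ \mathrm{kJ\,day^{-1}}}{50\ \mathrm{kJ\,mol^{-1}}} \approx 160\ \mathrm{mol\ ATP\,day^{-1}}, \qquad 160\ \mathrm{mol} \times 507\ \mathrm{g\,mol^{-1}} \approx 81\ \mathrm{kg},$$

which is indeed on the order of an adult's body weight.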
A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or are coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ into NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions.
Minerals and cofactors
Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water.
The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.
Transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant of those. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use.
Catabolism
Catabolism is the set of metabolic processes that break down large molecules. These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions which build molecules. The exact nature of these catabolic reactions differs from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups). Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight.
The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) into NADH.
Digestion
Macromolecules cannot be directly processed by cells; they must first be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides.
Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.
Energy from organic compounds
Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide. When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for synthesis of many biomolecules such as nucleotides and aromatic amino acids.
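The net stoichiometry of glycolysis, summarizing the ATP and NADH bookkeeping described above, is the standard textbook equation:

$$\mathrm{Glucose} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i} \longrightarrow 2\,\mathrm{Pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}$$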
Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. Mycobacterium tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of its infection lifecycle.
Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Energy transformations
Oxidative phosphorylation
In oxidative phosphorylation, the electrons removed from organic molecules in areas such as the citric acid cycle are transferred to oxygen and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane.
Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate—turning it into ATP.
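The stored energy of this gradient is conventionally expressed as the proton motive force Δp, which combines the membrane potential Δψ and the pH difference across the membrane (a standard formulation; the numeric factor of about 61.5 mV assumes 37 °C):

$$\Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \approx \Delta\psi - 61.5\,\Delta\mathrm{pH}\ \ (\mathrm{mV})$$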
Energy from inorganic compounds
Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power and they gain energy from the oxidation of these compounds. These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification and are critical for soil fertility.
Energy from light
The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds.
In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centers are classified into two types depending on the nature of photosynthetic pigment present, with most photosynthetic bacteria only having one type, while plants and cyanobacteria have two.
In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+.
Anabolism
Anabolism is the set of constructive metabolic processes where the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages. First, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides, secondly, their activation into reactive forms using energy from ATP, and thirdly, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids.
Anabolism differs among organisms according to the source of the molecules constructed in their cells. Autotrophs such as plants can construct complex organic molecules such as polysaccharides and proteins from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by the ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions.
Carbon fixation
Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle. Three types of photosynthesis occur in plants, C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions.
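The familiar net equation for oxygenic photosynthesis, which summarizes the light reactions and the Calvin–Benson cycle together, is:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$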
In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction.
Carbohydrates and glycans
In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important as it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle.
Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis, because these organisms cannot convert acetyl-CoA into pyruvate; plants have the necessary enzymatic machinery, but animals do not. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, which can be used for the production of glucose. In addition to fat, glucose is stored in most tissues as glycogen, an energy reserve built up through glycogenesis; glycogen in the liver, in particular, is broken down to maintain blood glucose levels.
Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferase from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases.
Fatty acids, isoprenoids and sterol
Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway.
Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol.
Proteins
Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can only synthesize eleven nonessential amino acids, so nine essential amino acids must be obtained from food. Some simple parasites, such as the bacterium Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessential amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid.
Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA.
Nucleotide synthesis and salvage
Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate.
Xenobiotics and redox metabolism
All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function. These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages to firstly oxidize the xenobiotic (phase I) and then conjugate water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and in multicellular organisms may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but due to the incredible diversity of types of microbes these organisms are able to deal with a far wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds.
A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and enzymes such as catalases and peroxidases.
Thermodynamics of living organisms
Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although living organisms' amazing complexity appears to contradict this law, life is possible as all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium, but instead are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder.
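In symbols (standard thermodynamics, stated here for concreteness): a process is spontaneous only when its Gibbs free-energy change is negative, and coupling lets an unfavorable anabolic step proceed when paired with a catabolic step whose free-energy release is large enough that the sum is below zero:

$$\Delta G = \Delta H - T\,\Delta S, \qquad \Delta G_{1} + \Delta G_{2} < 0 \ \text{ even when } \ \Delta G_{1} > 0.$$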
Regulation and control
As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. Firstly, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Secondly, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated) but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway.
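Metabolic control analysis makes this distinction quantitative (a standard formulation, not specific to this article): the flux control coefficient of an enzyme is the fractional change in pathway flux J per fractional change in that enzyme's activity v, and across a pathway these coefficients sum to one (the summation theorem):

$$C^{J}_{E_i} = \frac{\partial J / J}{\partial v_i / v_i} = \frac{\partial \ln J}{\partial \ln v_i}, \qquad \sum_i C^{J}_{E_i} = 1.$$

A highly regulated enzyme with a control coefficient near zero can change its activity dramatically without altering flux, matching the example above.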
There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate. This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second messenger systems that often involve the phosphorylation of proteins.
A very well understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase, but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes.
Evolution
The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world.
Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways, as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of protein structures in metabolic networks; these studies suggest that enzymes are pervasively recruited, with similar enzymes borrowed to perform related functions in different metabolic pathways (as is evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules.
As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions. For example, in some parasites metabolic processes that are not essential for survival are lost and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms.
Investigation and manipulation
Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell.
An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by network maps of the interactions between just 43 proteins and 40 metabolites: the sequences of genomes provide lists containing up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior. These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis to classify human diseases into groups that share common proteins or metabolites.
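As a sketch of how such network models are posed computationally, the core of flux balance analysis is a linear program: choose reaction fluxes v that satisfy steady-state mass balance S·v = 0 and capacity bounds while maximizing an objective such as biomass production. The three-reaction network below is a made-up toy example, not the human model mentioned above:

```python
# Minimal flux-balance-analysis sketch on a hypothetical toy network.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
# R1: uptake -> A;   R2: A -> B;   R3: B -> biomass (the objective flux)
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])

bounds = [(0, 10), (0, None), (0, None)]  # nutrient uptake capped at 10 units

# linprog minimizes, so negate the objective to maximize flux through R3.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)  # expected: [10, 10, 10]
```

At the optimum every flux equals the uptake bound, illustrating how a single capacity constraint propagates through the whole chain.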
Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to take in a wide range of nutrients and produce a large variety of products and complex macromolecules using relatively few common intermediate currencies.
A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and aid the production of drugs such as antibiotics or industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes.
History
The term metabolism is derived from the Ancient Greek word μεταβολή (metabolē, "a change"), which in turn derives from μεταβάλλειν (metaballein, "to change").
Greek philosophy
Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work titled Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography) which included the following phrase "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change."
Application of the scientific method and modern metabolic theories
The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies, to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleep, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration".
In these early studies, the mechanisms of these metabolic processes had not been identified and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with Friedrich Wöhler's 1828 paper on the chemical synthesis of urea (notable as the first organic compound prepared from wholly inorganic precursors), proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry.
It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginnings of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle.
See also
, a "metabolism first" theory of the origin of life
Microphysiometry
Oncometabolism
References
Further reading
Introductory
Advanced
External links
General information
The Biochemistry of Metabolism (archived 8 March 2005)
Sparknotes SAT biochemistry. Overview of biochemistry; school level.
MIT Biology Hypertextbook. Undergraduate-level guide to molecular biology.
Human metabolism
Topics in Medical Biochemistry. Guide to human metabolic pathways; school level.
THE Medical Biochemistry Page. Comprehensive resource on human metabolism.
Databases
Flow Chart of Metabolic Pathways at ExPASy
IUBMB-Nicholson Metabolic Pathways Chart
SuperCYP: Database for Drug-Cytochrome-Metabolism
Metabolic pathways
Metabolism reference Pathway
Underwater diving physiology | 0.793726 | 0.999218 | 0.793105 |
Abiotic component | In biology and ecology, abiotic components or abiotic factors are non-living chemical and physical parts of the environment that affect living organisms and the functioning of ecosystems. Abiotic factors and the phenomena associated with them underpin biology as a whole. They affect a plethora of species across all environmental conditions, both marine and terrestrial. Humans can create or change abiotic factors in a species' environment. For instance, fertilizers can affect a snail's habitat, and greenhouse gas emissions from human activity can change marine pH levels.
Abiotic components include physical conditions and non-living resources that affect living organisms in terms of growth, maintenance, and reproduction. Resources are distinguished as substances or objects in the environment required by one organism and consumed or otherwise made unavailable for use by other organisms. Component degradation of a substance occurs by chemical or physical processes, e.g. hydrolysis. All non-living components of an ecosystem, such as atmospheric conditions and water resources, are called abiotic components.
Factors
In biology, abiotic factors can include water, light, radiation, temperature, humidity, atmosphere, acidity, salinity, precipitation, altitude, minerals, tides, rain, dissolved oxygen, nutrients, and soil. The macroscopic climate often influences each of the above. Pressure and sound waves may also be considered in the context of marine or sub-terrestrial environments. Abiotic factors in ocean environments also include aerial exposure, substrate, water clarity, solar energy and tides.
Consider the differences in the mechanics of C3, C4, and CAM plants in regulating the influx of carbon dioxide to the Calvin–Benson cycle in relation to their abiotic stressors. C3 plants have no mechanisms to manage photorespiration, whereas C4 and CAM plants use a separate PEP carboxylase enzyme to concentrate CO2, minimizing photorespiration and thus increasing the yield of photosynthesis in certain high-energy environments.
Examples
Many Archaea require very high temperatures, pressures or unusual concentrations of chemical substances such as sulfur, owing to their specialization for extreme conditions. In addition, fungi have evolved to tolerate the temperature, humidity, and stability of their environments.
For example, there is a significant difference in access to both water and humidity between temperate rain forests and deserts. This difference in water availability causes a diversity in the organisms that survive in these areas. These differences in abiotic components alter the species present, both by creating boundaries of what species can survive within the environment and by influencing competition between two species. Abiotic factors such as salinity can give one species a competitive advantage over another, creating pressures that lead to speciation and that drive species toward or away from generalist and specialist competitor roles.
See also
Biotic component, a living part of an ecosystem that affects and shapes it.
Abiogenesis, the gradual process of increasing complexity of non-living into living matter.
Nitrogen cycle
Phosphorus cycle
References
Environmental science | 0.795372 | 0.99639 | 0.7925 |
Landscape ecology | Landscape ecology is the science of studying and improving relationships between ecological processes in the environment and particular ecosystems. This is done within a variety of landscape scales, development spatial patterns, and organizational levels of research and policy. Landscape ecology can be described as the science of "landscape diversity" as the synergetic result of biodiversity and geodiversity.
As a highly interdisciplinary field in systems science, landscape ecology integrates biophysical and analytical approaches with humanistic and holistic perspectives across the natural sciences and social sciences. Landscapes are spatially heterogeneous geographic areas characterized by diverse interacting patches or ecosystems, ranging from relatively natural terrestrial and aquatic systems such as forests, grasslands, and lakes to human-dominated environments including agricultural and urban settings.
The most salient characteristics of landscape ecology are its emphasis on the relationship among pattern, process and scales, and its focus on broad-scale ecological and environmental issues. These necessitate the coupling between biophysical and socioeconomic sciences. Key research topics in landscape ecology include ecological flows in landscape mosaics, land use and land cover change, scaling, relating landscape pattern analysis with ecological processes, and landscape conservation and sustainability. Landscape ecology also studies the role of human impacts on landscape diversity in the development and spreading of new human pathogens that could trigger epidemics.
Terminology
The German term Landschaftsökologie (thus "landscape ecology") was coined by German geographer Carl Troll in 1939. He developed this terminology and many early concepts of landscape ecology as part of his early work, which consisted of applying aerial photograph interpretation to studies of interactions between environment and vegetation.
Explanation
Heterogeneity is the measure of how parts of a landscape differ from one another. Landscape ecology looks at how this spatial structure affects organism abundance at the landscape level, as well as the behavior and functioning of the landscape as a whole. This includes studying the influence of pattern, or the internal order of a landscape, on process, or the continuous operation of functions of organisms. Landscape ecology also includes geomorphology as applied to the design and architecture of landscapes. Geomorphology is the study of how geological formations are responsible for the structure of a landscape.
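Heterogeneity is often quantified with landscape pattern metrics. The sketch below, using a made-up 5×5 land-cover raster rather than data from any study, computes the Shannon diversity index over cover-class proportions; higher values indicate a more heterogeneous mosaic:

```python
# Shannon diversity of land-cover classes in a categorical raster
# (illustrative toy data, not from a real landscape).
import numpy as np

landscape = np.array([
    [1, 1, 2, 2, 2],
    [1, 1, 2, 3, 3],
    [1, 2, 2, 3, 3],
    [4, 4, 2, 3, 3],
    [4, 4, 4, 3, 3],
])  # integer codes for cover classes (e.g. forest, grassland, water, urban)

_, counts = np.unique(landscape, return_counts=True)
p = counts / counts.sum()              # proportion of cells in each class
shannon_h = -np.sum(p * np.log(p))     # H' = -sum(p_i * ln p_i)
evenness = shannon_h / np.log(len(p))  # Pielou's evenness, between 0 and 1
print(f"H' = {shannon_h:.3f}, evenness = {evenness:.3f}")
```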
History
Evolution of theory
One central landscape ecology theory originated from MacArthur & Wilson's The Theory of Island Biogeography. This work considered the biodiversity on islands as the result of competing forces of colonization from a mainland stock and stochastic extinction. The concepts of island biogeography were generalized from physical islands to abstract patches of habitat by Levins' metapopulation model (which can be applied e.g. to forest islands in the agricultural landscape). This generalization spurred the growth of landscape ecology by providing conservation biologists a new tool to assess how habitat fragmentation affects population viability. Recent growth of landscape ecology owes much to the development of geographic information systems (GIS) and the availability of large-extent habitat data (e.g. remotely sensed datasets).
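In symbols, Levins' patch-occupancy model (the standard formulation) tracks the fraction p of habitat patches occupied, given a colonization rate c and an extinction rate e:

$$\frac{dp}{dt} = c\,p\,(1 - p) - e\,p, \qquad p^{*} = 1 - \frac{e}{c},$$

so the metapopulation persists only when c > e; habitat fragmentation that lowers colonization or raises extinction can push the equilibrium occupancy p* to zero.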
Development as a discipline
Landscape ecology developed in Europe from historical planning on human-dominated landscapes. Concepts from general ecology theory were integrated in North America. While general ecology theory and its sub-disciplines focused on the study of more homogenous, discrete community units organized in a hierarchical structure (typically as ecosystems, populations, species, and communities), landscape ecology built upon heterogeneity in space and time. It frequently included human-caused landscape changes in theory and application of concepts.
By 1980, landscape ecology was a discrete, established discipline. Its consolidation was marked by the organization of the International Association for Landscape Ecology (IALE) in 1982. Landmark book publications, including those by Naveh and Lieberman and by Forman and Godron, defined the scope and goals of the discipline. Forman wrote that although the study of "the ecology of spatial configuration at the human scale" was barely a decade old, there was strong potential for theory development and application of the conceptual framework.
Today, theory and application of landscape ecology continues to develop through a need for innovative applications in a changing landscape and environment. Landscape ecology relies on advanced technologies such as remote sensing, GIS, and models. There has been associated development of powerful quantitative methods to examine the interactions of patterns and processes. An example would be determining the amount of carbon present in the soil based on landform over a landscape, derived from GIS maps, vegetation types, and rainfall data for a region. Remote sensing work has been used to extend landscape ecology to the field of predictive vegetation mapping, for instance by Janet Franklin.
Definitions/conceptions of landscape ecology
Nowadays, at least six different conceptions of landscape ecology can be identified: one group tending toward the more disciplinary concept of ecology (subdiscipline of biology; in conceptions 2, 3, and 4) and another group—characterized by the interdisciplinary study of relations between human societies and their environment—inclined toward the integrated view of geography (in conceptions 1, 5, and 6):
Interdisciplinary analysis of subjectively defined landscape units (e.g. Neef School): Landscapes are defined in terms of uniformity in land use. Landscape ecology explores the landscape's natural potential in terms of functional utility for human societies. To analyse this potential, it is necessary to draw on several natural sciences.
Topological ecology at the landscape scale: 'Landscape' is defined as a heterogeneous land area composed of a cluster of interacting ecosystems (woods, meadows, marshes, villages, etc.) that is repeated in similar form throughout. It is explicitly stated that landscapes are areas at a kilometres wide human scale of perception, modification, etc. Landscape ecology describes and explains the landscapes' characteristic patterns of ecosystems and investigates the flux of energy, mineral nutrients, and species among their component ecosystems, providing important knowledge for addressing land-use issues.
Organism-centered, multi-scale topological ecology (e.g. John A. Wiens): Explicitly rejecting views expounded by Troll, Zonneveld, Naveh, Forman & Godron, etc., landscape and landscape ecology are defined independently of human perceptions, interests, and modifications of nature. 'Landscape' is defined – regardless of scale – as the 'template' on which spatial patterns influence ecological processes. Not humans, but rather the respective species being studied is the point of reference for what constitutes a landscape.
Topological ecology at the landscape level of biological organisation (e.g. Urban et al.): On the basis of ecological hierarchy theory, it is presupposed that nature is working at multiple scales and has different levels of organisation which are part of a rate-structured, nested hierarchy. Specifically, it is claimed that, above the ecosystem level, a landscape level exists which is generated and identifiable by high interaction intensity between ecosystems, a specific interaction frequency and, typically, a corresponding spatial scale. Landscape ecology is defined as ecology that focuses on the influence exerted by spatial and temporal patterns on the organisation of, and interaction among, functionally integrated multispecies ecosystems.
Analysis of social-ecological systems using the natural and social sciences and humanities (e.g. Leser; Naveh; Zonneveld): Landscape ecology is defined as an interdisciplinary super-science that explores the relationship between human societies and their specific environment, making use of not only various natural sciences, but also social sciences and humanities. This conception is grounded in the assumption that social systems are linked to their specific ambient ecological system in such a way that both systems together form a co-evolutionary, self-organising unity called 'landscape'. Societies' cultural, social and economic dimensions are regarded as an integral part of the global ecological hierarchy, and landscapes are claimed to be the manifest systems of the 'total human ecosystem' (Naveh) which encompasses both the physical ('geospheric') and mental ('noospheric') spheres.
Ecology guided by cultural meanings of lifeworldly landscapes (frequently pursued in practice but not defined, but see, e.g., Hard; Trepl): Landscape ecology is defined as ecology that is guided by an external aim, namely, to maintain and develop lifeworldly landscapes. It provides the ecological knowledge necessary to achieve these goals. It investigates how to sustain and develop those populations and ecosystems which (i) are the material 'vehicles' of lifeworldly, aesthetic and symbolic landscapes and, at the same time, (ii) meet societies' functional requirements, including provisioning, regulating, and supporting ecosystem services. Thus landscape ecology is concerned mainly with the populations and ecosystems which have resulted from traditional, regionally specific forms of land use.
Relationship to ecological theory
Some research programmes of landscape ecology theory, namely those standing in the European tradition, may be slightly outside of the "classical and preferred domain of scientific disciplines" because of the large, heterogeneous areas of study. However, general ecology theory is central to landscape ecology theory in many aspects. Landscape ecology consists of four main principles: the development and dynamics of spatial heterogeneity, interactions and exchanges across heterogeneous landscapes, influences of spatial heterogeneity on biotic and abiotic processes, and the management of spatial heterogeneity. The main difference from traditional ecological studies, which frequently assume that systems are spatially homogenous, is the consideration of spatial patterns.
Important terms
Landscape ecology not only created new terms, but also incorporated existing ecological terms in new ways. Many of the terms used in landscape ecology are as interconnected and interrelated as the discipline itself.
Landscape
Certainly, 'landscape' is a central concept in landscape ecology. It is, however, defined in quite different ways. For example: Carl Troll conceives of landscape not as a mental construct but as an objectively given 'organic entity', a harmonic individuum of space.
Ernst Neef defines landscapes as sections within the uninterrupted earth-wide interconnection of geofactors which are defined as such on the basis of their uniformity in terms of a specific land use, and are thus defined in an anthropocentric and relativistic way.
According to Richard Forman and Michel Godron, a landscape is a heterogeneous land area composed of a cluster of interacting ecosystems that is repeated in similar form throughout, whereby they list woods, meadows, marshes and villages as examples of a landscape's ecosystems, and state that a landscape is an area at least a few kilometres wide.
John A. Wiens opposes the traditional view expounded by Carl Troll, Isaak S. Zonneveld, Zev Naveh, Richard T. T. Forman/Michel Godron and others that landscapes are arenas in which humans interact with their environments on a kilometre-wide scale; instead, he defines 'landscape'—regardless of scale—as "the template on which spatial patterns influence ecological processes". Some define 'landscape' as an area containing two or more ecosystems in close proximity.
Scale and heterogeneity (incorporating composition, structure, and function)
A main concept in landscape ecology is scale. Scale represents the real world as translated onto a map, relating distance on a map image and the corresponding distance on earth. Scale is also the spatial or temporal measure of an object or a process, or amount of spatial resolution. Components of scale include composition, structure, and function, which are all important ecological concepts. Applied to landscape ecology, composition refers to the number of patch types (see below) represented on a landscape and their relative abundance. For example, the amount of forest or wetland, the length of forest edge, or the density of roads can be aspects of landscape composition. Structure is determined by the composition, the configuration, and the proportion of different patches across the landscape, while function refers to how each element in the landscape interacts based on its life cycle events. Pattern is the term for the contents and internal order of a heterogeneous area of land.
A landscape with structure and pattern implies that it has spatial heterogeneity, or the uneven distribution of objects across the landscape. Heterogeneity is a key element of landscape ecology that separates this discipline from other branches of ecology. Landscape heterogeneity can also be quantified with agent-based methods.
Patch and mosaic
Patch, a term fundamental to landscape ecology, is defined as a relatively homogeneous area that differs from its surroundings. Patches are the basic unit of the landscape that change and fluctuate, a process called patch dynamics. Patches have a definite shape and spatial configuration, and can be described compositionally by internal variables such as number of trees, number of tree species, height of trees, or other similar measurements.
Matrix is the "background ecological system" of a landscape with a high degree of connectivity. Connectivity is the measure of how connected or spatially continuous a corridor, network, or matrix is. For example, a forested landscape (matrix) with fewer gaps in forest cover (open patches) will have higher connectivity. Corridors have important functions as strips of a particular type of landscape differing from adjacent land on both sides. A network is an interconnected system of corridors while mosaic describes the pattern of patches, corridors, and matrix that form a landscape in its entirety.
Boundary and edge
Landscape patches have a boundary between them which can be defined or fuzzy. The zone composed of the edges of adjacent ecosystems is the boundary. Edge means the portion of an ecosystem near its perimeter, where influences of the adjacent patches can cause an environmental difference between the interior of the patch and its edge. This edge effect includes a distinctive species composition or abundance. For example, when a landscape is a mosaic of perceptibly different types, such as a forest adjacent to a grassland, the edge is the location where the two types adjoin. In a continuous landscape, such as a forest giving way to open woodland, the exact edge location is fuzzy and is sometimes determined by a local gradient exceeding a threshold, such as the point where the tree cover falls below thirty-five percent.
Ecotones, ecoclines, and ecotopes
A type of boundary is the ecotone, or the transitional zone between two communities. Ecotones can arise naturally, such as a lakeshore, or can be human-created, such as a cleared agricultural field from a forest. The ecotonal community retains characteristics of each bordering community and often contains species not found in the adjacent communities. Classic examples of ecotones include fencerows, forest to marshlands transitions, forest to grassland transitions, or land-water interfaces such as riparian zones in forests. Characteristics of ecotones include vegetational sharpness, physiognomic change, occurrence of a spatial community mosaic, many exotic species, ecotonal species, spatial mass effect, and species richness higher or lower than either side of the ecotone.
An ecocline is another type of landscape boundary, but it is a gradual and continuous change in environmental conditions of an ecosystem or community. Ecoclines help explain the distribution and diversity of organisms within a landscape because certain organisms survive better under certain conditions, which change along the ecocline. They contain heterogeneous communities which are considered more environmentally stable than those of ecotones. An ecotope is a spatial term representing the smallest ecologically distinct unit in mapping and classification of landscapes. Relatively homogeneous, they are spatially explicit landscape units used to stratify landscapes into ecologically distinct features. They are useful for the measurement and mapping of landscape structure, function, and change over time, and to examine the effects of disturbance and fragmentation.
Disturbance and fragmentation
Disturbance is an event that significantly alters the pattern of variation in the structure or function of a system. Fragmentation is the breaking up of a habitat, ecosystem, or land-use type into smaller parcels. Disturbance is generally considered a natural process. Fragmentation causes land transformation, an important process in landscapes as development occurs.
An important consequence of repeated, random clearing (whether by natural disturbance or human activity) is that contiguous cover can break down into isolated patches. This happens when the area cleared exceeds a critical level, which means that landscapes exhibit two phases: connected and disconnected.
Theory
Landscape ecology theory stresses the role of human impacts on landscape structures and functions. It also proposes ways for restoring degraded landscapes. Landscape ecology explicitly includes humans as entities that cause functional changes on the landscape. Landscape ecology theory includes the landscape stability principle, which emphasizes the importance of landscape structural heterogeneity in developing resistance to disturbances, recovery from disturbances, and promoting total system stability. This principle is a major contribution to general ecological theories which highlight the importance of relationships among the various components of the landscape.
Integrity of landscape components helps maintain resistance to external threats, including development and land transformation by human activity. Analysis of land use change has included a strongly geographical approach which has led to the acceptance of the idea of multifunctional properties of landscapes. There are still calls for a more unified theory of landscape ecology due to differences in professional opinion among ecologists and its interdisciplinary approach (Bastian 2001).
An important related theory is hierarchy theory, which refers to how systems of discrete functional elements operate when linked at two or more scales. For example, a forested landscape might be hierarchically composed of drainage basins, which in turn are composed of local ecosystems, which are in turn composed of individual trees and gaps. Recent theoretical developments in landscape ecology have emphasized the relationship between pattern and process, as well as the effect that changes in spatial scale has on the potential to extrapolate information across scales. Several studies suggest that the landscape has critical thresholds at which ecological processes will show dramatic changes, such as the complete transformation of a landscape by an invasive species due to small changes in temperature characteristics which favor the invasive's habitat requirements.
Application
Research directions
Developments in landscape ecology illustrate the important relationships between spatial patterns and ecological processes. These developments incorporate quantitative methods that link spatial patterns and ecological processes at broad spatial and temporal scales. This linkage of time, space, and environmental change can assist managers in applying plans to solve environmental problems. The increased attention in recent years on spatial dynamics has highlighted the need for new quantitative methods that can analyze patterns, determine the importance of spatially explicit processes, and develop reliable models. Multivariate analysis techniques are frequently used to examine landscape level vegetation patterns. Studies use statistical techniques, such as cluster analysis, canonical correspondence analysis (CCA), or detrended correspondence analysis (DCA), for classifying vegetation. Gradient analysis is another way to determine the vegetation structure across a landscape or to help delineate critical wetland habitat for conservation or mitigation purposes (Choesin and Boerner 2002).
Climate change is another major component in structuring current research in landscape ecology. Ecotones, as a basic unit in landscape studies, may have significance for management under climate change scenarios, since change effects are likely to be seen at ecotones first because of the unstable nature of a fringe habitat. Research in northern regions has examined landscape ecological processes, such as the accumulation of snow, melting, freeze-thaw action, percolation, soil moisture variation, and temperature regimes through long-term measurements in Norway. The study analyzes gradients across space and time between ecosystems of the central high mountains to determine relationships between distribution patterns of animals in their environment. Looking at where animals live, and how vegetation shifts over time, may provide insight into changes in snow and ice over long periods of time across the landscape as a whole.
Other landscape-scale studies maintain that human impact is likely the main determinant of landscape pattern over much of the globe. Landscapes may become substitutes for biodiversity measures because plant and animal composition differs between samples taken from sites within different landscape categories. Taxa, or different species, can "leak" from one habitat into another, which has implications for landscape ecology. As human land use practices expand and continue to increase the proportion of edges in landscapes, the effects of this leakage across edges on assemblage integrity may become more significant in conservation. This is because taxa may be conserved across landscape levels, if not at local levels.
Land change modeling
Land change modeling is an application of landscape ecology designed to predict future changes in land use. Land change models are used in urban planning, geography, GIS, and other disciplines to gain a clear understanding of the course of a landscape. In recent years, much of the Earth's land cover has changed rapidly, whether from deforestation or the expansion of urban areas.
Relationship to other disciplines
Landscape ecology has been incorporated into a variety of ecological subdisciplines. For example, it is closely linked to land change science, the interdisciplinary study of land use and land cover change and their effects on surrounding ecology. Another recent development has been the more explicit consideration of spatial concepts and principles applied to the study of lakes, streams, and wetlands in the field of landscape limnology. Seascape ecology is a marine and coastal application of landscape ecology. In addition, landscape ecology has important links to application-oriented disciplines such as agriculture and forestry. In agriculture, landscape ecology has introduced new options for the management of environmental threats brought about by the intensification of agricultural practices. Agriculture has always been a strong human impact on ecosystems.
In forestry, from structuring stands for fuelwood and timber to ordering stands across landscapes to enhance aesthetics, consumer needs have affected conservation and use of forested landscapes. Landscape forestry provides methods, concepts, and analytic procedures for managing forests at the landscape scale. Landscape ecology has been cited as a contributor to the development of fisheries biology as a distinct biological science discipline, and is frequently incorporated in study design for wetland delineation in hydrology. It has helped shape integrated landscape management. Lastly, landscape ecology has been very influential for progressing sustainability science and sustainable development planning. For example, a recent study assessed sustainable urbanization across Europe using evaluation indices, country-landscapes, and landscape ecology tools and methods.
Landscape ecology has also been combined with population genetics to form the field of landscape genetics, which addresses how landscape features influence the population structure and gene flow of plant and animal populations across space and time, and how the quality of the intervening landscape, known as the "matrix", influences spatial variation. After the term was coined in 2003, the field of landscape genetics had expanded to over 655 studies by 2010, and continues to grow today. As genetic data has become more readily accessible, it is increasingly being used by ecologists to answer novel evolutionary and ecological questions, many with regard to how landscapes affect evolutionary processes, especially in human-modified landscapes, which are experiencing biodiversity loss.
See also
Agroecology
Biogeography
Conservation communities
Concepts and Techniques in Modern Geography
Ecology
Ecotope
European Landscape Convention
Historical ecology
Integrated landscape management
Land change modeling
Landscape epidemiology
Landscape limnology
Landscape planning
Landscape connectivity
Patch dynamics
Total human ecosystem
Sustainable landscaping
Landscape architecture
Land development
Tobler's first law of geography
Tobler's second law of geography
References
External links
Computer simulation "Substrate", an applet that creates fractal iterations resembling an urban streetscape. Algorithm written 2004 by Jared Tarbell
International Association for Landscape Ecology
Napolisoundscape Urban Space Research
Population genetics | Population genetics is a subfield of genetics that deals with genetic differences within and among populations, and is a part of evolutionary biology. Studies in this branch of biology examine such phenomena as adaptation, speciation, and population structure.
Population genetics was a vital ingredient in the emergence of the modern evolutionary synthesis. Its primary founders were Sewall Wright, J. B. S. Haldane and Ronald Fisher, who also laid the foundations for the related discipline of quantitative genetics. Traditionally a highly mathematical discipline, modern population genetics encompasses theoretical, laboratory, and field work. Population genetic models are used both for statistical inference from DNA sequence data and for proof/disproof of concept.
What sets population genetics apart from newer, more phenotypic approaches to modelling evolution, such as evolutionary game theory and adaptive dynamics, is its emphasis on such genetic phenomena as dominance, epistasis, the degree to which genetic recombination breaks linkage disequilibrium, and the random phenomena of mutation and genetic drift. This makes it appropriate for comparison to population genomics data.
History
Population genetics began as a reconciliation of Mendelian inheritance and biostatistics models. Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural or sexual selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. According to this principle, the frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.
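The principle is easy to verify numerically. Below is a minimal sketch in Python (the function names are illustrative, not from any standard library) showing that one round of random mating brings genotype frequencies to Hardy–Weinberg proportions, and that the allele frequency p stays constant thereafter:

```python
# Hardy-Weinberg sketch: under random mating with no selection, mutation,
# migration, or drift, allele frequencies do not change.

def allele_freq(f_AA, f_Aa, f_aa):
    """Frequency p of allele A implied by the genotype frequencies."""
    return f_AA + 0.5 * f_Aa

def random_mating(f_AA, f_Aa, f_aa):
    """One generation of random mating: genotypes go to p^2, 2pq, q^2."""
    p = allele_freq(f_AA, f_Aa, f_aa)
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

geno = (0.5, 0.0, 0.5)   # start far from Hardy-Weinberg proportions
for gen in range(3):
    print(gen, [round(f, 3) for f in geno], "p =", allele_freq(*geno))
    geno = random_mating(*geno)
# After one generation the genotypes settle at (0.25, 0.5, 0.25),
# and p remains 0.5 in every generation.
```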
The next key step was the work of the British biologist and statistician Ronald Fisher. In a series of papers starting in 1918 and culminating in his 1930 book The Genetical Theory of Natural Selection, Fisher showed that the continuous variation measured by the biometricians could be produced by the combined action of many discrete genes, and that natural selection could change allele frequencies in a population, resulting in evolution. In a series of papers beginning in 1924, another British geneticist, J. B. S. Haldane, worked out the mathematics of allele frequency change at a single gene locus under a broad range of conditions. Haldane also applied statistical analysis to real-world examples of natural selection, such as peppered moth evolution and industrial melanism, and showed that selection coefficients could be larger than Fisher assumed, leading to more rapid adaptive evolution as a camouflage strategy following increased pollution.
The American biologist Sewall Wright, who had a background in animal breeding experiments, focused on combinations of interacting genes, and the effects of inbreeding on small, relatively isolated populations that exhibited genetic drift. In 1932 Wright introduced the concept of an adaptive landscape and argued that genetic drift and inbreeding could drive a small, isolated sub-population away from an adaptive peak, allowing natural selection to drive it towards different adaptive peaks.
The work of Fisher, Haldane and Wright founded the discipline of population genetics. This integrated natural selection with Mendelian genetics, which was the critical first step in developing a unified theory of how evolution worked. John Maynard Smith was Haldane's pupil, whilst W. D. Hamilton was influenced by the writings of Fisher. The American George R. Price worked with both Hamilton and Maynard Smith. American Richard Lewontin and Japanese Motoo Kimura were influenced by Wright and Haldane.
Modern synthesis
The mathematics of population genetics were originally developed as the beginning of the modern synthesis. Authors such as Beatty have asserted that population genetics defines the core of the modern synthesis. For the first few decades of the 20th century, most field naturalists continued to believe that Lamarckism and orthogenesis provided the best explanation for the complexity they observed in the living world. During the modern synthesis, these ideas were purged, and only evolutionary causes that could be expressed in the mathematical framework of population genetics were retained. Consensus was reached as to which evolutionary factors might influence evolution, but not as to the relative importance of the various factors.
Theodosius Dobzhansky, a postdoctoral worker in T. H. Morgan's lab, had been influenced by the work on genetic diversity by Russian geneticists such as Sergei Chetverikov. He helped to bridge the divide between the foundations of microevolution developed by the population geneticists and the patterns of macroevolution observed by field biologists, with his 1937 book Genetics and the Origin of Species. Dobzhansky examined the genetic diversity of wild populations and showed that, contrary to the assumptions of the population geneticists, these populations had large amounts of genetic diversity, with marked differences between sub-populations. The book also took the highly mathematical work of the population geneticists and put it into a more accessible form. Many more biologists were influenced by population genetics via Dobzhansky than were able to read the highly mathematical works in the original.
In Great Britain E. B. Ford, the pioneer of ecological genetics, continued throughout the 1930s and 1940s to empirically demonstrate the power of selection due to ecological factors including the ability to maintain genetic diversity through genetic polymorphisms such as human blood types. Ford's work, in collaboration with Fisher, contributed to a shift in emphasis during the modern synthesis towards natural selection as the dominant force.
Neutral theory and origin-fixation dynamics
The original, modern synthesis view of population genetics assumes that mutations provide ample raw material, and focuses only on the change in frequency of alleles within populations. The main processes influencing allele frequencies are natural selection, genetic drift, gene flow and recurrent mutation. Fisher and Wright had some fundamental disagreements about the relative roles of selection and drift.
The availability of molecular data on all genetic differences led to the neutral theory of molecular evolution. In this view, many mutations are deleterious and so never observed, and most of the remainder are neutral, i.e. are not under selection. With the fate of each neutral mutation left to chance (genetic drift), the direction of evolutionary change is driven by which mutations occur, and so cannot be captured by models of change in the frequency of (existing) alleles alone.
The origin-fixation view of population genetics generalizes this approach beyond strictly neutral mutations, and sees the rate at which a particular change happens as the product of the mutation rate and the fixation probability.
Four processes
Selection
Natural selection, which includes sexual selection, is the fact that some traits make it more likely for an organism to survive and reproduce. Population genetics describes natural selection by defining fitness as a propensity or probability of survival and reproduction in a particular environment. Fitness is normally denoted by the symbol w and often written as w = 1 - s, where s is the selection coefficient. Natural selection acts on phenotypes, so population genetic models assume relatively simple relationships to predict the phenotype and hence fitness from the allele at one or a small number of loci. In this way, natural selection converts differences in the fitness of individuals with different phenotypes into changes in allele frequency in a population over successive generations.
Before the advent of population genetics, many biologists doubted that small differences in fitness were sufficient to make a large difference to evolution. Population geneticists addressed this concern in part by comparing selection to genetic drift. Selection can overcome genetic drift when s is greater than 1 divided by the effective population size. When this criterion is met, the probability that a new advantageous mutant becomes fixed is approximately equal to 2s. The time until fixation of such an allele is approximately (2/s) ln(2N) generations, where N is the effective population size.
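As an illustration, the 2s approximation can be checked against a direct simulation. The following minimal haploid Wright–Fisher sketch (illustrative code under simplifying assumptions, not any standard package) applies deterministic selection and then binomial resampling each generation, and compares the fraction of replicates in which a single new mutant copy fixes with 2s:

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_fraction(N, s, reps=5000):
    """Fraction of replicate haploid Wright-Fisher populations (size N)
    in which a single new copy of a beneficial allele (fitness 1+s) fixes."""
    fixed = 0
    for _ in range(reps):
        p = 1.0 / N                            # one new mutant copy
        while 0.0 < p < 1.0:
            p_sel = p * (1 + s) / (1 + p * s)  # selection step
            p = rng.binomial(N, p_sel) / N     # drift step: resample N offspring
        fixed += (p == 1.0)
    return fixed / reps

N, s = 1000, 0.02   # s = 0.02 is well above 1/N = 0.001, so selection beats drift
print("simulated fixation probability:", fixation_fraction(N, s))
print("2s approximation:              ", 2 * s)
```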
Dominance
Dominance means that the phenotypic and/or fitness effect of one allele at a locus depends on which allele is present in the second copy for that locus. Consider three genotypes at one locus, with the following fitness values:

Genotype: A1A1, A1A2, A2A2
Relative fitness: 1, 1 - hs, 1 - s

s is the selection coefficient and h is the dominance coefficient. The value of h yields the following information: h = 0 means A1 is dominant and A2 recessive; h = 1 means A2 is dominant and A1 recessive; 0 < h < 1 yields incomplete dominance; h < 0 yields overdominance (heterozygote advantage); and h > 1 yields underdominance.
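A small worked sketch (with hypothetical parameter values) makes the effect of h concrete: iterating the standard one-locus selection recursion p' = p(p·w11 + q·w12)/w̄ shows that a rare beneficial allele spreads far faster when its benefit is dominant (h = 0) than when it is recessive (h = 1):

```python
def next_p(p, s, h):
    """One generation of selection at a diploid locus with fitnesses
    w(A1A1) = 1, w(A1A2) = 1 - h*s, w(A2A2) = 1 - s."""
    q = 1.0 - p
    w11, w12, w22 = 1.0, 1.0 - h * s, 1.0 - s
    w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22   # mean fitness
    return p * (p * w11 + q * w12) / w_bar

for h in (0.0, 0.5, 1.0):
    p = 0.01                       # A1 starts rare
    for _ in range(100):
        p = next_p(p, s=0.1, h=h)
    print(f"h = {h}: frequency of A1 after 100 generations = {p:.3f}")
# When h = 1 the benefit of A1 is hidden in heterozygotes, so while A1
# is rare it is almost invisible to selection and spreads very slowly.
```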
Epistasis
Epistasis means that the phenotypic and/or fitness effect of an allele at one locus depends on which alleles are present at other loci. Selection does not act on a single locus, but on a phenotype that arises through development from a complete genotype. However, many population genetics models of sexual species are "single locus" models, where the fitness of an individual is calculated as the product of the contributions from each of its loci—effectively assuming no epistasis.
In fact, the genotype to fitness landscape is more complex. Population genetics must either model this complexity in detail, or capture it by some simpler average rule. Empirically, beneficial mutations tend to have a smaller fitness benefit when added to a genetic background that already has high fitness: this is known as diminishing returns epistasis. When deleterious mutations also have a smaller fitness effect on high fitness backgrounds, this is known as "synergistic epistasis". However, the effect of deleterious mutations tends on average to be very close to multiplicative, or can even show the opposite pattern, known as "antagonistic epistasis".
Synergistic epistasis is central to some theories of the purging of mutation load and to the evolution of sexual reproduction.
Mutation
The genetic process of mutation takes place within an individual, resulting in heritable changes to the genetic material. This process is often characterized by a description of the starting and ending states, or the kind of change that has happened at the level of DNA (e.g., a T-to-C mutation, a 1-bp deletion), of genes or proteins (e.g., a null mutation, a loss-of-function mutation), or at a higher phenotypic level (e.g., red-eye mutation). Single-nucleotide changes are frequently the most common type of mutation, but many other types of mutation are possible, and they occur at widely varying rates that may show systematic asymmetries or biases (mutation bias).
Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. This leads to copy-number variation within a population. Duplications are a major source of raw material for evolving new genes. Other types of mutation occasionally create new genes from previously noncoding DNA.
In the distribution of fitness effects (DFE) for new mutations, only a minority of mutations are beneficial. Mutations with gross effects are typically deleterious. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.
This biological process of mutation is represented in population-genetic models in one of two ways, either as a deterministic pressure of recurrent mutation on allele frequencies, or a source of variation. In deterministic theory, evolution begins with a predetermined set of alleles and proceeds by shifts in continuous frequencies, as if the population is infinite. The occurrence of mutations in individuals is represented by a population-level "force" or "pressure" of mutation, i.e., the force of innumerable events of mutation with a scaled magnitude u applied to shifting frequencies f(A1) to f(A2). For instance, in the classic mutation–selection balance model, the force of mutation pressure pushes the frequency of an allele upward, and selection against its deleterious effects pushes the frequency downward, so that a balance is reached at equilibrium, given (in the simplest case) by f = u/s.
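A deterministic iteration (an illustrative sketch with arbitrary parameter values) confirms the f = u/s equilibrium: applying a selection step and then a mutation step each generation, the frequency of the deleterious allele converges to u/s:

```python
def next_freq(f, u, s):
    """Haploid mutation-selection balance: a deleterious allele with
    fitness 1 - s is replenished by one-way mutation at rate u."""
    w_bar = 1.0 - s * f               # mean fitness
    f_sel = f * (1.0 - s) / w_bar     # frequency after selection
    return f_sel + u * (1.0 - f_sel)  # mutation converts wild type to mutant

u, s = 1e-5, 0.01
f = 0.0
for _ in range(5000):
    f = next_freq(f, u, s)
print("equilibrium frequency:", f)    # approaches ~0.001
print("u/s prediction:       ", u / s)
```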
This concept of mutation pressure is mostly useful for considering the implications of deleterious mutation, such as the mutation load and its implications for the evolution of the mutation rate. Transformation of populations by mutation pressure is unlikely. Haldane argued that it would require high mutation rates unopposed by selection, and Kimura concluded even more pessimistically that even this was unlikely, as the process would take too long (see evolution by mutation pressure).
However, evolution by mutation pressure is possible under some circumstances and has long been suggested as a possible cause for the loss of unused traits.
For example, pigments are no longer useful when animals live in the darkness of caves, and tend to be lost. An experimental example involves the loss of sporulation in experimental populations of B. subtilis. Sporulation is a complex trait encoded by many loci, such that the mutation rate for loss of the trait was estimated to be unusually high. Loss of sporulation in this case can occur by recurrent mutation, without requiring selection for the loss of sporulation ability. When there is no selection for loss of function, the speed at which loss evolves depends more on the mutation rate than it does on the effective population size, indicating that it is driven more by mutation than by genetic drift.
The role of mutation as a source of novelty is different from these classical models of mutation pressure.
When population-genetic models include a rate-dependent process of mutational introduction or origination, i.e., a process that introduces new alleles including neutral and beneficial ones, then the properties of mutation may have a more direct impact on the rate and direction of evolution, even if the rate of mutation is very low. That is, the spectrum of mutation may become very important, particularly mutation biases, predictable differences in the rates of occurrence for different types of mutations, because bias in the introduction of variation can impose biases on the course of evolution.
Mutation plays a key role in other classical and recent theories including Muller's ratchet, subfunctionalization, Eigen's concept of an error catastrophe and Lynch's mutational hazard hypothesis.
Genetic drift
Genetic drift is a change in allele frequencies caused by random sampling. That is, the alleles in the offspring are a random sample of those in the parents. Genetic drift may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and are equally likely to make an allele more common as less common.
The effect of genetic drift is larger for alleles present in few copies than when an allele is present in many copies. The population genetics of genetic drift are described using either branching processes or a diffusion equation describing changes in allele frequency. These approaches are usually applied to the Wright-Fisher and Moran models of population genetics. Assuming genetic drift is the only evolutionary force acting on an allele, after t generations in many replicated populations, starting with allele frequencies of p and q, the variance in allele frequency across those populations is V_t = pq[1 - (1 - 1/(2N))^t], where N is the effective population size; the variance approaches its limiting value pq as t becomes large.
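This prediction can be checked with replicate simulations. The sketch below (illustrative code under standard Wright–Fisher assumptions, with 2N gene copies resampled binomially each generation) compares the simulated variance across replicates with the formula above:

```python
import numpy as np

rng = np.random.default_rng(1)

N, p0, t, reps = 100, 0.5, 50, 20000
p = np.full(reps, p0)                     # many replicate populations
for _ in range(t):
    p = rng.binomial(2 * N, p) / (2 * N)  # pure drift: binomial resampling

predicted = p0 * (1 - p0) * (1 - (1 - 1 / (2 * N)) ** t)
print("simulated variance:", round(p.var(), 5))
print("predicted variance:", round(predicted, 5))
```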
Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. No population genetics perspective has ever given genetic drift a central role by itself, but some have made genetic drift important in combination with another non-selective force. The shifting balance theory of Sewall Wright held that the combination of population structure and genetic drift was important. Motoo Kimura's neutral theory of molecular evolution claims that most genetic differences within and between populations are caused by the combination of neutral mutations and genetic drift.
The role of genetic drift by means of sampling error in evolution has been criticized by John H. Gillespie and Will Provine, who argue that selection on linked sites is a more important stochastic force, doing the work traditionally ascribed to genetic drift by means of sampling error. The mathematical properties of genetic draft are different from those of genetic drift. The direction of the random change in allele frequency is autocorrelated across generations.
Gene flow
Because of physical barriers to migration, along with the limited tendency for individuals to move or spread (vagility), and tendency to remain or come back to natal place (philopatry), natural populations rarely all interbreed as may be assumed in theoretical random models (panmixy). There is usually a geographic range within which individuals are more closely related to one another than those randomly selected from the general population. This is described as the extent to which a population is genetically structured.
Genetic structuring can be caused by migration due to historical climate change, species range expansion or current availability of habitat. Gene flow is hindered by mountain ranges, oceans and deserts or even human-made structures such as the Great Wall of China, which has hindered the flow of plant genes.
Gene flow is the exchange of genes between populations or species, breaking down the structure. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Population genetic models can be used to identify which populations show significant genetic isolation from one another, and to reconstruct their history.
Subjecting a population to isolation leads to inbreeding depression. Migration into a population can introduce new genetic variants, potentially contributing to evolutionary rescue. If a significant proportion of individuals or gametes migrate, it can also change allele frequencies, e.g. giving rise to migration load.
In the presence of gene flow, other barriers to hybridization between two diverging populations of an outcrossing species are required for the populations to become new species.
Horizontal gene transfer
Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among prokaryotes. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria.
Linkage
If all genes are in linkage equilibrium, the effect of an allele at one locus can be averaged across the gene pool at other loci. In reality, one allele is frequently found in linkage disequilibrium with genes at other loci, especially with genes located nearby on the same chromosome. Recombination breaks up this linkage disequilibrium too slowly to avoid genetic hitchhiking, where an allele at one locus rises to high frequency because it is linked to an allele under selection at a nearby locus. Linkage also slows down the rate of adaptation, even in sexual populations. The effect of linkage disequilibrium in slowing down the rate of adaptive evolution arises from a combination of the Hill–Robertson effect (delays in bringing beneficial mutations together) and background selection (delays in separating beneficial mutations from deleterious hitchhikers).
Linkage is a problem for population genetic models that treat one gene locus at a time. It can, however, be exploited as a method for detecting the action of natural selection via selective sweeps.
In the extreme case of an asexual population, linkage is complete, and population genetic equations can be derived and solved in terms of a travelling wave of genotype frequencies along a simple fitness landscape. Most microbes, such as bacteria, are asexual. The population genetics of their adaptation have two contrasting regimes. When the product of the beneficial mutation rate and population size is small, asexual populations follow a "successional regime" of origin-fixation dynamics, with adaptation rate strongly dependent on this product. When the product is much larger, asexual populations follow a "concurrent mutations" regime with adaptation rate less dependent on the product, characterized by clonal interference and the appearance of a new beneficial mutation before the last one has fixed.
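The boundary between the two regimes can be sketched with a back-of-envelope comparison (the formulas below are rough heuristics and the parameter values are hypothetical, not taken from the literature): the successional regime holds roughly while the waiting time for the next beneficial mutation destined to establish, about 1/(N·Ub·2s), exceeds the time a sweep takes to complete:

```python
import math

def regime(N, U_b, s):
    """Heuristic classification of an asexual population's adaptive regime.
    The establishment probability of a beneficial mutation is taken as ~2s."""
    t_establish = 1.0 / (N * U_b * 2 * s)  # wait for next establishing mutation
    t_sweep = (2.0 / s) * math.log(N)      # rough time for a sweep to finish
    if t_establish > t_sweep:
        return "successional"
    return "concurrent (clonal interference)"

print(regime(N=1e6, U_b=1e-9, s=0.02))   # small N*U_b -> successional
print(regime(N=1e9, U_b=1e-6, s=0.02))   # large N*U_b -> concurrent
```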
Applications
Explaining levels of genetic variation
Neutral theory predicts that the level of nucleotide diversity in a population will be proportional to the product of the population size and the neutral mutation rate. The fact that levels of genetic diversity vary much less than population sizes do is known as the "paradox of variation". While high levels of genetic diversity were one of the original arguments in favor of neutral theory, the paradox of variation has been one of the strongest arguments against neutral theory.
It is clear that levels of genetic diversity vary greatly within a species as a function of local recombination rate, due to both genetic hitchhiking and background selection. Most current solutions to the paradox of variation invoke some level of selection at linked sites. For example, one analysis suggests that larger populations have more selective sweeps, which remove more neutral genetic diversity. A negative correlation between mutation rate and population size may also contribute.
Life history affects genetic diversity more than population history does, e.g. r-strategists have more genetic diversity.
Detecting selection
Population genetics models are used to infer which genes are undergoing selection. One common approach is to look for regions of high linkage disequilibrium and low genetic variance along the chromosome, to detect recent selective sweeps.
A second common approach is the McDonald–Kreitman test which compares the amount of variation within a species (polymorphism) to the divergence between species (substitutions) at two types of sites; one assumed to be neutral. Typically, synonymous sites are assumed to be neutral. Genes undergoing positive selection have an excess of divergent sites relative to polymorphic sites. The test can also be used to obtain a genome-wide estimate of the proportion of substitutions that are fixed by positive selection, α. According to the neutral theory of molecular evolution, this number should be near zero. High numbers have therefore been interpreted as a genome-wide falsification of neutral theory.
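For illustration, α can be computed from the four counts of a McDonald–Kreitman table as α = 1 - (Ds·Pn)/(Dn·Ps), where Dn and Ds are non-synonymous and synonymous substitution counts and Pn and Ps the corresponding polymorphism counts. A minimal sketch with hypothetical counts:

```python
def mk_alpha(Dn, Ds, Pn, Ps):
    """Proportion of substitutions fixed by positive selection, estimated
    from a McDonald-Kreitman table (synonymous sites assumed neutral)."""
    return 1.0 - (Ds * Pn) / (Dn * Ps)

# Hypothetical counts with an excess of non-synonymous divergence:
print(mk_alpha(Dn=80, Ds=100, Pn=20, Ps=50))   # -> 0.5
```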
Demographic inference
The simplest test for population structure in a sexually reproducing, diploid species is to see whether genotype frequencies follow Hardy-Weinberg proportions as a function of allele frequencies. For example, in the simplest case of a single locus with two alleles denoted A and a at frequencies p and q, random mating predicts freq(AA) = p² for the AA homozygotes, freq(aa) = q² for the aa homozygotes, and freq(Aa) = 2pq for the heterozygotes. In the absence of population structure, Hardy-Weinberg proportions are reached within 1–2 generations of random mating. More typically, there is an excess of homozygotes, indicative of population structure. The extent of this excess can be quantified as the inbreeding coefficient, F.
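A minimal sketch of the calculation (with hypothetical genotype counts) estimates F as one minus the ratio of observed to Hardy-Weinberg-expected heterozygosity:

```python
def inbreeding_F(n_AA, n_Aa, n_aa):
    """F = 1 - H_obs / H_exp at a biallelic locus; F > 0 indicates an
    excess of homozygotes, e.g. due to population structure."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    h_obs = n_Aa / n                  # observed heterozygosity
    h_exp = 2 * p * (1 - p)           # Hardy-Weinberg expectation 2pq
    return 1.0 - h_obs / h_exp

print(inbreeding_F(n_AA=60, n_Aa=30, n_aa=10))   # -> 0.2
```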
Individuals can be clustered into K subpopulations. The degree of population structure can then be calculated using FST, which is a measure of the proportion of genetic variance that can be explained by population structure. Genetic population structure can then be related to geographic structure, and genetic admixture can be detected.
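As a sketch (assuming equally sized subpopulations and hypothetical allele frequencies), FST for a biallelic locus can be computed as (HT - HS)/HT, where HT is the expected heterozygosity of the pooled population and HS the average expected heterozygosity within subpopulations:

```python
def fst(freqs):
    """Wright's FST for a biallelic locus from per-subpopulation allele
    frequencies, assuming equally sized subpopulations."""
    p_bar = sum(freqs) / len(freqs)
    h_t = 2 * p_bar * (1 - p_bar)                           # total heterozygosity
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)  # mean within-subpop
    return (h_t - h_s) / h_t

print(fst([0.2, 0.8]))   # strongly structured -> 0.36
print(fst([0.5, 0.5]))   # no differentiation -> 0.0
```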
Coalescent theory relates genetic diversity in a sample to demographic history of the population from which it was taken. It normally assumes neutrality, and so sequences from more neutrally evolving portions of genomes are therefore selected for such analyses. It can be used to infer the relationships between species (phylogenetics), as well as the population structure, demographic history (e.g. population bottlenecks, population growth), biological dispersal, source–sink dynamics and introgression within a species.
Another approach to demographic inference relies on the allele frequency spectrum.
Evolution of genetic systems
By assuming that there are loci that control the genetic system itself, population genetic models are created to describe the evolution of dominance and other forms of robustness, the evolution of sexual reproduction and recombination rates, the evolution of mutation rates, the evolution of evolutionary capacitors, the evolution of costly signalling traits, the evolution of ageing, and the evolution of co-operation. For example, most mutations are deleterious, so the optimal mutation rate for a species may be a trade-off between the damage from a high deleterious mutation rate and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes.
One important aspect of such models is that selection is only strong enough to purge deleterious mutations and hence overpower mutational bias towards degradation if the selection coefficient s is greater than the inverse of the effective population size. This is known as the drift barrier and is related to the nearly neutral theory of molecular evolution. Drift barrier theory predicts that species with large effective population sizes will have highly streamlined, efficient genetic systems, while those with small population sizes will have bloated and complex genomes containing for example introns and transposable elements. However, somewhat paradoxically, species with large population sizes might be so tolerant to the consequences of certain types of errors that they evolve higher error rates, e.g. in transcription and translation, than small populations.
See also
References
External links
Population Genetics Tutorials (archived 23 January 2015)
Molecular population genetics
The ALlele FREquency Database at Yale University
EHSTRAFD.org – Earth Human STR Allele Frequencies Database (archived 13 July 2009)
History of population genetics
How Selection Changes the Genetic Composition of Population, video of lecture by Stephen C. Stearns (Yale University)
National Geographic: Atlas of the Human Journey (Haplogroup-based human migration maps)
Natural selection | Natural selection is the differential survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularised the term "natural selection", contrasting it with artificial selection, which is intentional, whereas natural selection is not.
Variation of traits, both genotypic and phenotypic, exists within all populations of organisms. However, some traits are more likely to facilitate survival and reproductive success, and these traits are therefore passed on to the next generation. Such traits can also become more common within a population if the environment that favours them remains stable. If new traits become favoured due to changes in a specific niche, microevolution occurs; if new traits become favoured due to changes in the broader environment, macroevolution occurs. Sometimes, new species can arise, especially if these new traits are radically different from those possessed by their predecessors.
The likelihood of these traits being 'selected' and passed down is determined by many factors. Some traits are likely to be passed down because they adapt organisms well to their environments. Others are passed down because they are actively preferred by mating partners, which is known as sexual selection. Traits that confer the lowest cost to female reproductive health may also be favoured, which is known as fecundity selection.
Natural selection is a cornerstone of modern biology. The concept, published by Darwin and Alfred Russel Wallace in a joint presentation of papers in 1858, was elaborated in Darwin's influential 1859 book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. He described natural selection as analogous to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favoured for reproduction. The concept of natural selection originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, science had yet to develop modern theories of genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical genetics formed the modern synthesis of the mid-20th century. The addition of molecular genetics has led to evolutionary developmental biology, which explains evolution at the molecular level. While genotypes can slowly change by random genetic drift, natural selection remains the primary explanation for adaptive evolution.
Historical development
Pre-Darwinian theories
Several philosophers of the classical era, including Empedocles and his intellectual successor, the Roman poet Lucretius, expressed the idea that nature produces a huge variety of creatures, randomly, and that only those creatures that manage to provide for themselves and reproduce successfully persist. Empedocles' idea that organisms arose entirely by the incidental workings of causes such as heat and cold was criticised by Aristotle in Book II of Physics. He posited natural teleology in its place, and believed that form was achieved for a purpose, citing the regularity of heredity in species as proof. Nevertheless, he accepted in his biology that new types of animals, monstrosities (τερας), can occur in very rare instances (Generation of Animals, Book IV). As quoted in Darwin's 1872 edition of The Origin of Species, Aristotle considered whether different forms (e.g., of teeth) might have appeared accidentally, but only the useful forms survived:
But Aristotle rejected this possibility in the next paragraph, making clear that he is talking about the development of animals as embryos with the phrase "either invariably or normally come about", not the origin of species:
The struggle for existence was later described by the Islamic writer Al-Jahiz in the 9th century, particularly in the context of top-down population regulation, but not in reference to individual variation or natural selection.
At the turn of the 16th century Leonardo da Vinci collected a set of fossils of ammonites as well as other biological material. He extensively reasoned in his writings that the shapes of animals are not given once and forever by the "upper power" but instead are generated in different forms naturally and then selected for reproduction by their compatibility with the environment.
The more recent classical arguments were reintroduced in the 18th century by Pierre Louis Maupertuis and others, including Darwin's grandfather, Erasmus Darwin.
Until the early 19th century, the prevailing view in Western societies was that differences between individuals of a species were uninteresting departures from their Platonic ideals (or typus) of created kinds. However, the theory of uniformitarianism in geology promoted the idea that simple, weak forces could act continuously over long periods of time to produce radical changes in the Earth's landscape. The success of this theory raised awareness of the vast scale of geological time and made plausible the idea that tiny, virtually imperceptible changes in successive generations could produce consequences on the scale of differences between species.
The early 19th-century zoologist Jean-Baptiste Lamarck suggested the inheritance of acquired characteristics as a mechanism for evolutionary change; adaptive traits acquired by an organism during its lifetime could be inherited by that organism's progeny, eventually causing transmutation of species. This theory, Lamarckism, was an influence on the Soviet biologist Trofim Lysenko's ill-fated antagonism to mainstream genetic theory as late as the mid-20th century.
Between 1835 and 1837, the zoologist Edward Blyth worked on the area of variation, artificial selection, and how a similar process occurs in nature. Darwin acknowledged Blyth's ideas in the first chapter on variation of On the Origin of Species.
Darwin's theory
In 1859, Charles Darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. He defined natural selection as the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. If the variations are heritable, then differential reproductive success leads to the evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species.
Darwin's ideas were inspired by the observations that he had made on the second voyage of HMS Beagle (1831–1836), and by the work of a political economist, Thomas Robert Malthus, who, in An Essay on the Principle of Population (1798), noted that population (if unchecked) increases exponentially, whereas the food supply grows only arithmetically; thus, inevitable limitations of resources would have demographic implications, leading to a "struggle for existence". When Darwin read Malthus in 1838 he was already primed by his work as a naturalist to appreciate the "struggle for existence" in nature. It struck him that as population outgrew resources, "favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The result of this would be the formation of new species." Darwin wrote:
Once he had his theory, Darwin was meticulous about gathering and refining evidence before making his idea public. He was in the process of writing his "big book" to present his research when the naturalist Alfred Russel Wallace independently conceived of the principle and described it in an essay he sent to Darwin to forward to Charles Lyell. Lyell and Joseph Dalton Hooker decided to present his essay together with unpublished writings that Darwin had sent to fellow naturalists, and On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection was read to the Linnean Society of London announcing co-discovery of the principle in July 1858. Darwin published a detailed account of his evidence and conclusions in On the Origin of Species in 1859. In the 3rd edition of 1861 Darwin acknowledged that others—like William Charles Wells in 1813, and Patrick Matthew in 1831—had proposed similar ideas, but had neither developed them nor presented them in notable scientific publications.
Darwin thought of natural selection by analogy to how farmers select crops or livestock for breeding, which he called "artificial selection"; in his early manuscripts he referred to a "Nature" which would do the selection. At the time, other mechanisms of evolution such as evolution by genetic drift were not yet explicitly formulated, and Darwin believed that selection was likely only part of the story: "I am convinced that Natural Selection has been the main but not exclusive means of modification." In a letter to Charles Lyell in September 1860, Darwin regretted the use of the term "Natural Selection", preferring the term "Natural Preservation".
For Darwin and his contemporaries, natural selection was in essence synonymous with evolution by natural selection. After the publication of On the Origin of Species, educated people generally accepted that evolution had occurred in some form. However, natural selection remained controversial as a mechanism, partly because it was perceived to be too weak to explain the range of observed characteristics of living organisms, and partly because even supporters of evolution balked at its "unguided" and non-progressive nature, a response that has been characterised as the single most significant impediment to the idea's acceptance. However, some thinkers enthusiastically embraced natural selection; after reading Darwin, Herbert Spencer introduced the phrase survival of the fittest, which became a popular summary of the theory. The fifth edition of On the Origin of Species published in 1869 included Spencer's phrase as an alternative to natural selection, with credit given: "But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient." Although the phrase is still often used by non-biologists, modern biologists avoid it because it is tautological if "fittest" is read to mean "functionally superior" and is applied to individuals rather than considered as an averaged quantity over populations.
The modern synthesis
Natural selection relies crucially on the idea of heredity, but it was developed before the basic concepts of genetics were known. Although the Moravian monk Gregor Mendel, the father of modern genetics, was a contemporary of Darwin's, his work lay in obscurity until it was rediscovered in 1900. With the early 20th-century integration of evolution with Mendel's laws of inheritance, the so-called modern synthesis, scientists generally came to accept natural selection. The synthesis grew from advances in different fields. Ronald Fisher developed the required mathematical language and wrote The Genetical Theory of Natural Selection (1930). J. B. S. Haldane introduced the concept of the "cost" of natural selection.
Sewall Wright elucidated the nature of selection and adaptation.
In his book Genetics and the Origin of Species (1937), Theodosius Dobzhansky established the idea that mutation, once seen as a rival to selection, actually supplied the raw material for natural selection by creating genetic diversity.
Ernst Mayr recognised the key importance of reproductive isolation for speciation in his Systematics and the Origin of Species (1942). W. D. Hamilton conceived of kin selection in 1964. This synthesis cemented natural selection as the foundation of evolutionary theory, where it remains today.
A second synthesis
A second synthesis was brought about at the end of the 20th century by advances in molecular genetics, creating the field of evolutionary developmental biology ("evo-devo"), which seeks to explain the evolution of form in terms of the genetic regulatory programs which control the development of the embryo at the molecular level. Natural selection is here understood to act on embryonic development to change the morphology of the adult body.
Terminology
The term natural selection is most often defined to operate on heritable traits, because these directly participate in evolution. However, natural selection is "blind" in the sense that changes in phenotype can give a reproductive advantage regardless of whether or not the trait is heritable. Following Darwin's primary usage, the term is used to refer both to the evolutionary consequence of blind selection and to its mechanisms. It is sometimes helpful to explicitly distinguish between selection's mechanisms and its effects; when this distinction is important, scientists define "(phenotypic) natural selection" specifically as "those mechanisms that contribute to the selection of individuals that reproduce", without regard to whether the basis of the selection is heritable. Traits that cause greater reproductive success of an organism are said to be selected for, while those that reduce success are selected against.
Mechanism
Heritable variation, differential reproduction
Natural variation occurs among the individuals of any population of organisms. Some differences may improve an individual's chances of surviving and reproducing such that its lifetime reproductive rate is increased, which means that it leaves more offspring. If the traits that give these individuals a reproductive advantage are also heritable, that is, passed from parent to offspring, then there will be differential reproduction, that is, a slightly higher proportion of individuals bearing the advantageous trait in the next generation. Even if the reproductive advantage is very slight, over many generations any advantageous heritable trait comes to predominate in the population, as the simulation sketched below illustrates. In this way the natural environment of an organism "selects for" traits that confer a reproductive advantage, causing evolutionary change, as Darwin described. This gives the appearance of purpose, but in natural selection there is no intentional choice. Artificial selection is purposive where natural selection is not, though biologists often use teleological language to describe it.
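The compounding of a slight advantage can be made concrete with a minimal sketch (not from the source text): carriers of a trait leave 1% more offspring on average, and we track the trait's frequency generation by generation. All parameter values are illustrative.

```python
# Minimal sketch: how a slight heritable reproductive advantage
# compounds over generations. Carriers of the trait have relative
# fitness 1.01 vs 1.00; the numbers are purely illustrative.

def generations_to_predominate(s=0.01, p0=0.01, threshold=0.99):
    """Deterministic selection on a two-type (haploid) population."""
    p, generations = p0, 0
    while p < threshold:
        mean_fitness = p * (1 + s) + (1 - p)
        p = p * (1 + s) / mean_fitness  # trait frequency after selection
        generations += 1
    return generations

# A 1% advantage carries the trait from 1% to 99% of the population
# in roughly 900 generations: slight, but inexorable.
print(generations_to_predominate())
```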
The peppered moth exists in both light and dark colours in Great Britain, but during the Industrial Revolution, many of the trees on which the moths rested became blackened by soot, giving the dark-coloured moths an advantage in hiding from predators. This gave dark-coloured moths a better chance of surviving to produce dark-coloured offspring, and in just fifty years from the first dark moth being caught, nearly all of the moths in industrial Manchester were dark. The balance was reversed by the effect of the Clean Air Act 1956, and the dark moths became rare again, demonstrating the influence of natural selection on peppered moth evolution. A recent study, using image analysis and avian vision models, showed that pale individuals more closely match lichen backgrounds than dark morphs and, for the first time, quantified how the moths' camouflage affects their risk of predation.
Fitness
The concept of fitness is central to natural selection. In broad terms, individuals that are more "fit" have better potential for survival, as in the well-known phrase "survival of the fittest", but the precise meaning of the term is much more subtle. Modern evolutionary theory defines fitness not by how long an organism lives, but by how successful it is at reproducing. If an organism lives half as long as others of its species, but has twice as many offspring surviving to adulthood, its genes become more common in the adult population of the next generation. Though natural selection acts on individuals, the effects of chance mean that fitness can only really be defined "on average" for the individuals within a population. The fitness of a particular genotype corresponds to the average effect on all individuals with that genotype.
A distinction must be made between the concept of "survival of the fittest" and "improvement in fitness". "Survival of the fittest" does not give an "improvement in fitness"; it only represents the removal of the less fit variants from a population. A mathematical example of "survival of the fittest" is given by Haldane in his paper "The Cost of Natural Selection". Haldane called this process "substitution", or, as it is more commonly called in biology, "fixation". This is correctly described by the differential survival and reproduction of individuals due to differences in phenotype. By contrast, "improvement in fitness" does not depend on differential survival and reproduction; it depends on the absolute survival of the particular variant, because the probability of a beneficial mutation occurring in some member of a population depends on the total number of replications of that variant. The mathematics of "improvement in fitness" was described by Kleinman. An empirical example is given by the Kishony mega-plate experiment, in which a new variant capable of growing in the next-higher drug concentration region appears only once the existing variant has accumulated a sufficient number of replications; fixation or substitution is not required for this "improvement in fitness". "Improvement in fitness" can, however, occur in an environment where "survival of the fittest" is also acting, as in Richard Lenski's classic E. coli long-term evolution experiment, an example of adaptation in a competitive environment ("improvement in fitness" during "survival of the fittest"). There, the probability of a beneficial mutation occurring in some member of the lineage is slowed by the competition: a candidate variant must first out-compete the "less fit" variants in order to accumulate the requisite number of replications for there to be a reasonable probability of that beneficial mutation occurring.
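The dependence of "improvement in fitness" on the number of replications can be made concrete with a back-of-the-envelope calculation; the mutation rate below is a typical per-site bacterial figure chosen purely for illustration, not a value taken from Haldane, Kleinman, or the experiments named above.

```python
# Illustrative calculation: probability that a specific beneficial
# mutation (rate mu per replication) arises at least once among N
# replications of a variant.

mu = 1e-9
for n in (1e6, 1e9, 1e10):
    p_at_least_once = 1 - (1 - mu) ** n
    print(f"N = {n:.0e}: P(beneficial mutation occurs) = {p_at_least_once:.3f}")

# Output: 0.001 at a million replications, 0.632 at a billion, and
# near-certainty at ten billion -- which is why competition that
# suppresses a variant's replication also slows its further adaptation.
```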
Competition
In biology, competition is an interaction between organisms in which the fitness of one is lowered by the presence of another. This may be because both rely on a limited supply of a resource such as food, water, or territory. Competition may be within or between species, and may be direct or indirect. Species less suited to compete should in theory either adapt or die out, since competition plays a powerful role in natural selection, but according to the "room to roam" theory it may be less important than expansion among larger clades.
Competition is modelled by r/K selection theory, which is based on Robert MacArthur and E. O. Wilson's work on island biogeography. In this theory, selective pressures drive evolution in one of two stereotyped directions: r- or K-selection. These terms, r and K, can be illustrated in a logistic model of population dynamics:

dN/dt = rN(1 − N/K)

where r is the growth rate of the population (N), and K is the carrying capacity of its local environmental setting. Typically, r-selected species exploit empty niches and produce many offspring, each with a relatively low probability of surviving to adulthood. In contrast, K-selected species are strong competitors in crowded niches and invest more heavily in much fewer offspring, each with a relatively high probability of surviving to adulthood.
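A minimal numerical sketch of the logistic model above follows; the growth rates, carrying capacity, and starting size are arbitrary illustrative values, not figures from MacArthur and Wilson.

```python
# Euler integration of the logistic equation dN/dt = r*N*(1 - N/K).

def logistic_final_size(r, K=1000.0, N0=10.0, dt=0.1, steps=100):
    N = N0
    for _ in range(steps):
        N += r * N * (1 - N / K) * dt
    return N

# A high-r population saturates its niche quickly; a low-r population
# approaches the carrying capacity far more slowly.
print(f"high r: N = {logistic_final_size(r=1.5):.0f}")
print(f"low  r: N = {logistic_final_size(r=0.3):.0f}")
```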
Classification
Natural selection can act on any heritable phenotypic trait, and selective pressure can be produced by any aspect of the environment, including sexual selection and competition with members of the same or other species. However, this does not imply that natural selection is always directional and results in adaptive evolution; natural selection often results in the maintenance of the status quo by eliminating less fit variants.
Selection can be classified in several different ways, such as by its effect on a trait, on genetic diversity, by the life cycle stage where it acts, by the unit of selection, or by the resource being competed for.
By effect on a trait
Selection has different effects on traits. Stabilizing selection acts to hold a trait at a stable optimum, and in the simplest case all deviations from this optimum are selectively disadvantageous. Directional selection favours extreme values of a trait. The uncommon disruptive selection also acts during transition periods when the current mode is sub-optimal, but alters the trait in more than one direction. In particular, if the trait is quantitative and univariate then both higher and lower trait levels are favoured. Disruptive selection can be a precursor to speciation.
By effect on genetic diversity
Alternatively, selection can be divided according to its effect on genetic diversity. Purifying or negative selection acts to remove genetic variation from the population (and is opposed by de novo mutation, which introduces new variation). In contrast, balancing selection acts to maintain genetic variation in a population, even in the absence of de novo mutation, by negative frequency-dependent selection. One mechanism for this is heterozygote advantage, where individuals with two different alleles have a selective advantage over individuals with just one type of allele. The polymorphism at the human ABO blood group locus has been explained in this way.
By life cycle stage
Another option is to classify selection by the life cycle stage at which it acts. Some biologists recognise just two types: viability (or survival) selection, which acts to increase an organism's probability of survival, and fecundity (or fertility or reproductive) selection, which acts to increase the rate of reproduction, given survival. Others split the life cycle into further components of selection. Thus viability and survival selection may be defined separately and respectively as acting to improve the probability of survival before and after reproductive age is reached, while fecundity selection may be split into additional sub-components including sexual selection, gametic selection, acting on gamete survival, and compatibility selection, acting on zygote formation.
By unit of selection
Selection can also be classified by the level or unit of selection. Individual selection acts on the individual, in the sense that adaptations are "for" the benefit of the individual, and result from selection among individuals. Gene selection acts directly at the level of the gene. In kin selection and intragenomic conflict, gene-level selection provides a more apt explanation of the underlying process. Group selection, if it occurs, acts on groups of organisms, on the assumption that groups replicate and mutate in an analogous way to genes and individuals. There is an ongoing debate over the degree to which group selection occurs in nature.
By resource being competed for
Finally, selection can be classified according to the resource being competed for. Sexual selection results from competition for mates. Sexual selection typically proceeds via fecundity selection, sometimes at the expense of viability. Ecological selection is natural selection via any means other than sexual selection, such as kin selection, competition, and infanticide. Following Darwin, natural selection is sometimes defined as ecological selection, in which case sexual selection is considered a separate mechanism.
Sexual selection as first articulated by Darwin (using the example of the peacock's tail) refers specifically to competition for mates, which can be intrasexual, between individuals of the same sex, that is male–male competition, or intersexual, where one sex chooses mates, most often with males displaying and females choosing. However, in some species, mate choice is primarily by males, as in some fishes of the family Syngnathidae.
Phenotypic traits can be displayed in one sex and desired in the other sex, causing a positive feedback loop called a Fisherian runaway, for example, the extravagant plumage of some male birds such as the peacock. An alternative theory, also proposed by Ronald Fisher in 1930, is the sexy son hypothesis: mothers that choose promiscuous fathers for their children tend to have promiscuous sons, and hence large numbers of grandchildren. Aggression between members of the same sex is sometimes associated with very distinctive features, such as the antlers of stags, which are used in combat with other stags. More generally, intrasexual selection is often associated with sexual dimorphism, including differences in body size between males and females of a species.
Arms races
Natural selection is seen in action in the development of antibiotic resistance in microorganisms. Since the discovery of penicillin in 1928, antibiotics have been used to fight bacterial diseases. The widespread misuse of antibiotics has selected for microbial resistance to antibiotics in clinical use, to the point that the methicillin-resistant Staphylococcus aureus (MRSA) has been described as a "superbug" because of the threat it poses to health and its relative invulnerability to existing drugs. Response strategies typically include the use of different, stronger antibiotics; however, new strains of MRSA have recently emerged that are resistant even to these drugs. This is an evolutionary arms race, in which bacteria develop strains less susceptible to antibiotics, while medical researchers attempt to develop new antibiotics that can kill them. A similar situation occurs with pesticide resistance in plants and insects. Arms races are not necessarily induced by man; a well-documented example involves the spread of a gene in the butterfly Hypolimnas bolina suppressing male-killing activity by Wolbachia bacteria parasites on the island of Samoa, where the spread of the gene is known to have occurred over a period of just five years.
Evolution by means of natural selection
A prerequisite for natural selection to result in adaptive evolution, novel traits and speciation is the presence of heritable genetic variation that results in fitness differences. Genetic variation is the result of mutations, genetic recombinations and alterations in the karyotype (the number, shape, size and internal arrangement of the chromosomes). Any of these changes might have an effect that is highly advantageous or highly disadvantageous, but large effects are rare. In the past, most changes in the genetic material were considered neutral or close to neutral because they occurred in noncoding DNA or resulted in a synonymous substitution. However, many mutations in non-coding DNA have deleterious effects. Although both mutation rates and average fitness effects of mutations are dependent on the organism, a majority of mutations in humans are slightly deleterious.
Some mutations occur in "toolkit" or regulatory genes. Changes in these often have large effects on the phenotype of the individual because they regulate the function of many other genes. Most, but not all, mutations in regulatory genes result in non-viable embryos. Some nonlethal regulatory mutations occur in HOX genes in humans, which can result in a cervical rib or polydactyly, an increase in the number of fingers or toes. When such mutations result in a higher fitness, natural selection favours these phenotypes and the novel trait spreads in the population.
Established traits are not immutable; traits that have high fitness in one environmental context may be much less fit if environmental conditions change. In the absence of natural selection to preserve such a trait, it becomes more variable and deteriorates over time, possibly resulting in a vestigial manifestation of the trait, also called evolutionary baggage. In many circumstances, the apparently vestigial structure may retain a limited functionality, or may be co-opted for other advantageous traits in a phenomenon known as preadaptation. A famous example of a vestigial structure, the eye of the blind mole-rat, is believed to retain function in photoperiod perception.
Speciation
Speciation requires a degree of reproductive isolation—that is, a reduction in gene flow. However, it is intrinsic to the concept of a species that hybrids are selected against, opposing the evolution of reproductive isolation, a problem that was recognised by Darwin. The problem does not occur in allopatric speciation with geographically separated populations, which can diverge with different sets of mutations. E. B. Poulton realized in 1903 that reproductive isolation could evolve through divergence, if each lineage acquired a different, incompatible allele of the same gene. Selection against the heterozygote would then directly create reproductive isolation, leading to the Bateson–Dobzhansky–Muller model, further elaborated by H. Allen Orr and Sergey Gavrilets. With reinforcement, however, natural selection can favor an increase in pre-zygotic isolation, influencing the process of speciation directly.
Genetic basis
Genotype and phenotype
Natural selection acts on an organism's phenotype, or physical characteristics. Phenotype is determined by an organism's genetic make-up (genotype) and the environment in which the organism lives. When different organisms in a population possess different versions of a gene for a certain trait, each of these versions is known as an allele. It is this genetic variation that underlies differences in phenotype. An example is the ABO blood type antigens in humans, where three alleles govern the phenotype.
Some traits are governed by only a single gene, but most traits are influenced by the interactions of many genes. A variation in one of the many genes that contributes to a trait may have only a small effect on the phenotype; together, these genes can produce a continuum of possible phenotypic values.
Directionality of selection
When some component of a trait is heritable, selection alters the frequencies of the different alleles, or variants of the gene that produces the variants of the trait. Selection can be divided into three classes, on the basis of its effect on allele frequencies: directional, stabilizing, and disruptive selection. Directional selection occurs when an allele has a greater fitness than others, so that it increases in frequency, gaining an increasing share in the population. This process can continue until the allele is fixed and the entire population shares the fitter phenotype. Far more common is stabilizing selection, which lowers the frequency of alleles that have a deleterious effect on the phenotype—that is, produce organisms of lower fitness. This process can continue until the allele is eliminated from the population. Stabilizing selection conserves functional genetic features, such as protein-coding genes or regulatory sequences, over time by selective pressure against deleterious variants. Disruptive (or diversifying) selection is selection favoring extreme trait values over intermediate trait values. Disruptive selection may cause sympatric speciation through niche partitioning.
Some forms of balancing selection do not result in fixation, but maintain an allele at intermediate frequencies in a population. This can occur in diploid species (with pairs of chromosomes) when heterozygous individuals (with just one copy of the allele) have a higher fitness than homozygous individuals (with two copies). This is called heterozygote advantage or over-dominance, of which the best-known example is the resistance to malaria in humans heterozygous for sickle-cell anaemia. Maintenance of allelic variation can also occur through disruptive or diversifying selection, which favours genotypes that depart from the average in either direction (that is, the opposite of over-dominance), and can result in a bimodal distribution of trait values. Finally, balancing selection can occur through frequency-dependent selection, where the fitness of one particular phenotype depends on the distribution of other phenotypes in the population. The principles of game theory have been applied to understand the fitness distributions in these situations, particularly in the study of kin selection and the evolution of reciprocal altruism.
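The stable intermediate frequency that heterozygote advantage produces can be derived with a standard textbook calculation; the sketch below uses the conventional fitness parametrisation and is offered as illustration rather than drawn from this text.

```latex
% Textbook sketch: equilibrium under heterozygote advantage.
% Let the genotype fitnesses be
%   w_{AA} = 1 - s, \qquad w_{Aa} = 1, \qquad w_{aa} = 1 - t .
% The frequency p of allele A stops changing when the selective
% losses through the two homozygotes balance:
p\,s = (1 - p)\,t
\quad\Longrightarrow\quad
\hat{p} = \frac{t}{s + t}.
```

In the sickle-cell case, s and t would correspond to the disadvantages of the two homozygotes in a malarial region, and the allele persists at the intermediate frequency rather than being eliminated.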
Selection, genetic variation, and drift
A portion of all genetic variation is functionally neutral, producing no phenotypic effect or significant difference in fitness. Motoo Kimura's neutral theory of molecular evolution by genetic drift proposes that this variation accounts for a large fraction of observed genetic diversity. Neutral events can radically reduce genetic variation through population bottlenecks, which among other things can cause the founder effect in initially small new populations. When genetic variation does not result in differences in fitness, selection cannot directly affect the frequency of such variation. As a result, the genetic variation at those sites is higher than at sites where variation does influence fitness. However, after a period with no new mutations, the genetic variation at these sites is eliminated due to genetic drift. Natural selection reduces genetic variation by eliminating maladapted individuals, and consequently the mutations that caused the maladaptation. At the same time, new mutations occur, resulting in a mutation–selection balance. The exact outcome of the two processes depends both on the rate at which new mutations occur and on the strength of the natural selection, which is a function of how unfavourable the mutation proves to be.
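The mutation–selection balance mentioned above has a simple classical form; the expressions below are standard population-genetics results, included for illustration rather than drawn from this text.

```latex
% Classical mutation-selection balance: a deleterious allele arising
% by mutation at rate u per generation and removed by selection of
% strength s equilibrates near
\hat{q} \approx \frac{u}{s} \quad \text{(selection acting on heterozygotes)},
\qquad
\hat{q} \approx \sqrt{\frac{u}{s}} \quad \text{(fully recessive allele)}.
```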
Genetic linkage occurs when the loci of two alleles are close on a chromosome. During the formation of gametes, recombination reshuffles the alleles. The chance that such a reshuffle occurs between two alleles increases with the distance between them, so closely spaced loci tend to be inherited together. Selective sweeps occur when an allele becomes more common in a population as a result of positive selection. As the prevalence of one allele increases, closely linked alleles can also become more common by "genetic hitchhiking", whether they are neutral or even slightly deleterious. A strong selective sweep results in a region of the genome where the positively selected haplotype (the allele and its neighbours) is in essence the only one that exists in the population. Selective sweeps can be detected by measuring linkage disequilibrium, or whether a given haplotype is overrepresented in the population. Since a selective sweep also results in selection of neighbouring alleles, the presence of a block of strong linkage disequilibrium might indicate a 'recent' selective sweep near the centre of the block.
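The relationship between distance and recombination is often summarised by Haldane's mapping function, a standard model given here purely as illustration; it assumes crossovers fall at random (Poisson-distributed) along the chromosome.

```latex
% Haldane's mapping function: recombination fraction r between two
% loci separated by map distance d (in Morgans):
r(d) = \tfrac{1}{2}\left(1 - e^{-2d}\right),
% so r \approx d for tightly linked loci, rising toward the
% free-recombination limit r = 1/2 as d grows.
```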
Background selection is the opposite of a selective sweep. If a specific site experiences strong and persistent purifying selection, linked variation tends to be weeded out along with it, producing a region in the genome of low overall variability. Because background selection is a result of deleterious new mutations, which can occur randomly in any haplotype, it does not produce clear blocks of linkage disequilibrium, although with low recombination it can still lead to slightly negative linkage disequilibrium overall.
Impact
Darwin's ideas, along with those of Adam Smith and Karl Marx, had a profound influence on 19th century thought, including his radical claim that "elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner" evolved from the simplest forms of life by a few simple principles. This inspired some of Darwin's most ardent supporters—and provoked the strongest opposition. Natural selection had the power, according to Stephen Jay Gould, to "dethrone some of the deepest and most traditional comforts of Western thought", such as the belief that humans have a special place in the world.
In the words of the philosopher Daniel Dennett, "Darwin's dangerous idea" of evolution by natural selection is a "universal acid," which cannot be kept restricted to any vessel or container, as it soon leaks out, working its way into ever-wider surroundings. Thus, in the last decades, the concept of natural selection has spread from evolutionary biology to other disciplines, including evolutionary computation, quantum Darwinism, evolutionary economics, evolutionary epistemology, evolutionary psychology, and cosmological natural selection. This unlimited applicability has been called universal Darwinism.
Origin of life
How life originated from inorganic matter remains an unresolved problem in biology. One prominent hypothesis is that life first appeared in the form of short self-replicating RNA polymers. On this view, life may have come into existence when RNA chains first experienced the basic conditions, as conceived by Charles Darwin, for natural selection to operate. These conditions are: heritability, variation of type, and competition for limited resources. The fitness of an early RNA replicator would likely have been a function of adaptive capacities that were intrinsic (i.e., determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities could logically have been: (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type), (2) the capacity to avoid decay, and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations (including those configurations with ribozyme activity) of the RNA replicators that, in turn, would have been encoded in their individual nucleotide sequences.
Cell and molecular biology
In 1881, the embryologist Wilhelm Roux published Der Kampf der Theile im Organismus (The Struggle of Parts in the Organism) in which he suggested that the development of an organism results from a Darwinian competition between the parts of the embryo, occurring at all levels, from molecules to organs. In recent years, a modern version of this theory has been proposed by Jean-Jacques Kupiec. According to this cellular Darwinism, random variation at the molecular level generates diversity in cell types whereas cell interactions impose a characteristic order on the developing embryo.
Social and psychological theory
The social implications of the theory of evolution by natural selection also became the source of continuing controversy. Friedrich Engels, a German political philosopher and co-originator of the ideology of communism, wrote in 1872 that "Darwin did not know what a bitter satire he wrote on mankind, and especially on his countrymen, when he showed that free competition, the struggle for existence, which the economists celebrate as the highest historical achievement, is the normal state of the animal kingdom." Herbert Spencer and the eugenics advocate Francis Galton's interpretation of natural selection as necessarily progressive, leading to supposed advances in intelligence and civilisation, became a justification for colonialism, eugenics, and social Darwinism. For example, in 1940, Konrad Lorenz, in writings that he subsequently disowned, used the theory as a justification for policies of the Nazi state. He wrote "... selection for toughness, heroism, and social utility ... must be accomplished by some human institution, if mankind, in default of selective factors, is not to be ruined by domestication-induced degeneracy. The racial idea as the basis of our state has already accomplished much in this respect." Others have developed ideas that human societies and culture evolve by mechanisms analogous to those that apply to evolution of species.
More recently, work among anthropologists and psychologists has led to the development of sociobiology and later of evolutionary psychology, a field that attempts to explain features of human psychology in terms of adaptation to the ancestral environment. The most prominent example of evolutionary psychology, notably advanced in the early work of Noam Chomsky and later by Steven Pinker, is the hypothesis that the human brain has adapted to acquire the grammatical rules of natural language. Other aspects of human behaviour and social structures, from specific cultural norms such as incest avoidance to broader patterns such as gender roles, have been hypothesised to have similar origins as adaptations to the early environment in which modern humans evolved. By analogy to the action of natural selection on genes, the concept of memes—"units of cultural transmission," or culture's equivalents of genes undergoing selection and recombination—has arisen, first described in this form by Richard Dawkins in 1976 and subsequently expanded upon by philosophers such as Daniel Dennett as explanations for complex cultural activities, including human consciousness.
Information and systems theory
In 1922, Alfred J. Lotka proposed that natural selection might be understood as a physical principle that could be described in terms of the use of energy by a system, a concept later developed by Howard T. Odum as the maximum power principle in thermodynamics, whereby evolutionary systems with selective advantage maximise the rate of useful energy transformation.
The principles of natural selection have inspired a variety of computational techniques, such as "soft" artificial life, that simulate selective processes and can be highly efficient in 'adapting' entities to an environment defined by a specified fitness function. For example, a class of heuristic optimisation algorithms known as genetic algorithms, pioneered by John Henry Holland in the 1970s and expanded upon by David E. Goldberg, identify optimal solutions by simulated reproduction and mutation of a population of solutions defined by an initial probability distribution. Such algorithms are particularly useful when applied to problems whose energy landscape is very rough or has many local minima.
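As a concrete sketch of the idea, a minimal genetic algorithm of the kind named above is given below on a toy "one-max" problem; the genome length, rates, and fitness function are arbitrary illustrative choices, not anything attributed to Holland or Goldberg.

```python
# Minimal genetic-algorithm sketch: score a population of bit strings,
# choose parents in proportion to fitness, and produce offspring by
# crossover and mutation. All parameters are illustrative.
import random

GENOME_LEN, POP_SIZE, MUT_RATE, GENERATIONS = 32, 50, 0.01, 100

def fitness(genome):
    return sum(genome)  # toy objective: maximise the number of 1-bits

def select_parents(population):
    # fitness-proportionate ("roulette-wheel") selection of two parents
    weights = [fitness(g) + 1 for g in population]  # +1 avoids zero weights
    return random.choices(population, weights=weights, k=2)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(*select_parents(population)))
                  for _ in range(POP_SIZE)]
print("best fitness after evolution:", max(fitness(g) for g in population))
```

Selection, crossover, and mutation here mirror the biological mechanisms described earlier, with the fitness function playing the role of the environment.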
In fiction
Darwinian evolution by natural selection is pervasive in literature, whether taken optimistically in terms of how humanity may evolve towards perfection, or pessimistically in terms of the dire consequences of the interaction of human nature and the struggle for survival. Among major responses is Samuel Butler's 1872 pessimistic Erewhon ("nowhere", written mostly backwards). In 1893 H. G. Wells imagined "The Man of the Year Million", transformed by natural selection into a being with a huge head and eyes, and shrunken body.
Degrowth
Degrowth is an academic and social movement critical of the concept of growth in gross domestic product as a measure of human and economic development. The idea of degrowth is based on ideas and research from economic anthropology, ecological economics, environmental sciences, and development studies. It argues that modern capitalism's unitary focus on growth causes widespread ecological damage and is unnecessary for the further increase of human living standards. Degrowth theory has been met with both academic acclaim and considerable criticism.
Degrowth's main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being. Degrowth theorists posit that this would increase human living standards and ecological preservation even as GDP growth slows.
Degrowth theory is highly critical of free market capitalism, and it highlights the importance of extensive public services, care work, self-organization, commons, relational goods, community, and work sharing.
Degrowth theory partly orients itself as a critique of green capitalism or as a radical alternative to the market-based, sustainable development goal (SDG) model of addressing ecological overshoot and environmental collapse.
A 2024 review of degrowth studies from the preceding decade found that most were of poor quality: almost 90% were opinions rather than analysis, few used quantitative or qualitative data, and fewer still used formal modelling; those that did relied on small samples or focused on non-representative cases. Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policy.
Background
The "degrowth" movement arose from concerns over the consequences of the productivism and consumerism associated with industrial societies (whether capitalist or socialist) including:
The reduced availability of energy sources (see peak oil);
The destabilization of Earth's ecosystems upon which all life on Earth depends (see Holocene Extinction, Anthropocene, global warming, pollution, current biodiversity loss);
The rise of negative societal side-effects (unsustainable development, poorer health, poverty); and
The ever-expanding use of resources by Global North countries to satisfy lifestyles that consume more food and energy, and produce greater waste, at the expense of the Global South (see neocolonialism).
A 2017 review of the research literature on degrowth found that it focused on three main goals: (1) reduction of environmental degradation; (2) redistribution of income and wealth locally and globally; (3) promotion of a social transition from economic materialism to participatory culture.
Decoupling
Decoupling denotes separating economic growth, usually measured as growth in GDP, GDP per capita, or GNI per capita, from the use of natural resources and greenhouse gas (GHG) emissions. Absolute decoupling refers to GDP growth coinciding with a reduction in natural resource use and GHG emissions, while relative decoupling describes an increase in resource use and GHG emissions that is lower than the increase in GDP. The degrowth movement heavily critiques this idea and argues that absolute decoupling is only possible for short periods, in specific locations, or with small mitigation rates. In 2021, the NGO European Environmental Bureau stated that "not only is there no empirical evidence supporting the existence of a decoupling of economic growth from environmental pressures on anywhere near the scale needed to deal with environmental breakdown", and that reported cases of eco-economic decoupling either depict relative decoupling and/or are observed only temporarily and/or only on a local scale, arguing that alternatives to eco-economic decoupling are needed. This is supported by several other studies which state that absolute decoupling is highly unlikely to be achieved fast enough to prevent global warming over 1.5 °C or 2 °C, even under optimistic policy conditions.
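In operational terms, the distinction comes down to comparing growth rates; the following sketch (with invented figures) shows how the definitions above classify a given period.

```python
# Illustrative classifier for the decoupling definitions above.
# The growth figures are invented for demonstration only.

def classify_decoupling(gdp_growth, impact_growth):
    """gdp_growth, impact_growth: fractional change per period."""
    if gdp_growth <= 0:
        return "no growth to decouple"
    if impact_growth < 0:
        return "absolute decoupling"   # impacts fall while GDP rises
    if impact_growth < gdp_growth:
        return "relative decoupling"   # impacts rise, but more slowly
    return "no decoupling"

print(classify_decoupling(0.03, -0.01))  # absolute decoupling
print(classify_decoupling(0.03, 0.01))   # relative decoupling
print(classify_decoupling(0.03, 0.04))   # no decoupling
```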
A major criticism of this view is that degrowth is politically unpalatable, so that policy defaults toward the more market-friendly green growth orthodoxy as a set of solutions that is politically tenable. The problems with the SDG process are political rather than technical, Ezra Klein of the New York Times argues in summarising these criticisms, and degrowth has less plausibility than green growth as a democratic political platform. However, a 2023 review of progress toward the Sustainable Development Goals by the Council on Foreign Relations found that progress on 50% of the minimum viable SDGs had stalled and that 30% had reversed (that is, were getting worse rather than better). Thus, while degrowth may be, in Klein's terms, a difficult sell to introduce via democratic voluntarism, the critique of SDGs and decoupling levelled by degrowth theorists against green capitalism appears to have predictive power.
Resource depletion
Degrowth proponents argue that economic expansion is necessarily accompanied by a corresponding increase in resource consumption. Non-renewable resources, like petroleum, have a limited supply and can eventually be exhausted. Similarly, renewable resources can also be depleted if they are harvested at unsustainable rates for prolonged periods. An example of this depletion is evident in the case of caviar production in the Caspian Sea.
Supporters of degrowth contend that reducing demand is the only permanent way to close the gap between demand and the supply of resources. To sustain renewable resources, both demand and production must be regulated to levels that avert depletion and ensure environmental sustainability. Transitioning to a society less reliant on oil is seen as crucial for averting societal collapse as non-renewable resources dwindle. Degrowth can also be interpreted as a plea for resource reallocation, aiming to halt unsustainable practices of transforming certain entities into resources, such as non-renewable natural resources. Instead, the focus shifts towards identifying and utilizing alternative resources, such as renewable human capabilities.
Ecological footprint
The ecological footprint measures human demand on the Earth's ecosystems by comparing human demand with the Earth's ecological capacity to regenerate. It represents the amount of biologically productive land and sea area required to regenerate the resources a human population consumes and to absorb and render harmless the corresponding waste.
According to a 2005 Global Footprint Network report, inhabitants of high-income countries live off of 6.4 global hectares (gHa), while those from low-income countries live off of a single gHa. For example, while each inhabitant of Bangladesh lives off of what they produce from 0.56 gHa, a North American requires 12.5 gHa. Each inhabitant of North America uses 22.3 times as much land as a Bangladeshi. According to the same report, the average number of global hectares per person was 2.1, while current consumption levels have reached 2.7 hectares per person. For the world's population to attain the living standards typical of European countries, the resources of between three and eight planet Earths would be required with current levels of efficiency and means of production. For world economic equality to be achieved with the currently available resources, proponents say rich countries would have to reduce their standard of living through degrowth. The constraints on resources would eventually lead to a forced reduction in consumption. A controlled reduction of consumption would reduce the trauma of this change, assuming no technological changes increase the planet's carrying capacity. Multiple studies now demonstrate that in many affluent countries per-capita energy consumption could be decreased substantially and quality living standards still be maintained.
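The arithmetic behind these comparisons is simple enough to verify directly; the snippet below re-derives the ratios from the figures quoted above.

```python
# Re-deriving the ratios from the 2005 Global Footprint Network
# figures as reported in the text.
north_american_gha, bangladeshi_gha = 12.5, 0.56
print(f"land-use ratio: {north_american_gha / bangladeshi_gha:.1f}x")  # ~22.3x

available_gha, consumed_gha = 2.1, 2.7  # global hectares per person
print(f"Earths needed at current consumption: {consumed_gha / available_gha:.2f}")
```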
Sustainable development
Degrowth ideology opposes all manifestations of productivism, which advocates that economic productivity and growth should be the primary objectives of human organization. Consequently, it stands in opposition to the prevailing model of sustainable development. While the concept of sustainability aligns with some aspects of degrowth philosophy, sustainable development, as conventionally understood, is based on mainstream development principles focused on augmenting economic growth and consumption. Degrowth views sustainable development as contradictory, because any development reliant on growth in a finite, environmentally stressed world is deemed intrinsically unsustainable.
Critics of degrowth argue that a slowing of economic growth would result in increased unemployment, increased poverty, and decreased income per capita. Many who believe in the negative environmental consequences of growth still advocate for economic growth in the South, even if not in the North. Degrowth proponents respond that merely slowing growth within the existing model would fail to deliver the benefits of degrowth, namely self-sufficiency and material responsibility, and would indeed lead to decreased employment. Rather, they advocate the complete abandonment of the current (growth) economic model, suggesting that relocalizing and abandoning the global economy in the Global South would allow people of the South to become more self-sufficient and would end the overconsumption and exploitation of Southern resources by the North. Supporters of degrowth view it as a potential method to shield ecosystems from human exploitation. Within this concept, there is an emphasis on communal stewardship of the environment, fostering a symbiotic relationship between humans and nature. Degrowth recognizes ecosystems as valuable entities beyond their utility as mere sources of resources. During the Second International Conference on degrowth, discussions encompassed concepts such as implementing a maximum wage and promoting open borders. Degrowth advocates an ethical shift that challenges the notion that high-resource-consumption lifestyles are desirable. Additionally, alternative perspectives on degrowth include addressing perceived historical injustices perpetrated by the global North through centuries of colonization and exploitation, advocating for wealth redistribution. Determining the appropriate scale of action remains a focal point of debate within degrowth movements.
Some researchers believe that the world is poised to experience a Great Transformation, either by disastrous events or intentional design. They maintain that ecological economics must incorporate Postdevelopment theories, Buen vivir, and degrowth to affect the change necessary to avoid these potentially catastrophic events.
A 2022 paper by Mark Diesendorf found that limiting global warming to 1.5 °C with no overshoot would require a reduction of energy consumption. It describes (chapters 4–5) degrowth toward a steady-state economy as possible and probably positive. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation."
"Rebound effect"
Technologies designed to reduce resource use and improve efficiency are often touted as sustainable or green solutions. Degrowth literature, however, warns about these technological advances due to the "rebound effect", also known as Jevons paradox. This concept is based on observations that when a less resource-exhaustive technology is introduced, behavior surrounding the use of that technology may change, and consumption of that technology could increase or even offset any potential resource savings. In light of the rebound effect, proponents of degrowth hold that the only effective "sustainable" solutions must involve a complete rejection of the growth paradigm and a move to a degrowth paradigm. There are also fundamental limits to technological solutions in the pursuit of degrowth, as all engagements with technology increase the cumulative matter-energy throughput. However, the convergence of digital commons of knowledge and design with distributed manufacturing technologies may arguably hold potential for building degrowth future scenarios.
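The rebound effect is easy to state numerically; the figures in the sketch below are invented purely for illustration.

```python
# Invented illustration of the rebound effect (Jevons paradox):
# per-unit efficiency improves by 25%, but usage grows by 40%, so
# total resource consumption rises instead of falling.
resource_per_unit_before, resource_per_unit_after = 1.00, 0.75
units_before, units_after = 100, 140

total_before = resource_per_unit_before * units_before  # 100.0
total_after = resource_per_unit_after * units_after     # 105.0
print(f"total resource use: {total_before:.0f} -> {total_after:.0f} "
      f"({total_after / total_before - 1:+.0%})")       # +5%
```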
Mitigation of climate change and determinants of 'growth'
Scientists report that degrowth scenarios, in which economic output either declines or declines in terms of contemporary economic metrics such as current GDP, have been neglected in considerations of 1.5 °C scenarios reported by the Intergovernmental Panel on Climate Change (IPCC), finding that the degrowth scenarios investigated "minimize many key risks for feasibility and sustainability compared to technology-driven pathways"; a core problem of such scenarios is their feasibility in the context of contemporary political decision-making and of globalized rebound and relocation effects. However, structural realignment of what counts as "economic growth" and of the mechanisms that determine socioeconomic activity may not be widely debated either in the degrowth community or in degrowth research, which largely focuses on reducing economic growth, either in general or through nonsystemic political interventions rather than structural alternatives. Similarly, many green growth advocates suggest that contemporary socioeconomic mechanisms and metrics, including those for economic growth, can be retained through nonstructural "energy-GDP decoupling". A study concluded that public services are associated with higher human need satisfaction and lower energy requirements, while contemporary forms of economic growth are linked with the opposite; the contemporary economic system is fundamentally misaligned with the twin goals of meeting human needs and ensuring ecological sustainability, suggesting that prioritizing human well-being and ecological sustainability would be preferable to pursuing growth in current economic metrics. The word "degrowth" was mentioned 28 times in the United Nations IPCC Sixth Assessment Report by Working Group III, published in April 2022.
Open Localism
Open localism is a concept that has been promoted by the degrowth community when envisioning an alternative set of social relations and economic organization. It builds upon the political philosophies of localism and is based on values such as diversity, ecologies of knowledge, and openness. Open localism does not look to create an enclosed community but rather to circulate production locally in an open and integrative manner.
Open localism is a direct challenge to the acts of closure regarding identitarian politics. By producing and consuming as much as possible locally, community members enhance their relationships with one another and the surrounding environment.
Degrowth's ideas around open localism share similarities with ideas around the commons while also having clear differences. On the one hand, open localism promotes localized, common production in cooperative-like styles similar to some versions of how commons are organized. On the other hand, open localism does not impose any set of rules or regulations creating a defined boundary, rather it favours a cosmopolitan approach.
Feminism
The degrowth movement builds on feminist economics that has criticized measures of economic growth like the GDP as it excludes work mainly done by women such as unpaid care work (the work performed to fulfill people's needs) and reproductive work (the work sustaining life), first argued by Marilyn Waring. Further, degrowth draws on the critique of socialist feminists like Silvia Federici and Nancy Fraser claiming that capitalist growth builds on the exploitation of women's work. Instead of devaluing it, degrowth centers the economy around care, proposing that care work should be organized as a commons.
Centering care goes hand in hand with changing society's time regimes. Degrowth scholars propose a reduction in working time. As this does not necessarily lead to gender justice, the redistribution of care work must be pushed equally hard. A concrete proposal by Frigga Haug is the 4-in-1 perspective, which proposes 4 hours of wage work per day, freeing time for 4 hours of care work, 4 hours of political activity in a direct democracy, and 4 hours of personal development through learning.
Furthermore, degrowth draws on materialist ecofeminisms that state the parallel of the exploitation of women and nature in growth-based societies and proposes a subsistence perspective conceptualized by Maria Mies and Ariel Salleh. Synergies and opportunities for cross-fertilization between degrowth and feminism were proposed in 2022, through networks including the Feminisms and Degrowth Alliance (FaDA). FaDA argued that the 2023 launch of Degrowth Journal created "a convivial space for generating and exploring knowledge and practice from diverse perspectives".
Decolonialism
A relevant concept within the theory of degrowth is decolonialism, which refers to putting an end to the perpetuation of political, social, economic, religious, racial, gender, and epistemological relations of power, domination, and hierarchy of the global north over the global south.
The foundation of this relationship lies in the claim that the imminent socio-ecological collapse is caused by capitalism, which is sustained by economic growth; this growth, in turn, can only be maintained through colonialism and extractivism, perpetuating asymmetric power relationships between territories. Colonialism is understood as the appropriation of common goods, resources, and labor, which is antagonistic to degrowth principles.
Through colonial domination, capital depresses the prices of inputs, and this colonial cheapening occurs to the detriment of the oppressed countries. Degrowth criticizes these mechanisms of appropriation and the enclosure of one territory by another, and proposes meeting human needs through disaccumulation, de-enclosure, and decommodification. It also aligns itself with social movements and seeks recognition of the ecological debt; the South's catching up with the North is postulated to be impossible without decolonization.
In practice, decolonial practices close to degrowth are observed, such as the movement of Buen vivir or sumak kawsay by various indigenous peoples.
Policies
There is a wide range of policy proposals associated with degrowth. In 2022, Nick Fitzpatrick, Timothée Parrique and Inês Cosme conducted a comprehensive survey of degrowth literature from 2005 to 2020 and found 530 specific policy proposals with "50 goals, 100 objectives, 380 instruments". The survey found that the ten most frequently cited proposals were: universal basic incomes, work-time reductions, job guarantees with a living wage, maximum income caps, declining caps on resource use and emissions, not-for-profit cooperatives, holding deliberative forums, reclaiming the commons, establishing ecovillages, and housing cooperatives.
To address the common criticism that such policies are not realistically financeable, the economic anthropologist Jason Hickel sees an opportunity to learn from modern monetary theory, which argues that monetarily sovereign states can issue the money needed to pay for anything available in the national economy without the need to first tax their citizens for the requisite funds. Taxation, credit regulations and price controls could be used to mitigate the inflation this may generate, while also reducing consumption.
Origins of the movement
The contemporary degrowth movement can trace its roots back to the anti-industrialist trends of the 19th century, developed in Great Britain by John Ruskin, William Morris and the Arts and Crafts movement (1819–1900), in the United States by Henry David Thoreau (1817–1862), and in Russia by Leo Tolstoy (1828–1910).
Degrowth movements draw on the values of humanism, enlightenment, anthropology and human rights.
Club of Rome reports
In 1968, the Club of Rome, a think tank headquartered in Winterthur, Switzerland, asked researchers at the Massachusetts Institute of Technology for a report on the limits of our world system and the constraints it puts on human numbers and activity. The report, called The Limits to Growth, published in 1972, became the first significant study to model the consequences of economic growth.
The reports (also known as the Meadows Reports) are not strictly the founding texts of the degrowth movement, as these reports only advise zero growth, and have also been used to support the sustainable development movement. Still, they are considered the first studies explicitly presenting economic growth as a key reason for the increase in global environmental problems such as pollution, shortage of raw materials, and the destruction of ecosystems. The Limits to Growth: The 30-Year Update was published in 2004, and in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years. In 2021, Club of Rome committee member Gaya Herrington published an article comparing the proposed models' predictions against empirical data trends. The BAU2 ("Business as Usual 2") scenario, predicting "collapse through pollution", as well as the CT ("Comprehensive Technology") scenario, predicting exceptional technological development and gradual decline, were found to align most closely with data observed as of 2019. In September 2022, the Club of Rome released updated predictive models and policy recommendations in a general-audiences book titled Earth for all – A survival guide to humanity.
Lasting influence of Georgescu-Roegen
The degrowth movement recognises Romanian American mathematician, statistician and economist Nicholas Georgescu-Roegen as the main intellectual figure inspiring the movement. In his 1971 work, The Entropy Law and the Economic Process, Georgescu-Roegen argues that economic scarcity is rooted in physical reality; that all natural resources are irreversibly degraded when put to use in economic activity; that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse.
Georgescu-Roegen's intellectual inspiration to degrowth dates back to the 1970s. When Georgescu-Roegen delivered a lecture at the University of Geneva in 1974, he made a lasting impression on the young, newly graduated French historian and philosopher, Jacques Grinevald, who had earlier been introduced to Georgescu-Roegen's works by an academic advisor. Georgescu-Roegen and Grinevald became friends, and Grinevald devoted his research to a closer study of Georgescu-Roegen's work. As a result, in 1979, Grinevald published a French translation of a selection of Georgescu-Roegen's articles entitled Demain la décroissance: Entropie – Écologie – Économie ('Tomorrow, the Decline: Entropy – Ecology – Economy'). Georgescu-Roegen, who spoke French fluently, approved the use of the term décroissance in the title of the French translation. The book gained influence in French intellectual and academic circles from the outset. Later, the book was expanded and republished in 1995 and once again in 2006; however, the word Demain ('tomorrow') was removed from the book's title in the second and third editions.
By the time Grinevald suggested the term décroissance for the title of the French translation of Georgescu-Roegen's work, the term had already permeated French intellectual circles since the early 1970s to signify a deliberate political action to downscale the economy on a permanent and voluntary basis. Simultaneously, but independently, Georgescu-Roegen criticised the ideas of The Limits to Growth and Herman Daly's steady-state economy in his article "Energy and Economic Myths", delivered as a series of lectures from 1972 but not published until 1975. There, Georgescu-Roegen argued that the most desirable state of the economy is not a stationary one but a declining one.
Reading this argument, Grinevald realised that no professional economist of any orientation had ever reasoned like this before. Grinevald also realised the congruence of Georgescu-Roegen's viewpoint with the French debates occurring at the time; this resemblance was captured in the title of the French edition. The translation of Georgescu-Roegen's work into French both fed on and gave further impetus to the concept of décroissance in France—and everywhere else in the francophone world—thereby creating something of an intellectual feedback loop.
By the 2000s, when décroissance was to be translated from French back into English as the catchy banner for the new social movement, the original term "decline" was deemed inappropriate and misdirected for the purpose: "Decline" usually refers to an unexpected, unwelcome, and temporary economic recession, something to be avoided or quickly overcome. Instead, the neologism "degrowth" was coined to signify a deliberate political action to downscale the economy on a permanent, conscious basis—as in the prevailing French usage of the term—something good to be welcomed and maintained, or so followers believe.
When the first international degrowth conference was held in Paris in 2008, the participants honoured Georgescu-Roegen and his work. In his manifesto, Petit traité de la décroissance sereine (published in English as Farewell to Growth), the leading French champion of the degrowth movement, Serge Latouche, credited Georgescu-Roegen as the "main theoretical source of degrowth". Likewise, Italian degrowth theorist Mauro Bonaiuti considered Georgescu-Roegen's work to be "one of the analytical cornerstones of the degrowth perspective".
Schumacher and Buddhist economics
E. F. Schumacher's 1973 book Small Is Beautiful predates a unified degrowth movement but nonetheless serves as an important basis for degrowth ideas. In this book he critiques the neo-liberal model of economic development, arguing that an ever-increasing "standard of living" based on consumption is absurd as the goal of economic activity and development. Instead, under what he refers to as Buddhist economics, we should aim to maximize well-being while minimizing consumption.
Ecological and social issues
In January 1972, Edward Goldsmith and Robert Prescott-Allen—editors of The Ecologist—published A Blueprint for Survival, which called for a radical programme of decentralisation and deindustrialization to prevent what the authors referred to as "the breakdown of society and the irreversible disruption of the life-support systems on this planet".
In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. Its main conclusions were:
Over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate.
The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers, in turn, are caused by societal behaviors, from consumption to governance.
Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals for poverty, hunger, health, water, cities, climate, oceans and land. It can cause problems with humanity's food, water and air supply.
To fix the problem, humanity needs transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. Page 8 of the report proposes "enabling visions of a good quality of life that do not entail ever-increasing material consumption" as one of the main measures. The report states that "Some pathways chosen to achieve the goals related to energy, economic growth, industry and infrastructure and sustainable consumption and production (Sustainable Development Goals 7, 8, 9 and 12), as well as targets related to poverty, food security and cities (Sustainable Development Goals 1, 2 and 11), could have substantial positive or negative impacts on nature and therefore on the achievement of other Sustainable Development Goals".
In a June 2020 paper published in Nature Communications, a group of scientists argue that "green growth" or "sustainable growth" is a myth: "we have to get away from our obsession with economic growth—we really need to start managing our economies in a way that protects our climate and natural resources, even if this means less, no or even negative growth." They conclude that a change in economic paradigms is imperative to prevent environmental destruction, and suggest a range of ideas from the reformist to the radical, with the latter consisting of degrowth, eco-socialism and eco-anarchism.
In June 2020, the official site of one of the organizations promoting degrowth published an article by Vijay Kolinjivadi, an expert in political ecology, arguing that the emergence of COVID-19 is linked to the ecological crisis.
The 2019 World Scientists' Warning of a Climate Emergency and its 2021 update have asserted that economic growth is a primary driver of the overexploitation of ecosystems, and to preserve the biosphere and mitigate climate change civilization must, in addition to other fundamental changes including stabilizing population growth and adopting largely plant-based diets, "shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality." In an opinion piece published in Al Jazeera, Jason Hickel states that this paper, which has more than 11,000 scientist cosigners, demonstrates that there is a "strong scientific consensus" towards abandoning "GDP as a measure of progress."
In a 2022 comment published in Nature, Hickel, Giorgos Kallis, Juliet Schor, Julia Steinberger and others say that both the IPCC and the IPBES "suggest that degrowth policies should be considered in the fight against climate breakdown and biodiversity loss, respectively".
Movement
Conferences
The movement has included international conferences promoted by the network Research & Degrowth (R&D). The First International Conference on Economic Degrowth for Ecological Sustainability and Social Equity in Paris (2008) was a discussion about the financial, social, cultural, demographic, and environmental crisis caused by the deficiencies of capitalism and an explanation of the main principles of degrowth. Further conferences were in Barcelona (2010), Montreal (2012), Venice (2012), Leipzig (2014), Budapest (2016), Malmö (2018), and Zagreb (2023). The 10th International Degrowth Conference will be held in Pontevedra in June 2024. Separately, two conferences have been organised as cross-party initiatives of Members of the European Parliament: the Post-Growth 2018 Conference and the Beyond Growth 2023 Conference, both held in the European Parliament in Brussels.
International Degrowth Network
The conferences have also been accompanied by informal degrowth assemblies since 2018, to build community between degrowth groups across countries. The 4th Assembly in Zagreb in 2023 discussed a proposal to create a more intentional organisational structure and led to the creation of the International Degrowth Network, which organised the 5th assembly in June 2024.
Relation to other social movements
The degrowth movement has a variety of relations to other social movements and alternative economic visions, ranging from collaboration to partial overlap. The Konzeptwerk Neue Ökonomie (Laboratory for New Economic Ideas), which hosted the 2014 international degrowth conference in Leipzig, published a project entitled "Degrowth in movement(s)" in 2017, which maps relationships with 32 other social movements and initiatives. The relation to the environmental justice movement is especially visible.
Although not explicitly called degrowth, movements inspired by similar concepts and terminologies can be found around the world, including Buen Vivir in Latin America, the Zapatistas in Mexico, the Kurdish Rojava or Eco-Swaraj in India, and the sufficiency economy in Thailand. The Cuban economic situation has also been of interest to degrowth advocates because its limits on growth were socially imposed (although as a result of geopolitics), and has resulted in positive health changes.
Another set of movements the degrowth movement finds synergy with is the wave of initiatives and networks inspired by the commons, where resources are sustainably shared in a decentralised and self-managed manner, instead of through capitalist organization. For example, initiatives inspired by commons could be food cooperatives, open-source platforms, and group management of resources such as energy or water. Commons-based peer production also guides the role of technology in degrowth, where conviviality and socially useful production are prioritised over capital gain. This could happen in the form of cosmolocalism, which offers a framework for localising collaborative forms of production while sharing resources globally as digital commons, to reduce dependence on global value chains.
Criticisms, challenges and dilemmas
Critiques of degrowth concern the poor quality of many degrowth studies, the negative connotation that the term "degrowth" imparts, the misapprehension that degrowth treats growth as unambiguously bad, the challenges and feasibility of a degrowth transition, as well as the entanglement of desirable aspects of modernity with the growth paradigm.
Criticisms
According to a highly cited scientific paper by environmental economist Jeroen C. J. M. van den Bergh, degrowth is often seen as an ambiguous concept due to its various interpretations, which can lead to confusion rather than a clear and constructive debate on environmental policy. Many interpretations of degrowth do not offer effective strategies for reducing environmental impact or transitioning to a sustainable economy. Additionally, degrowth is unlikely to gain significant social or political support, making it an ineffective strategy for achieving environmental sustainability.
Ineffectiveness and better alternatives
In the same paper, van den Bergh concludes that a degrowth strategy, which focuses on reducing the overall scale of the economy or consumption, tends to overlook the significance of changes in production composition and technological innovation.
Van den Bergh also highlights that a focus solely on reducing consumption (or consumption degrowth) may lead to rebound effects. For instance, reducing consumption of certain goods and services might result in an increase in spending on other items, as disposable income remains unchanged. Alternatively, it could lead to savings, which would provide additional funds for others to borrow and spend.
He emphasizes the importance of (global) environmental policies, such as pricing externalities through taxes or permits, which incentivize behavior changes that reduce environmental impact and which provide essential information for consumers and help manage rebound effects. Effective environmental regulation through pricing is crucial for transitioning from polluting to cleaner consumption patterns.
Study quality
A 2024 review of degrowth studies from the preceding 10 years found that most were of poor quality: almost 90% were opinions rather than analysis, few used quantitative or qualitative data, and fewer still used formal modelling; those that did relied on small samples or focused on non-representative cases. Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policies.
Negative connotation
The use of the term "degrowth" is criticized for being detrimental to the degrowth movement because it could carry a negative connotation, in opposition to the positively perceived "growth". "Growth" is associated with the "up" direction and positive experiences, while "down" generates the opposite associations. Research in political psychology has shown that the initial negative association of a concept, such as of "degrowth" with the negatively perceived "down", can bias how the subsequent information on that concept is integrated at the unconscious level. At the conscious level, degrowth can be interpreted negatively as the contraction of the economy, although this is not the goal of a degrowth transition, but rather one of its expected consequences. In the current economic system, a contraction of the economy is associated with a recession and its ensuing austerity measures, job cuts, or lower salaries. Noam Chomsky commented on the use of the term: "When you say 'degrowth' it frightens people. It's like saying you're going to have to be poorer tomorrow than you are today, and it doesn't mean that."
Since "degrowth" contains the term "growth", there is also a risk of the term having a backfire effect, which would reinforce the initial positive attitude toward growth. "Degrowth" is also criticized for being a confusing term, since its aim is not to halt economic growth as the word implies. Instead, "a-growth" is proposed as an alternative concept that emphasizes that growth ceases to be an important policy objective, but that it can still be achieved as a side-effect of environmental and social policies.
Systems theoretical critique
In stressing the negative rather than the positive side(s) of growth, the majority of degrowth proponents remain focused on (de-)growth, thereby keeping attention on the issue of growth itself and, with it, on arguments that sustainable growth is possible. One way to avoid this might be to extend from the economic concept of growth, which proponents of both growth and degrowth commonly adopt, to a broader concept of growth that allows for the observation of growth in other sociological characteristics of society. A corresponding "recoding" of "growth-obsessed", capitalist organizations was proposed by Steffen Roth.
Marxist critique
Traditional Marxists distinguish between two types of value creation: that which is useful to humankind, and that which only serves the purpose of accumulating capital. For traditional Marxists, it is the exploitative nature and control of capitalist production relations that is decisive, not the quantity of production. According to Jean Zin, while the justification for degrowth is valid, it is not a solution to the problem. Other Marxist writers have adopted positions close to the degrowth perspective. For example, John Bellamy Foster and Fred Magdoff, in common with David Harvey, Immanuel Wallerstein, Paul Sweezy and others, focus on endless capital accumulation as the basic principle and goal of capitalism. This is the source of economic growth and, in the view of these writers, results in an unsustainable growth imperative. Foster and Magdoff develop Marx's own concept of the metabolic rift, something he noted in the exhaustion of soils by capitalist systems of food production, though such exhaustion is not unique to capitalist food production, as the case of the Aral Sea shows. Many degrowth theories and ideas are based on neo-Marxist theory. Foster emphasizes that degrowth "is not aimed at austerity, but at finding a 'prosperous way down' from our current extractivist, wasteful, ecologically unsustainable, maldeveloped, exploitative, and unequal, class-hierarchical world."
Challenges
Lack of macroeconomics for sustainability
It is reasonable for society to worry about recession, as economic growth has been the near-universal goal around the globe in past decades. However, in some advanced countries there are attempts to develop a model for a regrowth economy. For instance, the Cool Japan strategy has proven instructive for Japan, whose economy has been largely static for decades.
Political and social spheres
According to some scholars in sociology, the growth imperative is so deeply entrenched in market capitalist societies that it is necessary for their stability. Moreover, the institutions of modern societies, such as the nation state, welfare, the labor market, education, academia, law and finance, have co-evolved with growth and depend on it. A degrowth transition thus requires a change not only of the economic system but of all the systems on which it relies. As most people in modern societies depend on those growth-oriented institutions, the challenge of a degrowth transition also lies in individual resistance to moving away from growth.
Land privatisation
Baumann, Alexander and Burdon suggest that "the Degrowth movement needs to give more attention to land and housing costs, which are significant barriers hindering true political and economic agency and any grassroots driven degrowth transition."
They claim that the privatisation of land – a basic necessity, like air – creates an absolute determinant of economic growth. They point out that even someone fully committed to degrowth has no option but to participate in market growth for decades in order to pay rent or a mortgage. Because of this, land privatisation is a structural impediment that makes degrowth economically and politically unviable. They conclude that without addressing land privatisation (the market's inaugural privatisation – primitive accumulation), the degrowth movement's strategies cannot succeed. Just as land enclosure (privatisation) initiated capitalism (economic growth), degrowth must start with reclaiming the land commons.
Agriculture
When it comes to agriculture, a degrowth society would require a shift from industrial agriculture to less intensive and more sustainable agricultural practices such as permaculture or organic agriculture. Still, it is not clear whether any of those alternatives could feed the current and projected global population. In the case of organic agriculture, Germany, for example, would not be able to feed its population under ideal organic yields over all of its arable land without meaningful changes to patterns of consumption, such as reducing meat consumption and food waste. Moreover, the labour productivity of non-industrial agriculture is significantly lower due to the reduced use or absence of fossil fuels, meaning that more labour is required for food production and much less is left for other sectors. Potential solutions to this challenge include scaling up approaches such as community-supported agriculture (CSA).
Dilemmas
Given that modernity emerged with high levels of energy and material throughput, there is an apparent trade-off between desirable aspects of modernity (e.g., social justice, gender equality, long life expectancy, low infant mortality) and unsustainable levels of energy and material use. Some researchers, however, argue that the decline in income inequality and rise in social mobility occurring under capitalism from the late 1940s to the 1960s was a product of the heavy bargaining power of labor unions and increased wealth and income redistribution during that time, while also pointing to the rise in income inequality in the 1970s following the collapse of labor unions and weakening of state welfare measures. Others also argue that modern capitalism maintains gender inequalities by means of advertising, messaging in consumer goods, and social media.
Another way of looking at the argument that desirable aspects of modernity require unsustainable energy and material use is through the lens of the Marxist tradition, which relates the superstructure (culture, ideology, institutions) to the base (material conditions of life, division of labor). A degrowth society, with its drastically different material conditions, could produce equally drastic changes in society's cultural and ideological spheres. The political economy of global capitalism has generated many social and environmental bads, such as socioeconomic inequality and ecological devastation, but it has also generated goods, such as individualization and increased spatial and social mobility. At the same time, some argue that the widespread individualization promulgated by a capitalist political economy is itself a bad: it undermines solidarity (which is aligned with democracy as well as collective, secondary, and primary forms of caring) while encouraging mistrust of others, highly competitive interpersonal relationships, the blaming of failure on individual shortcomings, the prioritization of self-interest, and the peripheralization of the human work required to create and sustain people. In this view, the widespread individuation resulting from capitalism may impede degrowth measures, requiring a change in actions to benefit society rather than the individual self.
Some argue the political economy of capitalism has allowed social emancipation at the level of gender equality, disability, sexuality and anti-racism that has no historical precedent. However, others dispute that social emancipation is a direct product of capitalism or question the extent of the emancipation that has resulted. The feminist writer Nancy Holmstrom, for example, argues that capitalism's negative impacts on women outweigh the positive impacts, and that women tend to be hurt by the system. In her examination of China following the Chinese Communist Revolution, Holmstrom notes that women were granted state-assisted access to equal education, childcare, healthcare, abortion, marriage, and other social supports. Thus, whether the social emancipation achieved in Western society under capitalism may coexist with degrowth is ambiguous.
Doyal and Gough allege that the modern capitalist system is built on the exploitation of female reproductive labor as well as that of the Global South, and that sexism and racism are embedded in its structure. Therefore, some theories (such as eco-feminism or political ecology) argue that there cannot be equality regarding gender and the hierarchy between the Global North and South within capitalism.
The structural properties of growth present another barrier to degrowth as growth shapes and is enforced by institutions, norms, culture, technology, identities, etc. The social ingraining of growth manifests in peoples' aspirations, thinking, bodies, mindsets, and relationships. Together, growth's role in social practices and in socio-economic institutions present unique challenges to the success of the degrowth movement. Another potential barrier to degrowth is the need for a rapid transition to a degrowth society due to climate change and the potential negative impacts of a rapid social transition including disorientation, conflict, and decreased well-being.
In the United States, a large barrier to the support of the degrowth movement is the modern education system, including both primary and higher learning institutions. Beginning in the second term of the Reagan administration, the education system in the US was restructured to enforce neoliberal ideology by means of privatization schemes such as commercialization and performance contracting, implementation of standards and accountability measures incentivizing schools to adopt a uniform curriculum, and higher education accreditation and curricula designed to affirm market values and current power structures and avoid critical thought concerning the relations between those in power, ethics, authority, history, and knowledge. The degrowth movement, based on the empirical assumption that resources are finite and growth is limited, clashes with the limitless growth ideology associated with neoliberalism and the market values affirmed in schools, and therefore faces a major social barrier in gaining widespread support in the US.
Nevertheless, the co-evolved aspects of global capitalism, liberal modernity, and the market society are closely tied, and it will be difficult to separate them in order to maintain liberal and cosmopolitan values in a degrowth society. At the same time, the goal of the degrowth movement is progression rather than regression, and researchers point out that neoclassical economic models indicate that neither negative nor zero growth would harm economic stability or full employment. Several assert that the main barriers to the movement are social and structural factors clashing with the implementation of degrowth measures.
Healthcare
It has been pointed out that there is an apparent trade-off between the ability of modern healthcare systems to treat individual bodies to their last breath and the broader global ecological risk of such energy- and resource-intensive care. If this trade-off exists, a degrowth society must choose between prioritizing ecological integrity (and the ensuing collective health) and maximizing the healthcare provided to individuals. However, many degrowth scholars argue that the current system produces both psychological and physical damage to people. They insist that societal prosperity should be measured by well-being, not GDP.
See also
A Blueprint for Survival
Agrowth
Anti-consumerism
Critique of political economy
Degrowth advocates (category)
Political ecology
Postdevelopment theory
Power Down: Options and Actions for a Post-Carbon World
Paradox of thrift
The Path to Degrowth in Overdeveloped Countries
Post-capitalism
Productivism
Prosperity Without Growth
Slow movement
Steady-state economy
Transition town
Uneconomic growth
References
Reference details
Further reading
External links
List of International Degrowth conferences on degrowth.info
Research and Degrowth
International Degrowth Network
Degrowth Journal
Planned Degrowth: Ecosocialism and Sustainable Human Development. Monthly Review issue on "Planned Degrowth". July 1, 2023.
Simple living
Sustainability
Green politics
Ecological economics
Environmental movements
Environmental ethics
Environmental economics
Environmental social science concepts
Biological anthropology
Biological anthropology, also known as physical anthropology, is a social science discipline concerned with the biological and behavioral aspects of human beings, their extinct hominin ancestors, and related non-human primates, particularly from an evolutionary perspective. This subfield of anthropology systematically studies human beings from a biological perspective.
Branches
As a subfield of anthropology, biological anthropology itself is further divided into several branches. All branches are united in their common orientation and/or application of evolutionary theory to understanding human biology and behavior.
Bioarchaeology is the study of past human cultures through examination of human remains recovered in an archaeological context. The examined human remains usually are limited to bones but may include preserved soft tissue. Researchers in bioarchaeology combine the skill sets of human osteology, paleopathology, and archaeology, and often consider the cultural and mortuary context of the remains.
Evolutionary biology is the study of the evolutionary processes that produced the diversity of life on Earth, starting from a single common ancestor. These processes include natural selection, common descent, and speciation.
Evolutionary psychology is the study of psychological structures from a modern evolutionary perspective. It seeks to identify which human psychological traits are evolved adaptations – that is, the functional products of natural selection or sexual selection in human evolution.
Forensic anthropology is the application of the science of physical anthropology and human osteology in a legal setting, most often in criminal cases where the victim's remains are in the advanced stages of decomposition.
Human behavioral ecology is the study of behavioral adaptations (foraging, reproduction, ontogeny) from the evolutionary and ecologic perspectives (see behavioral ecology). It focuses on human adaptive responses (physiological, developmental, genetic) to environmental stresses.
Human biology is an interdisciplinary field of biology, biological anthropology, nutrition and medicine, which concerns international, population-level perspectives on health, evolution, anatomy, physiology, molecular biology, neuroscience, and genetics.
Paleoanthropology is the study of fossil evidence for human evolution, mainly using remains from extinct hominin and other primate species to determine the morphological and behavioral changes in the human lineage, as well as the environment in which human evolution occurred.
Paleopathology is the study of disease in antiquity. This study focuses not only on pathogenic conditions observable in bones or mummified soft tissue, but also on nutritional disorders, variation in stature or morphology of bones over time, evidence of physical trauma, or evidence of occupationally derived biomechanic stress.
Primatology is the study of non-human primate behavior, morphology, and genetics. Primatologists use phylogenetic methods to infer which traits humans share with other primates and which are human-specific adaptations.
History
Origins
Biological anthropology looks different today from the way it did even twenty years ago. Even the name is relatively new, the field having been called "physical anthropology" for over a century, and some practitioners still apply that term. Biological anthropologists look back to the work of Charles Darwin as a major foundation for what they do today. However, if one traces the intellectual genealogy back to physical anthropology's beginnings—before the discovery of much of what we now know as the hominin fossil record—then the focus shifts to human biological variation. Some editors (see the further reading below) have rooted the field even deeper than formal science.
Attempts to study and classify human beings as living organisms date back to ancient Greece. The Greek philosopher Plato (c. 428 – c. 347 BC) placed humans on the scala naturae, which included all things, from inanimate objects at the bottom to deities at the top. This became the main system through which scholars thought about nature for the next roughly 2,000 years. Plato's student Aristotle (384–322 BC) observed in his History of Animals that human beings are the only animals to walk upright and argued, in line with his teleological view of nature, that humans have buttocks and no tails in order to give them a soft place to sit when they are tired of standing. He explained regional variations in human features as the result of different climates. He also wrote about physiognomy, an idea derived from writings in the Hippocratic Corpus. Scientific physical anthropology began in the 17th to 18th centuries with the study of racial classification (Georgius Hornius, François Bernier, Carl Linnaeus, Johann Friedrich Blumenbach).
The first prominent physical anthropologist, the German physician Johann Friedrich Blumenbach (1752–1840) of Göttingen, amassed a large collection of human skulls (Decas craniorum, published during 1790–1828), from which he argued for the division of humankind into five major races (termed Caucasian, Mongolian, Aethiopian, Malayan and American). In the 19th century, French physical anthropologists, led by Paul Broca (1824–1880), focused on craniometry while the German tradition, led by Rudolf Virchow (1821–1902), emphasized the influence of environment and disease upon the human body.
In the 1830s and 40s, physical anthropology was prominent in the debate about slavery, with the scientific, monogenist works of the British abolitionist James Cowles Prichard (1786–1848) opposing those of the American polygenist Samuel George Morton (1799–1851).
In the late 19th century, German-American anthropologist Franz Boas (1858–1942) strongly impacted biological anthropology by emphasizing the influence of culture and experience on the human form. His research showed that head shape was malleable to environmental and nutritional factors rather than a stable "racial" trait. However, scientific racism still persisted in biological anthropology, with prominent figures such as Earnest Hooton and Aleš Hrdlička promoting theories of racial superiority and a European origin of modern humans.
"New physical anthropology"
In 1951 Sherwood Washburn, a former student of Hooton, introduced a "new physical anthropology." He changed the focus from racial typology to concentrate upon the study of human evolution, moving away from classification towards evolutionary process. Anthropology expanded to include paleoanthropology and primatology. The 20th century also saw the modern synthesis in biology: the reconciling of Charles Darwin's theory of evolution and Gregor Mendel's research on heredity. Advances in the understanding of the molecular structure of DNA and the development of chronological dating methods opened doors to understanding human variation, both past and present, more accurately and in much greater detail.
Notable biological anthropologists
Zeresenay Alemseged
John Lawrence Angel
George J. Armelagos
William M. Bass
Caroline Bond Day
Jane E. Buikstra
William Montague Cobb
Carleton S. Coon
Robert Corruccini
Raymond Dart
Robin Dunbar
Egon Freiherr von Eickstedt
Linda Fedigan
A. Roberto Frisancho
Robert Foley
Jane Goodall
Joseph Henrich
Earnest Hooton
Aleš Hrdlička
Sarah Blaffer Hrdy
Anténor Firmin
Dian Fossey
Birute Galdikas
Richard Lynch Garner
Colin Groves
Yohannes Haile-Selassie
Ralph Holloway
William W. Howells
Donald Johanson
Robert Jurmain
Melvin Konner
Louis Leakey
Mary Leakey
Richard Leakey
Frank B. Livingstone
Owen Lovejoy
Ruth Mace
Jonathan M. Marks
Robert D. Martin
Russell Mittermeier
Desmond Morris
Douglas W. Owsley
David Pilbeam
Kathy Reichs
Alice Roberts
Pardis Sabeti
Robert Sapolsky
Eugenie C. Scott
Meredith Small
Chris Stringer
Phillip V. Tobias
Douglas H. Ubelaker
Frans de Waal
Sherwood Washburn
David Watts
Tim White
Milford H. Wolpoff
Richard Wrangham
Teuku Jacob
Biraja Sankar Guha
See also
Anthropometry, the measurement of the human individual
Biocultural anthropology
Ethology
Evolutionary anthropology
Evolutionary biology
Evolutionary psychology
Human evolution
Paleontology
Primatology
Race (human categorization)
Sociobiology
References
Further reading
Michael A. Little and Kenneth A.R. Kennedy, eds. Histories of American Physical Anthropology in the Twentieth Century, (Lexington Books; 2010); 259 pages; essays on the field from the late 19th to the late 20th century; topics include Sherwood L. Washburn (1911–2000) and the "new physical anthropology"
Brown, Ryan A. and Armelagos, George, "Apportionment of Racial Diversity: A Review", Evolutionary Anthropology 10:34–40, 2001
Modern Human Variation: Models of Classification
Redman, Samuel J. Bone Rooms: From Scientific Racism to Human Prehistory in Museums. Cambridge: Harvard University Press. 2016.
External links
American Association of Biological Anthropologists
British Association of Biological Anthropologists and Osteoarchaeologists
Human Biology Association
Canadian Association for Physical Anthropology
Homo erectus and Homo neanderthalensis reconstructions – Electronic articles published by the Division of Anthropology, American Museum of Natural History.
Istituto Italiano di Antropologia
Journal of Anthropological Sciences – free full text review articles available
Mapping Transdisciplinarity in Anthropology pdf
Fundamental Theory of Human Sciences ppt
American Journal of Human Biology
Human Biology, The International Journal of Population Genetics and Anthropology
Economics and Human Biology
Laboratory for Human Biology Research at Northwestern University
The Program in Human Biology at Stanford
Academic Genealogical Tree of Physical Anthropologists
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants.
Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy and stomata function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.
Aims
The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.
First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while still others are used to attract pollinators or herbivores to spread ripe seeds.
Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from the cells of animals, and these lead to major differences in the way plant life behaves and responds compared with animal life. For example, plant cells have a cell wall which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.
Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.
Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.
Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
Biochemistry of plants
The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary.
Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.
Constituent elements
Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey.
The following tables list element nutrients essential to plants. Uses within plants are generalized.
Pigments
Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.
Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess only chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light.
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occurs in plants with anthocyanins. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in plants which possess them, but there is some preliminary evidence that they may have fungicidal properties.
Signals and regulators
Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.
Plant hormones
Plant hormones, known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant and production is not limited to specific locations.
Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth; affecting processes in plants from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death.
The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.
Photomorphogenesis
While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis). The use of light to control structural development is called photomorphogenesis, and is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light.
Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.
The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.
Photoperiodism
Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.
Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (in effect, a long night plant) does not flower if a flash of phytochrome-activating light is applied to the plant during the night.
Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia (Euphorbia pulcherrima).
Environmental physiology
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.
Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with a pressure bomb) and the stress of drought or inundation, exchange of gases with the atmosphere, as well as the cycling of nutrients such as nitrogen and carbon.
Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.
While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack pain receptors, nerves, and a brain, and, by extension, consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not, are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding plants' abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since they lack any nervous system. The primary reason for this is that, unlike members of the animal kingdom, whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death.
Tropisms and nastic movements
Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sunlight, is called a tropism. A response to a non-directional stimulus, such as temperature or humidity, is a nastic movement.
Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and production of one or more plant hormones.
Nastic movements result from differential cell growth (e.g., epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g., nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus flytrap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
Plant disease
Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms.
Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.
One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.
History
Early history
Francis Bacon published one of the first plant physiology experiments in 1627 in the book, Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water.
Stephen Hales is considered the father of plant physiology for the many experiments in his 1727 book Vegetable Staticks, though it was Julius von Sachs who unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.
Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. Under natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb them readily and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production, and as a hobby.
Economic applications
Food production
In horticulture and agriculture, along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include climatic requirements, fruit drop, nutrition, ripening, and fruit set. The production of food crops also hinges on the study of plant physiology, covering such topics as optimal planting and harvesting times, post-harvest storage of plant products for human consumption, and the production of secondary products like drugs and cosmetics.
Crop physiology steps back and looks at a field of plants as a whole, rather than at each plant individually. It examines how plants respond to each other and how to maximize results such as food production by determining factors like optimal planting density.
See also
Biomechanics
Hyperaccumulator
Phytochemistry
Plant anatomy
Plant morphology
Plant secondary metabolism
Branches of botany
References
Further reading
Lincoln Taiz, Eduardo Zeiger, Ian Max Møller, Angus Murphy: Fundamentals of Plant Physiology. Sinauer, 2018.
Branches of botany
Mathematical and theoretical biology
Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of the systems, as opposed to experimental biology, which deals with the conducting of experiments to test scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.
Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It can be useful in both theoretical and practical research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.
Because of the complexity of the living systems, theoretical biology employs several fields of mathematics, and has contributed to the development of new techniques.
History
Early history
Mathematics has been used in biology as early as the 13th century, when Fibonacci used the famous Fibonacci series to describe a growing population of rabbits. In the 18th century, Daniel Bernoulli applied mathematics to describe the effect of smallpox on the human population. Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth. Pierre François Verhulst formulated the logistic growth model in 1836.
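For concreteness, the two growth laws named here can be written down directly. Malthusian growth is exponential, while Verhulst's logistic model caps it with a carrying capacity K; the logistic equation and its closed-form solution, with intrinsic growth rate r and initial population N_0, are:

\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right),
\qquad
N(t) = \frac{K}{1 + \dfrac{K - N_0}{N_0}\, e^{-rt}} .
\]

As t grows, N(t) approaches K regardless of the starting value, which is the qualitative difference from unbounded exponential growth.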
Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be, unless one counts Malthus's earlier discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be exponential (he uses the word "geometric") while resources (the environment's carrying capacity) could only grow arithmetically.
The term "theoretical biology" was first used as a monograph title by Johannes Reinke in 1901, and soon after by Jakob von Uexküll in 1920. One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson, and other early pioneers include Ronald Fisher, Hans Leo Przibram, Vito Volterra, Nicolas Rashevsky and Conrad Hal Waddington.
Recent growth
Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:
The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
An increase in computing power, which facilitates calculations and simulations not previously possible
An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research
Areas of research
Several areas of specialized research in mathematical and theoretical biology, as well as external links to related projects at various universities, are presented concisely in the following subsections, together with a large number of validating references from a list of several thousand published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.
Abstract relational biology
Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.
Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.
Algebraic biology
Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.
Complex systems biology
An elaboration of systems biology aimed at understanding the more complex life processes has been developed since 1970 in connection with molecular set theory, relational biology and algebraic biology.
Computer models and automata theory
A monograph on this topic summarizes an extensive amount of published research in this area up to 1986, including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics, cancer modelling, neural nets, genetic networks, abstract categories in relational biology, metabolic-replication systems, category theory applications in biology and medicine, automata theory, cellular automata, tessellation models and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.
Modeling cell and molecular biology
This area has received a boost due to the growing importance of molecular biology.
Mechanics of biological tissues
Theoretical enzymology and enzyme kinetics
Cancer modelling and simulation
Modelling the movement of interacting cell populations
Mathematical modelling of scar tissue formation
Mathematical modelling of intracellular dynamics
Mathematical modelling of the cell cycle
Mathematical modelling of apoptosis
Modelling physiological systems
Modelling of arterial disease
Multi-scale modelling of the heart
Modelling electrical properties of muscle interactions, as in bidomain and monodomain models
Computational neuroscience
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.
Evolutionary biology
Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.
Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.
Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798 when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
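As an illustration of the population dynamics mentioned above, here is a minimal numerical sketch of the Lotka–Volterra predator–prey equations. The parameter values, initial densities, and the simple fixed-step Euler integration scheme are illustrative choices only:

# Lotka-Volterra predator-prey model:
#   dx/dt = alpha*x - beta*x*y    (prey)
#   dy/dt = delta*x*y - gamma*y   (predator)
alpha, beta, delta, gamma = 1.0, 0.1, 0.05, 1.0   # illustrative rates
x, y = 10.0, 5.0                                  # initial densities
dt = 0.001                                        # Euler time step
for step in range(30001):
    dx = alpha * x - beta * x * y
    dy = delta * x * y - gamma * y
    x += dx * dt
    y += dy * dt
    if step % 5000 == 0:
        print(f"t={step * dt:5.1f}  prey={x:8.3f}  predator={y:8.3f}")

The two populations oscillate out of phase, the qualitative behaviour the model is famous for; a production run would use an adaptive ODE solver rather than fixed-step Euler.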
In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.
Mathematical biophysics
The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.
The following is a list of mathematical descriptions and their assumptions.
Deterministic processes (dynamical systems)
A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.
Difference equations/Maps – discrete time, continuous state space.
Ordinary differential equations – continuous time, continuous state space, no spatial derivatives. See also: Numerical ordinary differential equations.
Partial differential equations – continuous time, continuous state space, spatial derivatives. See also: Numerical partial differential equations.
Logical deterministic cellular automata – discrete time, discrete state space. See also: Cellular automaton.
Stochastic processes (random dynamical systems)
A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
Non-Markovian processes – generalized master equation – continuous time with memory of past events, discrete state space, waiting times of events (or transitions between states) discretely occur.
Jump Markov process – master equation – continuous time with no memory of past events, discrete state space, waiting times between events occur discretely and are exponentially distributed. See also: Monte Carlo method for numerical simulation methods, specifically dynamic Monte Carlo method and Gillespie algorithm (a minimal Gillespie sketch follows this list).
Continuous Markov process – stochastic differential equations or a Fokker–Planck equation – continuous time, continuous state space, events occur continuously according to a random Wiener process.
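The Gillespie sketch promised above: a simple linear birth–death process simulated as a jump Markov process, with per-capita birth and death rates chosen arbitrarily for illustration:

import random

# Gillespie simulation of a linear birth-death process.
# Event propensities: birth = b*n, death = d*n; the waiting time to the
# next event is exponentially distributed with rate (b + d)*n.
b, d = 1.0, 0.9        # per-capita birth and death rates (illustrative)
n, t = 50, 0.0         # initial population size and time
while t < 10.0 and n > 0:
    birth, death = b * n, d * n
    total = birth + death
    t += random.expovariate(total)      # draw exponential waiting time
    if random.random() < birth / total:
        n += 1                          # birth event
    else:
        n -= 1                          # death event
print(f"population at t = {t:.2f}: {n}")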
Spatial modelling
One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.
Travelling waves in a wound-healing assay
Swarming behaviour
A mechanochemical theory of morphogenesis
Biological pattern formation
Spatial distribution modeling using plot samples
Turing patterns
Mathematical methods
A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.
Molecular set theory
Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.
In a more general sense, MST is the theory of molecular categories defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and to the formulation of clinical biochemistry problems as mathematical models of pathological, biochemical changes of interest to physiology, clinical biochemistry and medicine.
Organizational biology
Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.
For example, abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.
Model example: the cell cycle
The eukaryotic cell cycle is very complex and has been the subject of intense study, since its misregulation leads to cancers.
It is arguably a good example of a mathematical model, as it deals with simple calculus but gives valid results. Two research groups have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).
By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).
To obtain these equations, an iterative series of steps must be performed: first, the several models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis–Menten kinetics for enzyme–substrate reactions, and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Afterwards, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.
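For illustration, the two enzyme-kinetic laws named above can be written as small rate functions; the Goldbeter–Koshland expression below follows the standard zero-order ultrasensitivity form used in such cell cycle models, and the parameter names and test values are generic:

import math

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate for substrate concentration s."""
    return vmax * s / (km + s)

def goldbeter_koshland(u, v, J, K):
    """Steady-state fraction of modified protein, given activating rate u,
    inactivating rate v, and scaled Michaelis constants J and K."""
    B = v - u + J * v + K * u
    return 2 * u * K / (B + math.sqrt(B * B - 4 * (v - u) * u * K))

print(michaelis_menten(2.0, vmax=1.0, km=0.5))    # 0.8
print(goldbeter_koshland(1.1, 1.0, 0.01, 0.01))   # ~0.91: switch largely "on"

Because J and K are small, a roughly 10% excess of the activating rate already pushes the switch nearly fully on, which is the ultrasensitive behaviour these models exploit.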
To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change in concentration of two or more proteins, determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory toward which several trajectories spiral (making the concentrations oscillate).
A better representation, which handles the large number of variables and parameters, is a bifurcation diagram using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point, and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations. The cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot go back to the previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite-period bifurcation.
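A toy one-variable system makes the idea of a bifurcation concrete. For dx/dt = mu - x**2, steady states exist only for mu >= 0, and a stable/unstable pair is born at mu = 0 in a saddle-node bifurcation; the scan below is purely illustrative and is not a model of the cell cycle:

import math

# Steady states of dx/dt = mu - x**2 as the parameter mu is varied.
# Since f'(x) = -2x, x* = +sqrt(mu) is stable and x* = -sqrt(mu) unstable.
for mu in (-0.5, 0.0, 0.25, 1.0):
    if mu < 0:
        print(f"mu={mu:5.2f}: no steady state")
    elif mu == 0:
        print(f"mu={mu:5.2f}: single degenerate steady state x*=0 (bifurcation point)")
    else:
        r = math.sqrt(mu)
        print(f"mu={mu:5.2f}: stable x*={r:+.3f}, unstable x*={-r:+.3f}")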
See also
Biological applications of bifurcation theory
Biophysics
Biostatistics
Entropy and life
Ewens's sampling formula
Journal of Theoretical Biology
Logistic function
Mathematical modelling of infectious disease
Metabolic network modelling
Molecular modelling
Morphometrics
Population genetics
Spring school on theoretical biology
Statistical genetics
Theoretical ecology
Turing pattern
Notes
References
"Biologist Salary | Payscale". Payscale.Com, 2021, Biologist Salary | PayScale. Accessed 3 May 2021.
Theoretical biology
Further reading
External links
The Society for Mathematical Biology
The Collection of Biostatistics Research Archive
Morphogenesis
Morphogenesis (from the Greek morphê shape and genesis creation, literally "the generation of form") is the biological process that causes a cell, tissue or organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of tissue growth and patterning of cellular differentiation.
The process controls the organized spatial distribution of cells during the embryonic development of an organism. Morphogenesis can take place also in a mature organism, such as in the normal maintenance of tissue by stem cells or in regeneration of tissues after damage. Cancer is an example of highly abnormal and pathological tissue morphogenesis. Morphogenesis also describes the development of unicellular life forms that do not have an embryonic stage in their life cycle. Morphogenesis is essential for the evolution of new forms.
Morphogenesis is a mechanical process involving forces that generate mechanical stress, strain, and movement of cells, and can be induced by genetic programs according to the spatial patterning of cells within tissues. Abnormal morphogenesis is called dysmorphogenesis.
History
Some of the earliest ideas and mathematical descriptions on how physical processes and constraints affect biological growth, and hence natural patterns such as the spirals of phyllotaxis, were written by D'Arcy Wentworth Thompson in his 1917 book On Growth and Form and Alan Turing in his The Chemical Basis of Morphogenesis (1952). Where Thompson explained animal body shapes as being created by varying rates of growth in different directions, for instance to create the spiral shell of a snail, Turing correctly predicted a mechanism of morphogenesis, the diffusion of two different chemical signals, one activating and one deactivating growth, to set up patterns of development, decades before the formation of such patterns was observed. The fuller understanding of the mechanisms involved in actual organisms required the discovery of the structure of DNA in 1953, and the development of molecular biology and biochemistry.
Genetic and molecular basis
Several types of molecules are important in morphogenesis. Morphogens are soluble molecules that can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors. An important class of molecules involved in morphogenesis are transcription factor proteins that determine the fate of cells by interacting with DNA. These can be coded for by master regulatory genes, and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks. At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration, or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation, clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt, Hedgehog, and ephrins.
Cellular basis
At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility. Morphogenesis also involves changes in the cellular structure or how cells interact in tissues. These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred to as cell sorting. Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed to arise from differential cell adhesion by Malcolm Steinberg through his differential adhesion hypothesis. Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition). Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial-mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location. In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall.
Cell-to-cell adhesion
During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture, cells that have the strongest adhesion move to the center of a mixed aggregate of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are the cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin.
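The sorting behaviour described here can be illustrated with a toy lattice model in the spirit of Steinberg's differential adhesion hypothesis (a drastic simplification, not the full cellular Potts model). Two cell types occupy a grid; mixed contacts are assigned a higher energy than like contacts, and random Metropolis swaps then coarsen the mixture into same-type clusters. All energies, sizes, and the temperature are arbitrary:

import math
import random

N = 20                       # lattice side length (periodic boundaries)
E_MIXED, E_LIKE = 1.0, 0.0   # contact energies: mixed contacts cost more
T = 0.3                      # "temperature" allowing occasional uphill swaps
NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

grid = [[random.choice((0, 1)) for _ in range(N)] for _ in range(N)]

def contact_energy(i, j):
    """Energy of the four contacts around cell (i, j)."""
    e = 0.0
    for di, dj in NEIGHBOURS:
        ni, nj = (i + di) % N, (j + dj) % N
        e += E_LIKE if grid[i][j] == grid[ni][nj] else E_MIXED
    return e

for _ in range(200000):
    i, j = random.randrange(N), random.randrange(N)
    di, dj = random.choice(NEIGHBOURS)
    k, m = (i + di) % N, (j + dj) % N
    before = contact_energy(i, j) + contact_energy(k, m)
    grid[i][j], grid[k][m] = grid[k][m], grid[i][j]      # trial swap
    delta = contact_energy(i, j) + contact_energy(k, m) - before
    if delta > 0 and random.random() >= math.exp(-delta / T):
        grid[i][j], grid[k][m] = grid[k][m], grid[i][j]  # reject: undo swap

print("\n".join("".join(".#"[c] for c in row) for row in grid))

After enough swaps, the printout shows the two symbols segregated into contiguous patches rather than salt-and-pepper noise.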
Extracellular matrix
The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM. Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to microfilament-binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching.
Cell contractility
Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure. Myosin-driven contractility in embryonic tissue morphogenesis is seen during the separation of germ layers in the model organisms Caenorhabditis elegans, Drosophila and zebrafish. There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues, determining cell type, which is followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl, and the model was later generalized to all of morphogenesis.
Branching morphogenesis
In the development of the lung a bronchus branches into bronchioles forming the respiratory tree. The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli.
Branching morphogenesis is also evident in the ductal formation of the mammary gland. Primitive duct formation begins in development, but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development.
Cancer morphogenesis
Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis. Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling.
Virus morphogenesis
During assembly of the bacteriophage (phage) T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence. Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fibres, as detailed by Yap and Rossmann.
Computer models
An approach to model morphogenesis in computer science or mathematics can be traced to Alan Turing's 1952 paper, "The chemical basis of morphogenesis", a model now known as the Turing pattern.
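A compact numerical sketch in the spirit of Turing's mechanism, using the Gray–Scott reaction–diffusion model rather than Turing's original equations; the diffusion and feed/kill parameters are standard values known to produce spot patterns, and the grid size is arbitrary:

import numpy as np

# Gray-Scott reaction-diffusion on a periodic grid.
# u is the substrate, v the autocatalyst:
#   du/dt = Du*lap(u) - u*v^2 + F*(1 - u)
#   dv/dt = Dv*lap(v) + u*v^2 - (F + k)*v
n = 100
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065
u = np.ones((n, n))
v = np.zeros((n, n))
u[45:55, 45:55], v[45:55, 45:55] = 0.50, 0.25   # seed a local perturbation

def laplacian(a):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(5000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print("v range after 5000 steps:", float(v.min()), float(v.max()))

Starting from a nearly uniform state, the interplay of local activation and faster-diffusing inhibition breaks symmetry and produces stationary spatial patterns, which is the essence of Turing's argument.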
Another famous model is the so-called French flag model, developed in the sixties.
Improvements in computer performance in the twenty-first century enabled the simulation of relatively complex morphogenesis models. In 2020, such a model was proposed in which cell growth and differentiation are governed by a cellular automaton with parametrized rules. As the rules' parameters are differentiable, they can be trained with gradient descent, a technique which has been highly optimized in recent years due to its use in machine learning. This model was limited to the generation of pictures, and is thus two-dimensional.
A similar model to the one described above was subsequently extended to generate three-dimensional structures, and was demonstrated in the video game Minecraft, whose block-based nature made it particularly expedient for the simulation of 3D cellular automata.
See also
Bone morphogenetic protein
Collective cell migration
Embryonic development
Pattern formation
Reaction–diffusion system
Neurulation
Gastrulation
Axon guidance
Eye development
Polycystic kidney disease 2
Drosophila embryogenesis
Cytoplasmic determinant
Madin-Darby Canine Kidney cells
Bioelectricity#Role in pattern regulation
Notes
References
Further reading
External links
Artificial Life model of multicellular morphogenesis with autonomously generated gradients for positional information
Turing's theory of morphogenesis validated
Developmental biology
Morphology (biology)
Evolutionary developmental biology
Systems ecology
Systems ecology is an interdisciplinary field of ecology, a subset of Earth system science, that takes a holistic approach to the study of ecological systems, especially ecosystems. Systems ecology can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
Overview
Systems ecology seeks a holistic view of the interactions and transactions within and between biological and ecological systems. Systems ecologists realise that the function of any ecosystem can be influenced by human economics in fundamental ways. They have therefore taken an additional transdisciplinary step by including economics in the consideration of ecological-economic systems. In the words of R.L. Kitching:
Systems ecology can be defined as the approach to the study of ecology of organisms using the techniques and philosophy of systems analysis: that is, the methods and tools developed, largely in engineering, for studying, characterizing and making predictions about complex entities, that is, systems.
In any study of an ecological system, an essential early procedure is to draw a diagram of the system of interest ... diagrams indicate the system's boundaries by a solid line. Within these boundaries, series of components are isolated which have been chosen to represent that portion of the world in which the systems analyst is interested ... If there are no connections across the systems' boundaries with the surrounding systems environments, the systems are described as closed. Ecological work, however, deals almost exclusively with open systems.
As a mode of scientific enquiry, a central feature of systems ecology is the general application of the principles of energetics to all systems at any scale. Perhaps the most notable proponent of this view was Howard T. Odum, sometimes considered the father of ecosystems ecology. In this approach the principles of energetics constitute ecosystem principles. Reasoning by formal analogy from one system to another enables the systems ecologist to see principles functioning in an analogous manner across system-scale boundaries. H.T. Odum commonly used the Energy Systems Language as a tool for making systems diagrams and flow charts.
The fourth of these principles, the principle of maximum power efficiency, takes a central place in the analysis and synthesis of ecological systems. It suggests that the most evolutionarily advantageous system function occurs when the environmental load matches the internal resistance of the system: the further the environmental load is from matching the internal resistance, the further the system is from its sustainable steady state. Therefore, the systems ecologist engages in a task of resistance and impedance matching in ecological engineering, just as an electronic engineer would.
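The electrical analogy invoked here is the maximum power transfer theorem. For a source of voltage \(\varepsilon\) and internal resistance \(R_{\mathrm{int}}\) driving a load \(R_L\), the delivered power is

\[
P(R_L) = \frac{\varepsilon^2 R_L}{(R_{\mathrm{int}} + R_L)^2},
\qquad
\frac{dP}{dR_L} = \varepsilon^2\,\frac{R_{\mathrm{int}} - R_L}{(R_{\mathrm{int}} + R_L)^3} = 0
\;\Longrightarrow\; R_L = R_{\mathrm{int}},
\]

so power delivery peaks exactly when the load matches the internal resistance. This is standard circuit theory offered as a gloss on the analogy, not a statement of Odum's own derivation.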
Closely related fields
Deep ecology
Deep ecology is an ideology whose metaphysical underpinnings are deeply concerned with the science of ecology. The term was coined by Arne Naess, a Norwegian philosopher, Gandhian scholar, and environmental activist. He argues that the prevailing approach to environmental management is anthropocentric, and that the natural environment is not only "more complex than we imagine, it is more complex than we can imagine." Naess formulated deep ecology in 1973 at an environmental conference in Budapest.
Joanna Macy, John Seed, and others developed Naess' thesis into a branch they called experiential deep ecology. Their efforts were motivated by a need they perceived for the development of an "ecological self", which views the human ego as an integrated part of a living system that encompasses the individual. They sought to transcend altruism with a deeper self-interest based on biospherical equality beyond human chauvinism.
Earth systems engineering and management
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion".
Ecological economics
Ecological economics is a transdisciplinary field of academic research that addresses the dynamic and spatial interdependence between human economies and natural ecosystems. Ecological economics brings together and connects different disciplines, within the natural and social sciences but especially between these broad areas. As the name suggests, the field is made up of researchers with a background in economics and ecology. An important motivation for the emergence of ecological economics has been criticism on the assumptions and approaches of traditional (mainstream) environmental and resource economics.
Ecological energetics
Ecological energetics is the quantitative study of the flow of energy through ecological systems. It aims to uncover the principles which describe the propensity of such energy flows through the trophic, or 'energy availing' levels of ecological networks. In systems ecology the principles of ecosystem energy flows or "ecosystem laws" (i.e. principles of ecological energetics) are considered formally analogous to the principles of energetics.
Ecological humanities
Ecological humanities aims to bridge the divides between the sciences and the humanities, and between Western, Eastern and Indigenous ways of knowing nature. Like ecocentric political theory, the ecological humanities are characterised by a connectivity ontology and a commitment to two fundamental axioms relating to the need to submit to ecological laws and to see humanity as part of a larger living system.
Ecosystem ecology
Ecosystem ecology is the integrated study of biotic and abiotic components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals. Ecosystem ecology examines physical and biological structure and examines how these ecosystem characteristics interact.
The relationship between systems ecology and ecosystem ecology is complex. Much of systems ecology can be considered a subset of ecosystem ecology. Ecosystem ecology also utilizes methods that have little to do with the holistic approach of systems ecology. However, systems ecology more actively considers external influences such as economics that usually fall outside the bounds of ecosystem ecology. Whereas ecosystem ecology can be defined as the scientific study of ecosystems, systems ecology is more of a particular approach to the study of ecological systems and phenomena that interact with these systems.
Industrial ecology
Industrial ecology is the study of the shift of industrial processes from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to closed loop systems in which wastes become inputs for new processes.
See also
Agroecology
Earth system science
Ecosystem ecology
Ecological literacy
Emergy
Energy flow (ecology)
Energy Systems Language
Holism in science
Holon (philosophy)
Holistic management
Landscape ecology
Antireductionism
Biosemiotics
Ecosemiotics
MuSIASEM
References
Bibliography
Gregory Bateson, Steps to an Ecology of Mind, 2000.
Kenneth Edmund Ferguson, Systems Analysis in Ecology, WATT, 1966, 276 pp.
Efraim Halfon, Theoretical Systems Ecology: Advances and Case Studies, Academic Press, 1979.
J. W. Haefner, Modeling Biological Systems: Principles and Applications, London., UK, Chapman and Hall 1996, 473 pp.
Richard F Johnston, Peter W Frank, Charles Duncan Michener, Annual Review of Ecology and Systematics, 1976, 307 pp.
Jorgensen, Sven E., "Introduction to Systems Ecology", CRC Press, 2012.
R.L. Kitching, Systems ecology, University of Queensland Press, 1983.
Howard T. Odum, Systems Ecology: An Introduction, Wiley-Interscience, 1983.
Howard T. Odum, Ecological and General Systems: An Introduction to Systems Ecology. University Press of Colorado, Niwot, CO, 1994.
Friedrich Recknagel, Applied Systems Ecology: Approach and Case Studies in Aquatic Ecology, 1989.
James. Sanderson & Larry D. Harris, Landscape Ecology: A Top-down Approach, 2000, 246 pp.
Sheldon Smith, Human Systems Ecology: Studies in the Integration of Political Economy, 1989.
Shugart, H.H., O’Neil, R.V. (Eds.) Systems Ecology, Dowden, Hutchinson & Ross, Inc., 1979.
Van Dyne, George M., Ecosystems, Systems Ecology, and Systems Ecologists, ORNL-3975. Oak Ridge National Laboratory, Oak Ridge, TN, pp. 1–40, 1966.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 1, Academic Press, 1971.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 2, Academic Press, 1972.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 3, Academic Press, 1975.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 4, Academic Press, 1976.
External links
Organisations
Systems Ecology Department at the Stockholm University.
Systems Ecology Department at the University of Amsterdam.
Systems ecology Lab at SUNY-ESF.
Systems Ecology program at the University of Florida
Systems Ecology program at the University of Montana
Terrestrial Systems Ecology of ETH Zürich.
Environmental science
Environmental social science
Formal sciences
Ecology
Anagenesis
Anagenesis is the gradual evolution of a species that continues to exist as an interbreeding population. This contrasts with cladogenesis, which occurs when there is branching or splitting, leading to two or more lineages and resulting in separate species. Anagenesis does not always lead to the formation of a new species from an ancestral species. When speciation does occur as different lineages branch off and cease to interbreed, a core group may continue to be defined as the original species. The evolution of this group, without extinction or species selection, is anagenesis.
Hypotheses
One hypothesis is that during the speciation event in anagenetic evolution, the original populations increase quickly, and then accumulate genetic variation over long periods of time through mutation and recombination in a stable environment. Other factors such as selection or genetic drift will have such a significant effect on genetic material and physical traits that a species can be acknowledged as being different from the previous one.
Development
An alternative definition offered for anagenesis involves progeny relationships between designated taxa and one or more denominated taxa along a branch of the evolutionary tree. Taxa must be within the species or genus and will help identify possible ancestors. When looking at evolutionary descent, two mechanisms are at play. The first process is change in genetic information. This means that over time there is enough of a difference in their genomes, and in the way that species' genes interact with each other during the developmental stage, that anagenesis can be viewed as the combined effect of sexual selection, natural selection, and genetic drift on an evolving species over time. The second process, speciation, is closely associated with cladogenesis. Speciation includes the actual separation of lineages, into two or more new species, from one specified species of origin. Cladogenesis can be seen as a similar hypothesis to anagenesis, with the addition of speciation to its mechanisms. Species-level diversity can be achieved through anagenesis.
Anagenesis suggests that evolutionary changes can occur in a species over time to a sufficient degree that later organisms may be considered a different species, especially in the absence of fossils documenting the gradual transition from one to another. This is in contrast to cladogenesis—or speciation in a sense—in which a population is split into two or more reproductively isolated groups and these groups accumulate sufficient differences to become distinct species. The punctuated equilibria hypothesis suggests that anagenesis is rare and that the rate of evolution is most rapid immediately after a split which will lead to cladogenesis, but does not completely rule out anagenesis. Distinguishing between anagenesis and cladogenesis is particularly relevant in the fossil record, where limited fossil preservation in time and space makes it difficult to distinguish between anagenesis, cladogenesis where one species replaces the other, or simple migration patterns.
Recent evolutionary studies are looking at anagenesis and cladogenesis for possible answers in developing the hominin phylogenetic tree to understand morphological diversity and the origins of Australopithecus anamensis, and this case could possibly show anagenesis in the fossil record.
When enough mutations have occurred and become stable in a population so that it is significantly differentiated from an ancestral population, a new species name may be assigned. A series of such species is collectively known as an evolutionary lineage. The various species along an evolutionary lineage are chronospecies. If the ancestral population of a chronospecies does not go extinct, then this is cladogenesis, and the ancestral population represents a paraphyletic species or paraspecies, being an evolutionary grade.
In humans
The modern human origins debate caused researchers to look further for answers. Researchers were curious to know whether present-day humans originated in Africa, or whether they somehow, through anagenesis, evolved from a single archaic species that lived across Afro-Eurasia. Milford H. Wolpoff is a paleoanthropologist whose work, studying human fossil records, explored anagenesis as a hypothesis for hominin evolution. When looking at anagenesis in hominids, Wolpoff describes it in terms of the 'single-species hypothesis', which is characterized by thinking of the impact that culture has on a species, as an adaptive system, and as an explanation for the conditions humans tend to live in, based on environmental conditions, or the ecological niche. When judging the effect that culture has as an adaptive system, scientists must first look at modern Homo sapiens. Wolpoff contended that the ecological niche of past, extinct hominids is distinct within the line of origin. Examining early Pliocene and late Miocene findings helps to determine the corresponding importance of anagenesis vs. cladogenesis during the period of morphological differences. These findings suggest that the human and chimpanzee branches once diverged from each other. The hominin fossils date back as far as 5 to 7 million years ago (Mya). With collected data, only one or two early hominins were found to be relatively close to the Plio-Pleistocene range. Once more research was done, specifically with the fossils of A. anamensis and A. afarensis, researchers were able to justify that these two hominin species were linked ancestrally. However, looking at data collected by William H. Kimbel and other researchers, they viewed the history of early hominin fossils and concluded that actual macroevolutionary change via anagenesis was scarce.
Phylogeny
A DEM (dynamic evolutionary map) is a different way to track ancestors and relationships between organisms. In phylogenetic trees, the pattern of branching, and how far a branch grows after a species lineage has split and evolved, reflect cladogenesis and anagenesis. In a DEM, by contrast, dots depict the movement of the different species: anagenesis is seen as the movement of a dot across the map, whereas cladogenesis is seen as the separation of a dot and the subsequent movement of the resulting dots.
Criticism
Anagenesis may also be referred to as gradual evolution. Controversy arises among taxonomists as to when the differences between populations are significant enough to warrant a new species classification. The distinction of speciation and lineage evolution as anagenesis or cladogenesis can itself be controversial, and some academics question the necessity of the terms altogether.
The philosopher of science Marc Ereshefsky argues that paraphyletic taxa are the result of anagenesis. The lineage leading to birds has diverged significantly from lizards and crocodiles, allowing evolutionary taxonomists to classify birds separately from lizards and crocodiles, which are grouped as reptiles.
Applications
Regarding social evolution, it has been suggested that social anagenesis/aromorphosis be viewed as a universal or widely diffused social innovation that raises the complexity, adaptability, integrity, and interconnectedness of social systems.
See also
Multigenomic organism
References
External links
Diagram contrasting Anagenesis and Cladogenesis from the University of Newfoundland
Evolutionary biology concepts
Evolutionary biology terminology
Rate of evolution
Speciation
Disruptive selection
In evolutionary biology, disruptive selection, also called diversifying selection, describes changes in population genetics in which extreme values for a trait are favored over intermediate values. In this case, the variance of the trait increases and the population is divided into two distinct groups. In this way, more individuals acquire peripheral character values at both ends of the distribution curve.
Overview
Natural selection is known to be one of the most important biological processes behind evolution. There are many variations of traits, and some cause greater or lesser reproductive success of the individual. The effect of selection is to promote certain alleles, traits, and individuals that have a higher chance to survive and reproduce in their specific environment. Since the environment has a carrying capacity, nature acts through this mode of selection on individuals to let only the most fit offspring survive and reproduce to their full potential. The more advantageous the trait is, the more common it will become in the population. Disruptive selection is a specific type of natural selection that actively selects against the intermediate in a population, favoring both extremes of the spectrum.
Disruptive selection is often inferred to lead to sympatric speciation through a phyletic gradualism mode of evolution. Disruptive selection can be caused or influenced by multiple factors and can also have multiple outcomes in addition to speciation. Individuals within the same environment can develop a preference for extremes of a trait, against the intermediate. Selection can act on divergent body morphologies used in accessing food, such as beak and dental structure. This is often more prevalent in environments where there is not a wide clinal range of resources, causing heterozygote disadvantage or selection favoring homozygotes.
Niche partitioning allows for selection of differential patterns of resource usage, which can drive speciation. In contrast, niche conservatism pulls individuals toward ancestral ecological traits in an evolutionary tug-of-war. Nature also tends to 'jump on the bandwagon' when something beneficial is found, and this can lead to the opposite outcome, with disruptive selection eventually selecting against the average: when everyone starts taking advantage of a resource, it becomes depleted and the extremes are favored. Furthermore, gradualism is a more realistic view when looking at speciation compared to punctuated equilibrium.
Disruptive selection can initially intensify divergence rapidly, because it acts on alleles that already exist; it does not usually create new alleles by mutation, which takes a long time. Complete reproductive isolation usually does not arise until many generations have passed, but behavioral or morphological differences generally keep the groups from interbreeding in the meantime. Furthermore, hybrids generally have reduced fitness, which promotes reproductive isolation.
Example
Suppose there is a population of rabbits. The colour of the rabbits is governed by a single gene with two incompletely dominant alleles: black fur, represented by "B", and white fur, represented by "b". A rabbit in this population with a genotype of "BB" would have a phenotype of black fur, a genotype of "Bb" grey fur (a display of both black and white), and a genotype of "bb" white fur.
If this population of rabbits occurred in an environment that had areas of black rocks as well as areas of white rocks, the rabbits with black fur would be able to hide from predators amongst the black rocks, and the rabbits with white fur likewise amongst the white rocks. The rabbits with grey fur, however, would stand out in all areas of the habitat, and would thereby suffer greater predation.
As a consequence of this type of selective pressure, our hypothetical rabbit population would be disruptively selected for extreme values of the fur colour trait: white or black, but not grey. This is an example of underdominance (heterozygote disadvantage) leading to disruptive selection.
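The rabbit example is the textbook underdominance case, and its dynamics can be sketched with the standard one-locus selection recurrence. The fitness values below are arbitrary, with the grey heterozygote penalized; p is the frequency of the "B" allele:

# One-locus, two-allele selection with heterozygote disadvantage:
#   p' = (p^2*wBB + p*q*wBb) / wbar,
#   wbar = p^2*wBB + 2*p*q*wBb + q^2*wbb
wBB, wBb, wbb = 1.0, 0.6, 1.0   # grey (Bb) rabbits suffer heavier predation

def next_p(p):
    q = 1.0 - p
    wbar = p * p * wBB + 2 * p * q * wBb + q * q * wbb
    return (p * p * wBB + p * q * wBb) / wbar

for p0 in (0.45, 0.55):          # start just below / just above p = 0.5
    p = p0
    for _ in range(100):
        p = next_p(p)
    print(f"p0 = {p0}: after 100 generations, p = {p:.4f}")

With symmetric homozygote fitnesses, the midpoint p = 0.5 is an unstable equilibrium: populations starting slightly below it fix the white allele, and those starting slightly above it fix the black allele, mirroring the loss of grey rabbits in the example.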
Sympatric speciation
It is believed that disruptive selection is one of the main forces that drive sympatric speciation in natural populations. The pathways that lead from disruptive selection to sympatric speciation are seldom prone to deviation; such speciation is a domino effect that depends on the consistency of each distinct variable. These pathways are the result of disruptive selection in intraspecific competition; it may cause reproductive isolation, and finally culminate in sympatric speciation.
It is important to keep in mind that disruptive selection does not always have to be based on intraspecific competition, and that in other respects this type of natural selection is similar to the others. Where intraspecific competition is not the major factor, it can be discounted in assessing the operative aspects of the course of adaptation. For example, what may drive disruptive selection instead of intraspecific competition are polymorphisms that lead to reproductive isolation, and thence to speciation.
When disruptive selection is based on intraspecific competition, the resulting selection in turn promotes ecological niche diversification and polymorphisms. If multiple morphs (phenotypic forms) occupy different niches, such separation could be expected to promote reduced competition for resources. Disruptive selection is seen more often in high density populations rather than in low density populations because intraspecific competition tends to be more intense within higher density populations. This is because higher density populations often imply more competition for resources. The resulting competition drives polymorphisms to exploit different niches or changes in niches in order to avoid competition. If one morph has no need for resources used by another morph, then it is likely that neither would experience pressure to compete or interact, thereby supporting the persistence and possibly the intensification of the distinctness of the two morphs within the population. This theory does not necessarily have a lot of supporting evidence in natural populations, but it has been seen many times in experimental situations using existing populations. These experiments further support that, under the right situations (as described above), this theory could prove to be true in nature.
When intraspecific competition is not at work disruptive selection can still lead to sympatric speciation and it does this through maintaining polymorphisms. Once the polymorphisms are maintained in the population, if assortative mating is taking place, then this is one way that disruptive selection can lead in the direction of sympatric speciation. If different morphs have different mating preferences then assortative mating can occur, especially if the polymorphic trait is a "magic trait", meaning a trait that is under ecological selection and in turn has a side effect on reproductive behavior. In a situation where the polymorphic trait is not a magic trait then there has to be some kind of fitness penalty for those individuals who do not mate assortatively and a mechanism that causes assortative mating has to evolve in the population. For example, if a species of butterflies develops two kinds of wing patterns, crucial to mimicry purposes in their preferred habitat, then mating between two butterflies of different wing patterns leads to an unfavorable heterozygote. Therefore, butterflies will tend to mate with others of the same wing pattern promoting increased fitness, eventually eliminating the heterozygote altogether. This unfavorable heterozygote generates pressure for a mechanism that cause assortative mating which will then lead to reproductive isolation due to the production of post-mating barriers. It is actually fairly common to see sympatric speciation when disruptive selection is supporting two morphs, specifically when the phenotypic trait affects fitness rather than mate choice.
In both situations, one where intraspecific competition is at work and the other where it is not, if all these factors are in place, they will lead to reproductive isolation, which can lead to sympatric speciation.
Other outcomes
polymorphism
sexual dimorphism
phenotypic plasticity
Significance
Disruptive selection is of particular significance in the history of evolutionary study, as it is involved in one of evolution's "cardinal cases", namely the finch populations observed by Darwin in the Galápagos.
He observed that the species of finches were similar enough to ostensibly have been descended from a single species. However, they exhibited disruptive variation in beak size. This variation appeared to be adaptively related to the seed size available on the respective islands (big beaks for big seeds, small beaks for small seeds). Medium beaks had difficulty retrieving small seeds and were also not tough enough for the bigger seeds, and were hence maladaptive.
While it is true that disruptive selection can lead to speciation, this is not as quick or straightforward a process as other types of speciation or evolutionary change. This introduces the topic of gradualism, a slow but continuous accumulation of changes over long periods of time. This is largely because the results of disruptive selection are less stable than the results of directional selection (which favors individuals at only one end of the spectrum).
For example, let us take the mathematically straightforward yet biologically improbable case of the rabbits: Suppose directional selection were taking place. The field only has dark rocks in it, so the darker the rabbit, the more effectively it can hide from predators. Eventually there will be many black rabbits in the population (hence many "B" alleles) and fewer grey rabbits (each of which contributes 50% chromosomes with the "B" allele and 50% chromosomes with the "b" allele to the population). There will be few white rabbits (few contributors of chromosomes with the "b" allele). This could eventually lead to a situation in which chromosomes carrying the "b" allele die out, making black the only possible color for all subsequent rabbits. The reason for this is that there is nothing "boosting" the level of "b" chromosomes in the population. They can only go down, and eventually die out.
Consider now the case of disruptive selection. The result is equal numbers of black and white rabbits, and hence equal numbers of chromosomes with the "B" or "b" allele, still circulating in that population. Every time a white rabbit mates with a black one, only grey rabbits result. So, in order for the results to "click", there needs to be a force causing white rabbits to choose other white rabbits, and black rabbits to choose other black ones. In the case of the finches, this "force" was geographic/niche isolation. Without such a force, apparent cases of disruptive selection rarely run to completion, and the observed divergence is normally due to geographic isolation, directional selection, or stabilising selection instead.
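To make the contrast concrete, the one-locus rabbit model can be simulated numerically. The sketch below is a minimal illustration rather than anything from the literature: the genotype fitness values and starting allele frequencies are assumed purely for demonstration. It shows the "b" allele being lost under directional selection, while under disruptive selection (heterozygote disadvantage) a perfectly balanced population sits at an equilibrium that is unstable, so any small imbalance collapses the polymorphism unless some assortative-mating "force" intervenes.

# Minimal sketch of the rabbit model (hypothetical fitness values).
# Genotypes: BB = black, Bb = grey, bb = white; p = frequency of allele "B".

def next_p(p, w_BB, w_Bb, w_bb):
    """Allele frequency after one generation of selection with random mating."""
    q = 1.0 - p
    w_bar = p * p * w_BB + 2 * p * q * w_Bb + q * q * w_bb  # mean fitness
    return (p * p * w_BB + p * q * w_Bb) / w_bar

p_dir, p_bal, p_tip = 0.5, 0.5, 0.51
for _ in range(100):
    p_dir = next_p(p_dir, 1.0, 0.8, 0.6)  # directional: the darker, the fitter
    p_bal = next_p(p_bal, 1.0, 0.6, 1.0)  # disruptive: grey heterozygote penalised
    p_tip = next_p(p_tip, 1.0, 0.6, 1.0)  # disruptive, starting slightly off balance

print(p_dir)  # approaches 1.0: "b" is lost under directional selection
print(p_bal)  # stays at 0.5: both alleles persist, but only at the exact balance point
print(p_tip)  # approaches 1.0: a 1% imbalance erodes the polymorphism without assortative mating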
See also
Character displacement
Balancing selection
Directional selection
Negative selection (natural selection)
Stabilizing selection
Sympatric speciation
Fluctuating selection
Selection
References
Selection | 0.805265 | 0.981764 | 0.790581 |
Ecological pyramid | An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass at each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (e.g., a pyramid of biomass for a marine region) or take other shapes (a spindle-shaped pyramid).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
The energy content of biomass can be measured with a bomb calorimeter.
Pyramid of energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than at higher trophic levels. This allows organisms on the lower levels not only to maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalization is when portions of a food web are supported by inputs of resources from outside the local community. In small, forested streams, for example, the biomass of higher trophic levels is greater than could be supported by the local primary production.
Energy usually enters ecosystems from the Sun. The primary producers at the base of the pyramid use solar radiation to power photosynthesis, which produces food. However, most wavelengths in solar radiation cannot be used for photosynthesis, so they are reflected back into space or absorbed elsewhere and converted to heat. Only 1 to 2 percent of the energy from the sun is absorbed by photosynthetic processes and converted into food. When energy is transferred to higher trophic levels, on average only about 10% is used at each level to build biomass, becoming stored energy. The rest is expended in metabolic processes such as respiration, movement, and reproduction, or lost as heat.
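These percentages can be turned into a rough worked example. The sketch below is a minimal illustration only: the solar input is an assumed round figure, and the 1% capture and 10% transfer efficiencies are simply taken from the ranges quoted above.

# Rough energy-flow arithmetic for a pyramid of energy (illustrative numbers).
solar_input = 1_000_000       # kcal per m^2 per year reaching the producers (assumed)
capture_efficiency = 0.01     # ~1% of sunlight fixed by photosynthesis (1-2% per the text)
transfer_efficiency = 0.10    # ~10% of each level's energy becomes new biomass

energy = solar_input * capture_efficiency
for level in ("producers", "herbivores", "carnivores", "top carnivores"):
    print(f"{level:>14}: {energy:10.0f} kcal/m^2/yr")
    energy *= transfer_efficiency  # ~90% is lost before reaching the next level

Under these assumptions the figures fall from 10,000 kcal per square meter per year at the producers to just 10 at the top carnivores, which is exactly the steeply tapering shape the pyramid depicts.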
Advantages of the pyramid of energy as a representation:
It takes account of the rate of production over a period of time.
Two species of comparable biomass may have very different life spans. Thus, a direct comparison of their total biomasses is misleading, but their productivity is directly comparable.
The relative energy chain within an ecosystem can be compared using pyramids of energy; also different ecosystems can be compared.
There are no inverted pyramids.
The input of solar energy can be added.
Disadvantages of the pyramid of energy as a representation:
The rate of biomass production of an organism is required, which involves measuring growth and reproduction through time.
There is still the difficulty of assigning the organisms to a specific trophic level. As well as the organisms in the food chains there is the problem of assigning the decomposers and detritivores to a particular level.
Pyramid of biomass
A pyramid of biomass shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. It is a graphical representation of biomass (total amount of living or organic matter in an ecosystem) present in unit area in different trophic levels. Typical units are grams per square meter, or calories per square meter.
The pyramid of biomass may be "inverted". For example, in a pond ecosystem, the standing crop of phytoplankton, the major producers, at any given point will be lower than the mass of the heterotrophs, such as fish and insects. This is explained by the fact that phytoplankton reproduce very quickly but have much shorter individual lives: their rate of production (turnover) is high even though the biomass standing at any one moment is low.
Pyramid of numbers
A pyramid of numbers shows graphically the population, or abundance, in terms of the number of individual organisms involved at each level in a food chain. This shows the number of organisms in each trophic level without any consideration for their individual sizes or biomass. The pyramid is not necessarily upright. For example, it will be inverted if beetles are feeding from the output of forest trees, or parasites are feeding on large host animals.
History
The concept of a pyramid of numbers ("Eltonian pyramid") was developed by Charles Elton (1927). Later, it would also be expressed in terms of biomass by Bodenheimer (1938). The idea of the pyramid of productivity or energy relies on the works of G. Evelyn Hutchinson and Raymond Lindeman (1942).
See also
Trophic cascade
References
Bibliography
Odum, E.P. 1971. Fundamentals of Ecology. Third Edition. W.B. Saunders Company, Philadelphia.
External links
Food Chains
Ecology
Food chains | 0.792855 | 0.996617 | 0.790173 |
Syntrophy | In biology, syntrophy, syntrophism, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the cooperative interaction between at least two microbial species to degrade a single substrate. This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients, growth factors, or substrates provided by the other(s).
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that a syntrophic relationship is based primarily on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments, and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tracts of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation of these compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer, which can be carried out in three ways: interspecies hydrogen transfer, interspecies formate transfer, and direct interspecies electron transfer. Reverse electron transport is prominent in syntrophic metabolism.
The metabolic reactions and the energies involved in syntrophic degradation with H2 consumption are illustrated below.
A classical syntrophic relationship can be illustrated by the activity of ‘Methanobacillus omelianskii’. It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon, "organism M.o.H", and a Gram-negative bacterium, "Organism S", which together oxidize ethanol to acetate and methane via interspecies hydrogen transfer. Organism S is an obligate anaerobic bacterium that uses ethanol as an electron donor, whereas M.o.H is a methanogen that oxidizes hydrogen gas to produce methane.
Organism S: 2 Ethanol + 2 H2O → 2 Acetate− + 2 H+ + 4 H2 (ΔG°' = +9.6 kJ per mol ethanol)
Strain M.o.H.: 4 H2 + CO2 → Methane + 2 H2O (ΔG°' = -131 kJ per reaction)
Co-culture: 2 Ethanol + CO2 → 2 Acetate− + 2 H+ + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by Organism S is made possible by the methanogen M.o.H, which consumes the hydrogen produced by Organism S, turning the positive Gibbs free energy change of ethanol oxidation into a negative one. This situation favors growth of Organism S and also provides energy for the methanogen by supplying it with hydrogen. Further down the line, acetate accumulation is also prevented by a similar syntrophic relationship. Syntrophic degradation of substrates like butyrate and benzoate can also happen without hydrogen consumption.
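The hydrogen dependence of this coupling can be made quantitative using ΔG = ΔG°' + RT ln Q. The sketch below is a minimal illustration under assumed conditions: 25 °C, all activities except H2 set to 1, and the +9.6 kJ per mol ethanol standard value used above. It shows how methanogenic H2 scavenging tips ethanol oxidation from endergonic to exergonic.

import math

# Actual free energy of ethanol oxidation per mol ethanol,
#   Ethanol + H2O -> Acetate- + H+ + 2 H2,
# as a function of hydrogen partial pressure, holding all other
# activities at 1 (a simplifying assumption):
#   dG = dG0' + RT * ln(pH2^2)
R = 8.314e-3   # kJ / (mol K)
T = 298.15     # K
dG0 = 9.6      # kJ per mol ethanol (standard value, as in the text)

for p_h2 in (1.0, 1e-2, 1e-4, 1e-5):  # atm; methanogens hold pH2 near 1e-4 or below
    dG = dG0 + R * T * math.log(p_h2 ** 2)
    print(f"pH2 = {p_h2:.0e} atm  ->  dG = {dG:+7.1f} kJ/mol ethanol")

At a hydrogen partial pressure of 10^-4 atm the reaction comes out near -36 kJ per mol ethanol, comfortably exergonic, which is why Organism S grows only in the presence of a hydrogen-consuming partner.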
An example of propionate and butyrate degradation with interspecies formate transfer, carried out by the mutualistic system of Syntrophomonas wolfei and Methanobacterium formicicum:
Propionate + 2 H2O + 2 CO2 → Acetate− + 3 Formate− + 3 H+ (ΔG°' = +65.3 kJ/mol)
Butyrate + 2 H2O + 2 CO2 → 2 Acetate− + 3 Formate− + 3 H+ (ΔG°' = +38.5 kJ/mol)
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H2 or formate, has been reported in co-culture systems of Geobacter metallireducens with Methanosaeta or Methanosarcina.
Examples
In ruminants
The defining feature of ruminants, such as cows and goats, is a stomach called a rumen. The rumen contains billions of microbes, many of which are syntrophic. Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short chain fatty acids, and hydrogen. The accumulating hydrogen inhibits the microbe's ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. In addition, fermentative bacteria gain maximum energy yield when protons are used as electron acceptor with concurrent H2 production. Hydrogen-consuming organisms include methanogens, sulfate-reducers, acetogens, and others.
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched-chain and aromatic fatty acids, cannot directly be used in methanogenesis. In acetogenesis, these products are oxidized to acetate and H2 by obligate proton-reducing bacteria in syntrophic relationship with methanogenic archaea, as a low H2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0).
Biodegradation of pollutants
Syntrophic microbial food webs play an integral role in bioremediation, especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological importance and can be effectively remediated through syntrophic degradation by complete mineralization of alkanes and other aliphatic hydrocarbon chains. The hydrocarbons of the oil are broken down after activation by fumarate, a chemical compound that is regenerated by other microorganisms. Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling.
Syntrophic microbial communities are key players in the breakdown of aromatic compounds, which are common pollutants. The degradation of aromatic benzoate to methane produces intermediate compounds such as formate, acetate, and H2, and the buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens, which makes the degradation process thermodynamically favorable.
Degradation of amino acids
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through syntrophy. Microbes growing poorly on the amino acid substrates alanine, aspartate, serine, leucine, valine, and glycine can have their rate of growth dramatically increased by syntrophic H2 scavengers. These scavengers, like Methanospirillum and Acetobacterium, metabolize the H2 waste produced during amino acid breakdown, preventing a toxic build-up. Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate; species like Desulfovibrio employ this method. Amino acid-fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus, and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway.
Anaerobic digestion
Effective syntrophic cooperation between propionate-oxidizing bacteria, acetate-oxidizing bacteria, and H2/acetate-consuming methanogens is necessary to successfully carry out anaerobic digestion and produce biomethane.
Examples of syntrophic organisms
Syntrophomonas wolfei
Syntrophobacter fumaroxidans
Pelotomaculum thermopropionicum
Syntrophus aciditrophicus
Syntrophus buswellii
Syntrophus gentianae
References
Biological interactions
Food chains | 0.812208 | 0.972776 | 0.790097 |
Acclimatization | Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do.
Names
The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimatation is less commonly encountered, and fewer dictionaries enter it.
Methods
Biochemical
In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes, making them more fluid in cold temperatures and less fluid in warm temperatures, for instance by altering lipid composition (a higher proportion of unsaturated fatty acids increases fluidity) and the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins, so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments.
Morphological
Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species).
The theory
While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes).
Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data.
The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation.
Examples
Plants
Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate.
In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux.
Animals
Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity).
Humans
The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected.
Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake.
See also
Acclimatisation society
Beneficial acclimation hypothesis
Heat index
Introduced species
Phenotypic plasticity
Wind chill
References
Physiology
Ecological processes
Climate
Biology terminology | 0.794487 | 0.994293 | 0.789952 |
Soil biology | Soil biology is the study of microbial and faunal activity and ecology in soil.
Soil life, soil biota, soil fauna, or edaphon is a collective term that encompasses all organisms that spend a significant portion of their life cycle within a soil profile, or at the soil-litter interface.
These organisms include earthworms, nematodes, protozoa, fungi, bacteria, different arthropods, as well as some reptiles (such as snakes), and species of burrowing mammals like gophers, moles and prairie dogs. Soil biology plays a vital role in determining many soil characteristics. The decomposition of organic matter by soil organisms has an immense influence on soil fertility, plant growth, soil structure, and carbon storage. As a relatively new science, much remains unknown about soil biology and its effect on soil ecosystems.
Overview
The soil is home to a large proportion of the world's biodiversity. The links between soil organisms and soil functions are complex. The interconnectedness and complexity of this soil 'food web' means any appraisal of soil function must necessarily take into account interactions with the living communities that exist within the soil. We know that soil organisms break down organic matter, making nutrients available for uptake by plants and other organisms. The nutrients stored in the bodies of soil organisms prevent nutrient loss by leaching. Microbial exudates act to maintain soil structure, and earthworms are important in bioturbation. However, we find that we do not understand critical aspects about how these populations function and interact. The discovery of glomalin in 1995 indicates that we lack the knowledge to correctly answer some of the most basic questions about the biogeochemical cycle in soils. There is much work ahead to gain a better understanding of the ecological role of soil biological components in the biosphere.
In balanced soil, plants grow in an active and steady environment. The mineral content of the soil and its healthy structure are important for their well-being, but it is the life in the earth that powers its cycles and provides its fertility. Without the activities of soil organisms, organic materials would accumulate and litter the soil surface, and there would be no food for plants.
The soil biota includes:
Megafauna: size range – 20 mm upward, e.g. moles, rabbits, and rodents.
Macrofauna: size range – 2 to 20 mm, e.g. woodlice, earthworms, beetles, centipedes, slugs, snails, ants, and harvestmen.
Mesofauna: size range – 100 micrometres to 2 mm, e.g. tardigrades, mites and springtails.
Microfauna and Microflora: size range – 1 to 100 micrometres, e.g. yeasts, bacteria (commonly actinobacteria), fungi, protozoa, roundworms, and rotifers.
Of these, bacteria and fungi play key roles in maintaining a healthy soil. They act as decomposers that break down organic materials to produce detritus and other breakdown products. Soil detritivores, like earthworms, ingest detritus and decompose it. Saprotrophs, well represented by fungi and bacteria, extract soluble nutrients from the detritus.
Ants (macrofauna) help in a similar way by breaking down material, and their constant movement through the soil in large numbers also mixes it; rodents and wood-eating organisms likewise help make the soil more absorbent.
Scope
Soil biology involves work in the following areas:
Modelling of biological processes and population dynamics
Soil biology, physics and chemistry: influence of physicochemical parameters and surface properties on biological processes and population behavior
Population biology and molecular ecology: methodological development and contribution to study microbial and faunal populations; diversity and population dynamics; genetic transfers, influence of environmental factors
Community ecology and functioning processes: interactions between organisms and mineral or organic compounds; involvement of such interactions in soil pathogenicity; transformation of mineral and organic compounds, cycling of elements; soil structuration
Complementary disciplinary approaches are necessarily utilized which involve molecular biology, genetics, ecophysiology, biogeography, ecology, soil processes, organic matter, nutrient dynamics and landscape ecology.
Bacteria
Bacteria are single-cell organisms and the most numerous denizens of the soil, with populations ranging from 100 million to 3 billion in a gram. They are capable of very rapid reproduction by binary fission (dividing into two) in favourable conditions. One bacterium is capable of producing 16 million more in just 24 hours. Most soil bacteria live close to plant roots and are often referred to as rhizobacteria. Bacteria live in soil water, including the film of moisture surrounding soil particles, and some are able to swim by means of flagella. The majority of the beneficial soil-dwelling bacteria need oxygen (and are thus termed aerobic bacteria), whilst those that do not require air are referred to as anaerobic, and tend to cause putrefaction of dead organic matter. Aerobic bacteria are most active in a soil that is moist (but not saturated, as this would deprive them of the air they require), has a neutral pH, and offers plenty of food (carbohydrates and micronutrients from organic matter). Hostile conditions will not completely kill bacteria; rather, the bacteria will stop growing and enter a dormant stage, and those individuals with pro-adaptive mutations may compete better in the new conditions. Some Gram-positive bacteria produce spores in order to wait for more favourable circumstances, and Gram-negative bacteria enter a "nonculturable" stage. Bacteria are also colonized by persistent viral agents (bacteriophages) that can influence the arrangement of genes in their bacterial hosts.
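As a check on the figure of 16 million offspring in 24 hours: assuming a favourable doubling time of one hour (an illustrative value; many soil bacteria are slower, while some bacteria in laboratory culture can double every 20 to 30 minutes), exponential doubling gives

N(t) = N0 × 2^(t / td), so N(24 h) = 1 × 2^24 ≈ 16.8 million cells

descended from a single starting bacterium.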
From the organic gardener's point of view, the important roles that bacteria play are:
Nitrification
Nitrification is a vital part of the nitrogen cycle, wherein certain bacteria (which manufacture their own carbohydrate supply without using the process of photosynthesis) are able to transform nitrogen in the form of ammonium, which is produced by the decomposition of proteins, into nitrates, which are available to growing plants, and once again converted to proteins.
Nitrogen fixation
In another part of the cycle, the process of nitrogen fixation constantly puts additional nitrogen into biological circulation. This is carried out by free-living nitrogen-fixing bacteria in the soil or water such as Azotobacter, or by those that live in close symbiosis with leguminous plants, such as rhizobia. These bacteria form colonies in nodules they create on the roots of peas, beans, and related species. These are able to convert nitrogen from the atmosphere into nitrogen-containing organic substances.
Denitrification
While nitrogen fixation converts nitrogen from the atmosphere into organic compounds, a series of processes called denitrification returns an approximately equal amount of nitrogen to the atmosphere. Denitrifying bacteria tend to be anaerobes, or facultative anaerobes (able to switch between oxygen-dependent and oxygen-independent metabolism), including Achromobacter and Pseudomonas. Denitrification, which proceeds under oxygen-free conditions, converts nitrates and nitrites in soil into nitrogen gas or into gaseous compounds such as nitrous oxide or nitric oxide. In excess, denitrification can lead to overall losses of available soil nitrogen and subsequent loss of soil fertility. However, fixed nitrogen may circulate many times between organisms and the soil before denitrification returns it to the atmosphere.
Actinomycetota
Actinomycetota are critical in the decomposition of organic matter and in humus formation. They specialize in breaking down cellulose and lignin along with the tough chitin found on the exoskeletons of insects. Their presence is responsible for the sweet "earthy" aroma associated with a good healthy soil. They require plenty of air and a pH between 6.0 and 7.5, but are more tolerant of dry conditions than most other bacteria and fungi.
Fungi
A gram of garden soil can contain around one million fungi, such as yeasts and moulds. Fungi have no chlorophyll, and are not able to photosynthesise. They cannot use atmospheric carbon dioxide as a source of carbon, therefore they are chemo-heterotrophic, meaning that, like animals, they require a chemical source of energy rather than being able to use light as an energy source, as well as organic substrates to get carbon for growth and development.
Many fungi are parasitic, often causing disease to their living host plant, although some have beneficial relationships with living plants, as illustrated below. In terms of soil and humus creation, the most important fungi tend to be saprotrophic; that is, they live on dead or decaying organic matter, thus breaking it down and converting it to forms that are available to the higher plants. A succession of fungi species will colonise the dead matter, beginning with those that use sugars and starches, which are succeeded by those that are able to break down cellulose and lignins.
Fungi spread underground by sending long thin threads known as mycelium throughout the soil; these threads can be observed throughout many soils and compost heaps. From the mycelia the fungus is able to throw up its fruiting bodies, the visible part above the soil (e.g., mushrooms, toadstools, and puffballs), which may contain millions of spores. When the fruiting body bursts, these spores are dispersed through the air to settle in fresh environments, and are able to lie dormant for years until the right conditions for their activation arise, or the right food is made available.
Mycorrhizae
Those fungi that are able to live symbiotically with living plants, creating a relationship that is beneficial to both, are known as mycorrhizae (from myco meaning fungal and rhiza meaning root). Plant root hairs are invaded by the mycelia of the mycorrhiza, which lives partly in the soil and partly in the root, and may either cover the length of the root hair as a sheath or be concentrated around its tip. The mycorrhiza obtains the carbohydrates that it requires from the root, in return providing the plant with nutrients, including nitrogen and moisture. Later the plant roots will also absorb the mycelium into their own tissues.
Beneficial mycorrhizal associations are to be found in many of our edible and flowering crops. Shewell Cooper suggests that these include at least 80% of the Brassica and Solanum families (including tomatoes and potatoes), as well as the majority of tree species, especially in forest and woodlands. Here the mycorrhizae create a fine underground mesh that extends greatly beyond the limits of the tree's roots, greatly increasing their feeding range and actually causing neighbouring trees to become physically interconnected. The benefits of mycorrhizal relations to their plant partners are not limited to nutrients, but can be essential for plant reproduction. In situations where little light is able to reach the forest floor, such as the North American pine forests, a young seedling cannot obtain sufficient light to photosynthesise for itself and will not grow properly in a sterile soil. But, if the ground is underlain by a mycorrhizal mat, then the developing seedling will throw down roots that can link with the fungal threads and through them obtain the nutrients it needs, often indirectly obtained from its parents or neighbouring trees.
David Attenborough points out the plant, fungus, and animal relationship that creates a "three-way harmonious trio" to be found in forest ecosystems, wherein the plant/fungi symbiosis is enhanced by animals such as the wild boar, deer, mice, or flying squirrel, which feed upon the fungi's fruiting bodies, including truffles, and cause their further spread (Private Life Of Plants, 1995). A greater understanding of the complex relationships that pervade natural systems is one of the major justifications for the organic gardener's refraining from the use of artificial chemicals and the damage these might cause.
Recent research has shown that arbuscular mycorrhizal fungi produce glomalin, a protein that binds soil particles and stores both carbon and nitrogen. These glomalin-related soil proteins are an important part of soil organic matter.
Invertebrates
Soil fauna affect soil formation and soil organic matter dynamics on many spatiotemporal scales. Earthworms, ants, and termites mix the soil as they burrow, significantly affecting soil formation. Earthworms ingest soil particles and organic residues, enhancing the availability of plant nutrients in the material that passes through and out of their bodies. By aerating and stirring the soil, and by increasing the stability of soil aggregates, these organisms help to assure the ready infiltration of water. These organisms also help moderate soil pH.
Ants and termites are often referred to as "Soil engineers" because, when they create their nests, there are several chemical and physical changes made to the soil. Among these changes are increasing the presence of the most essential elements like carbon, nitrogen, and phosphorus—elements needed for plant growth. They also can gather soil particles from differing depths of soil and deposit them in other places, leading to the mixing of soil so it is richer with nutrients and other elements.
Vertebrates
The soil is also important to many mammals. Gophers, moles, prairie dogs, and other burrowing animals rely on the soil for protection and food. The animals give back to the soil as well: their burrowing allows rain, snow, and meltwater to enter the soil rather than running off and causing erosion.
Table of soil life
This table includes some familiar types of soil life, consistent with prevalent taxonomy as used in the linked Wikipedia articles.
See also
Agricultural soil science
Agroecology
Biogeochemical cycle
Compost
Nitrification
Nitrogen cycle
Potting soil
Soil food web
Soil microbiology
Soil science
Notes
References
Bibliography
Alexander, 1977, Introduction to Soil Microbiology, 2nd edition, John Wiley
Alexander, 1994, Biodegradation and Bioremediation, Academic Press
Bardgett, R.D., 2005, The Biology of Soil: A Community and Ecosystem Approach, Oxford University Press
Burges, A., and Raw, F., 1967, Soil Biology: Academic Press
Coleman D.C. et al., 2004, Fundamentals of Soil Ecology, 2nd edition, Academic Press
Coyne, 1999, Soil Microbiology: An Exploratory Approach, Delmar
Doran, J.W., D.C. Coleman, D.F. Bezdicek and B.A. Stewart. 1994. Defining soil quality for a sustainable environment. Soil Science Society of America Special Publication Number 35, ASA, Madison Wis.
Paul, P.A. and F.E. Clark. 1996, Soil Microbiology and Biochemistry, 2nd edition, Academic Press
Richards, 1987, The Microbiology of Terrestrial Ecosystems, Longman Scientific & Technical
Sylvia et al., 1998, Principles and Applications of Soil Microbiology, Prentice Hall
Soil and Water Conservation Society, 2000, Soil Biology Primer.
Tate, 2000, Soil Microbiology, 2nd edition, John Wiley
van Elsas et al., 1997, Modern Soil Microbiology, Marcel Dekker
Wood, 1995, Environmental Soil Biology, 2nd edition, Blackie A & P
Vats, Rajeev & Aggarwal, Sanjeev, 2019, Impact of termite activity and its effect on soil composition
External links
Michigan State University – Soil Ecology and Management: Soil Biology
New South Wales – Soil Biology
University of Minnesota – Soil Biology and Soil Management
Soil-Net.com A free schools-age educational site, featuring much on soil biology and teaching about soil and its importance.
Why organic fertilizers are a good choice for healthy soil
Effects of transgenic zeaxanthin potatoes on soil quality Biosafety research project funded by the BMBF
Phospholipid fatty-acid analysis protocol A method for analyzing the soil microbial community (pdf file)
USDA-NRCS – Soil Biology Primer
Soil science | 0.806224 | 0.979174 | 0.789434 |
Cell biology | Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer, and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
History
Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in Micrographia) after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead and gave no indication of the actual components of a living cell. A few years later, in 1674, Anton van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory, which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. 19 years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology; they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology.
Techniques
Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved: advancements in microscopy, techniques, and technology have allowed scientists to better understand the structure and function of cells. Many techniques commonly used to study cell biology are listed below:
Cell culture: Utilizes rapidly growing cells on media which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins).
Fluorescence microscopy: Fluorescent markers such as GFP, are used to label a specific component of the cell. Afterwards, a certain light wavelength is used to excite the fluorescent marker which can then be visualized.
Phase-contrast microscopy: Uses the optical aspect of light to represent the solid, liquid, and gas-phase changes as brightness differences.
Confocal microscopy: Combines fluorescence microscopy with optical sectioning, focusing light on one plane at a time and stacking the captured sections to form a 3-D image.
Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which will be deflected upon interaction with metal. This ultimately forms an image of the components being studied.
Cytometry: Cells are passed through an instrument that uses a light beam; the way each cell interacts with the beam reflects different aspects of the cell, so cells can be separated by size and content. Cells may also be tagged with GFP fluorescence and separated that way as well.
Cell fractionation: This process requires breaking up the cell using high temperature or sonication, followed by centrifugation to separate the parts of the cell, allowing them to be studied separately.
Cell types
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista.
Prokaryotes reproduce through binary fission. Bacteria, the most prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. Many processes occurring in prokaryotic cells allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility factor allows a bacterium to possess a pilus through which it can transmit DNA to another bacterium that lacks the F factor, permitting the transfer of resistance genes and allowing the recipient to survive in certain environments.
Structure and function
Structure of eukaryotic cells
Eukaryotic cells are composed of the following organelles:
Nucleus: The nucleus of the cell functions as the genome and genetic information storage for the cell, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus. This is also the site for replication of DNA as well as transcription of DNA to RNA. Afterwards, the RNA is modified and transported out to the cytosol to be translated to protein.
Nucleolus: This structure is within the nucleus, usually dense and spherical. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly.
Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell and a cell's function determines the size and structure of the ER.
Mitochondria: Commonly known as the powerhouse of the cell, the mitochondrion is a double-membrane-bound organelle. It functions in the production of energy, or ATP, within the cell. Specifically, this is where the Krebs (TCA) cycle for the production of NADH and FADH2 occurs. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP.
Golgi apparatus: This functions to further process, package, and secrete proteins to their destinations. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct them to the correct place. The Golgi apparatus also produces glycoproteins and glycolipids.
Lysosome: The lysosome functions to degrade material brought in from outside the cell or old organelles. It contains many acid hydrolases, proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes, which occurs when a vesicle buds off from the ER and engulfs the material, then attaches to and fuses with the lysosome to allow the material to be degraded.
Ribosomes: Function to translate RNA to protein; they serve as the site of protein synthesis.
Cytoskeleton: Cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cells and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins.
Cell membrane: The cell membrane can be described as a phospholipid bilayer with embedded proteins. Because the inside of the bilayer is hydrophobic, molecules that participate in reactions within the cell must cross this membrane layer to enter the cell, which they do via osmotic pressure, diffusion down concentration gradients, and membrane channels.
Centrioles: Function to produce spindle fibers which are used to separate chromosomes during cell division.
Eukaryotic cells may also be composed of the following molecular components:
Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins.
Cilia: They help to propel substances and can also be used for sensory purposes.
Cell metabolism
Cell metabolism is necessary for the production of energy for the cell, and therefore for its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation by a multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain, ultimately forming a proton gradient across the inner mitochondrial membrane. This gradient then drives the production of ATP during oxidative phosphorylation. Metabolism in plant cells includes photosynthesis, which is in essence the reverse of respiration, as it ultimately produces molecules of glucose.
Cell signaling
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its surface. Forms of communication can be through:
Ion channels: Can be of different types such as voltage or ligand gated ion channels. They allow for the outflow and inflow of molecules and ions.
G-protein coupled receptor (GPCR): Widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain; once the ligand binds, this signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenylyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, IP3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example of amplification of a signal is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence, which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity.
Receptor tyrosine kinases: Bind growth factors, prompting the tyrosines on the intracellular portion of the protein to cross-phosphorylate. The phosphorylated tyrosines become landing pads for proteins containing an SH2 domain, allowing for the activation of Ras and the involvement of the MAP kinase pathway.
Growth and development
Eukaryotic cell cycle
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development which involves cell growth, DNA replication, cell division, regeneration, and cell death.
The cell cycle is divided into four distinct phases: G1, S, G2, and M. The growth phases (which, together with S, constitute interphase) make up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling, such as induction, can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S phases (DNA replication, damage, and repair) are considered to be the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion. Mitosis is composed of many stages: prophase, metaphase, anaphase, and telophase, followed by cytokinesis. The ultimate result of mitosis is the formation of two identical daughter cells.
The cell cycle is regulated in cell cycle checkpoints, by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinase, and p53. When the cell has completed its growth process and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it can cause to the organism's survival.
Cell mortality, cell lineage immortality
The ancestry of each present day cell presumably traces back, in an unbroken lineage for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination.
Cell cycle phases
The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). The cell either restarts the cycle from G1 or leaves the cycle through G0 after completing the cycle. The cell can progress from G0 through terminal differentiation. Finally, the interphase refers to the phases of the cell cycle that occur between one mitosis and the next, and includes G1, S, and G2. Thus, the phases are:
G1 phase: the cell grows in size and synthesizes the proteins and organelles needed for DNA replication.
S phase: the cell replicates each of its chromosomes (46 in humans).
G2 phase: in preparation for cell division, new organelles and proteins form.
M phase: mitosis and cytokinesis occur, resulting in two identical daughter cells.
G0 phase: the two cells enter a resting stage where they do their job without actively preparing to divide.
Pathology
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer, and precancerous cervical lesions that may lead to cervical cancer.
Cell cycle checkpoints and DNA damage repair system
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the previous one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints are surveillance mechanisms that ensure accurate progression through the cycle and accurate division. Cdks, their associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to the next. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and dephosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement and regulates DNA repair mechanisms, cell cycle alterations, and apoptosis. Among the sensors that detect DNA damage are the kinases ATM and ATR, which induce the DNA damage checkpoints.
The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. Major events happen during a cell cycle, including cell development and the replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, at which the cell's parameters are examined; only when the desired characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, cell growth continues and protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. DNA replication is restricted to a separate synthesis phase in eukaryotes, known as the S phase. During mitosis, also known as the M phase, the segregation of the chromosomes occurs.

DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, however, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. Mutations can occur when erroneous nucleotides are incorporated during DNA replication. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by directly reversing the damage, which may be a more effective way of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases altered by the insertion of methyl or ethyl groups at the O6 position of the purine ring.
Mitochondrial membrane dynamics
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to efficiently produce ATP, which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better understanding of mitochondria's significance in cell biology through the discovery of mitochondrial cell signaling pathways, which make mitochondria crucial platforms for regulating cell functions such as apoptosis. Their physiological adaptability is strongly linked to the ongoing reconfiguration of the mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including membrane fusion and fragmentation (fission) and ultrastructural membrane remodeling. Mitochondrial dynamics therefore regulate, and frequently choreograph, not only metabolism but also complex cell signaling processes such as stem cell pluripotency, proliferation, maturation, aging, and death. In turn, post-translational modifications of the mitochondrial apparatus and the development of membrane contact sites between mitochondria and other structures can both link signals from diverse routes that substantially affect mitochondrial membrane dynamics.

Mitochondria are wrapped in two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, paralleling their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, deeply folded invaginations that enlarge the membrane surface area and house the mitochondrial respiratory apparatus. The outer mitochondrial membrane, by contrast, is smooth and permeable. It therefore acts as a platform on which cell signaling pathways converge, are deciphered, and are transmitted into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane.

Mitochondria play a wide range of roles in cell biology, and this is reflected in their morphological diversity. Since the beginning of mitochondrial research, it has been well documented that mitochondria can take a variety of forms, with both their overall and ultrastructural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger networks; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to this adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.
Autophagy
Autophagy is a self-degradative mechanism that regulates energy sources during growth and in reaction to dietary stress. Autophagy also keeps the cell clean, clearing aggregated proteins, removing damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular pathogens. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of innate and adaptive immune responses to viral and bacterial infection. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting.

Macroautophagy, microautophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macroautophagy is triggered, an isolation membrane incorporates a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, in which lysosomal enzymes degrade the components. In microautophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) contributes to protein quality control by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through their degradation.

Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiological and stressful situations, this cellular process is vital for upholding the correct cellular balance. Because of its involvement in maintaining cell integrity, autophagy instability leads to a variety of disease symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration. Modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming dietary polyphenols, so natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option.

In macroautophagy, the creation of the double membrane (the phagophore), known as nucleation, is the first step. The phagophore engulfs dysregulated polypeptides or defective organelles, with membrane contributed by the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's enlargement ends with the completion of the autophagosome, which then fuses with lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.
Notable cell biologists
Jean Baptiste Carnoy
Peter Agre
Günter Blobel
Robert Brown
Geoffrey M. Cooper
Christian de Duve
Henri Dutrochet
Robert Hooke
H. Robert Horvitz
Marc Kirschner
Anton van Leeuwenhoek
Ira Mellman
Marta Miączyńska
Peter D. Mitchell
Rudolf Virchow
Paul Nurse
George Emil Palade
Keith R. Porter
Ray Rappaport
Michael Swann
Roger Tsien
Edmund Beecher Wilson
Kenneth R. Miller
Matthias Jakob Schleiden
Theodor Schwann
Yoshinori Ohsumi
Jan Evangelista Purkyně
See also
The American Society for Cell Biology
Cell biophysics
Cell disruption
Cell physiology
Cellular adaptation
Cellular microbiology
Institute of Molecular and Cell Biology (disambiguation)
Meiomitosis
Organoid
Outline of cell biology
Notes
References
External links
Aging Cell
"Francis Harry Compton Crick (1916–2004)" by A. Andrei at the Embryo Project Encyclopedia
"Biology Resource By Professor Lin." | 0.791369 | 0.99739 | 0.789304 |
Biological dispersal
Biological dispersal refers to both the movement of individuals (animals, plants, fungi, bacteria, etc.) from their birth site to their breeding site ('natal dispersal') and the movement from one breeding site to another ('breeding dispersal').
Dispersal is also used to describe the movement of propagules such as seeds and spores.
Technically, dispersal is defined as any movement that has the potential to lead to gene flow.
The act of dispersal involves three phases: departure, transfer, and settlement. There are different fitness costs and benefits associated with each of these phases.
Through simply moving from one habitat patch to another, the dispersal of an individual has consequences not only for individual fitness, but also for population dynamics, population genetics, and species distribution. Understanding dispersal and its consequences, both for evolutionary strategies at the species level and for processes at the ecosystem level, requires understanding the type of dispersal, the dispersal range of a given species, and the dispersal mechanisms involved. Biological dispersal can be correlated with population density, and the range of distances that individuals can move determines how far a species' distribution can expand.
Biological dispersal may be contrasted with geodispersal, which is the mixing of previously isolated populations (or whole biotas) following the erosion of geographic barriers to dispersal or gene flow.
Dispersal can be distinguished from animal migration (typically round-trip seasonal movement), although within population genetics, the terms 'migration' and 'dispersal' are often used interchangeably.
Furthermore, biological dispersal is impacted and limited by different environmental and individual conditions. This leads to a wide range of consequences on the organisms present in the environment and their ability to adapt their dispersal methods to that environment.
Types of dispersal
Some organisms are motile throughout their lives, but others are adapted to move or be moved at precise, limited phases of their life cycles. This is commonly called the dispersive phase of the life cycle. The strategies of organisms' entire life cycles often are predicated on the nature and circumstances of their dispersive phases.
In general, there are two basic types:
Passive Dispersal (Density-Independent Dispersal)
In passive dispersal, organisms cannot move on their own but use other means to reach new habitats and reproduce successfully. Organisms have evolved adaptations for dispersal that take advantage of various forms of kinetic energy occurring naturally in the environment, such as water, wind, or animals that are themselves capable of active dispersal. Some organisms are capable of movement only in their larval phase. Passive dispersal is common among some invertebrates, fishes, insects, and sessile organisms such as plants that depend on animal vectors, wind, gravity, or currents for dispersal.
Invertebrates such as sea sponges and corals release gametes into the water; in this way they are able to reproduce successfully, because the sperm swim while the eggs are moved by currents. Plants act in similar ways, using water currents, winds, or moving animals to transport their gametes. Seeds, spores, and fruits can have adaptations that aid their movement.
Active Dispersal (Density-Dependent Dispersal)
In active dispersal, an organism moves location by its own inherent capabilities. Age is not a restriction, as location change is common in both young and adult animals. The extent of dispersal depends on multiple factors, such as local population density, resource competition, habitat quality, and habitat size. Because of this, many consider active dispersal to be density-dependent, since the density of the community plays a major role in the movement of animals. However, the effect differs among age groups, resulting in diverse levels of dispersal.
For active dispersal, animals capable of free movement over large distances, whether by flying, swimming, or walking, are at an advantage. Nonetheless, there are restrictions imposed by geographical location and habitat. Walking animals are at the biggest disadvantage here, as they are prone to being stopped by potential barriers. Although some terrestrial animals traveling by foot can cover great distances, walking uses more energy than flying or swimming, especially when passing through adverse conditions.
Due to population density, dispersal may relieve pressure for resources in an ecosystem, and competition for these resources may be a selection factor for dispersal mechanisms. Dispersal of organisms is a critical process for understanding both geographic isolation in evolution through gene flow and the broad patterns of current geographic distributions (biogeography).
A distinction is often made between natal dispersal where an individual (often a juvenile) moves away from the place it was born, and breeding dispersal where an individual (often an adult) moves away from one breeding location to breed elsewhere.
Costs and benefits
In the broadest sense, dispersal occurs when the fitness benefits of moving outweigh the costs.
There are a number of benefits to dispersal such as locating new resources, escaping unfavorable conditions, avoiding competing with siblings, and avoiding breeding with closely related individuals which could lead to inbreeding depression.
There are also a number of costs associated with dispersal, which can be thought of in terms of four main currencies: energy, risk, time, and opportunity.
Energetic costs include the extra energy required to move as well as energetic investment in movement machinery (e.g. wings). Risks include increased injury and mortality during dispersal and the possibility of settling in an unfavorable environment.
Time spent dispersing is time that often cannot be spent on other activities such as growth and reproduction.
Finally, dispersal can also lead to outbreeding depression if an individual is better adapted to its natal environment than the one it ends up in. In social animals (such as many birds and mammals) a dispersing individual must find and join a new group, which can lead to loss of social rank.
Dispersal range
"Dispersal range" refers to the distance a species can move from an existing population or the parent organism. An ecosystem depends critically on the ability of individuals and populations to disperse from one habitat patch to another. Therefore, biological dispersal is critical to the stability of ecosystems.
Urban Environments and Dispersal Range
Urban areas can have their own unique effects on the dispersal range and dispersal abilities of different organisms. For plant species, urban environments largely provide novel dispersal vectors. While animals and physical factors (e.g., wind and water) have played a role in dispersal for centuries, motor vehicles have recently been recognized as major dispersal vectors. Tunnels that connect rural and urban environments have been shown to carry a large and diverse set of seeds from urban to rural environments, a possible source of invasive species along the urban-rural gradient. Another example of the effects of urbanization can be seen along rivers. Urbanization has led to the introduction of invasive species through direct planting or wind dispersal, and the rivers flowing past these invasive plants have become vital dispersal vectors connecting urban centers to rural and natural environments. Seeds from the invasive species have been shown to be transported by rivers to natural areas downstream, extending the plants' already established dispersal distance.
In contrast, urban environments can also limit certain dispersal strategies. Human influence through urbanization greatly alters the layout of landscapes, which limits the dispersal strategies of many organisms. These changes have been exhibited most clearly in pollinator-flowering plant relationships. As the pollinator's optimal range of survival is limited, the supply of pollination sites is limited too. This leads to less gene flow between distantly separated populations, in turn decreasing the genetic diversity of each area. Likewise, urbanization has been shown to impact the gene flow of distinctly different species (e.g., mice and bats) in similar ways. While these two species may have different ecological niches and living strategies, urbanization limits the dispersal strategies of both, leading to genetic isolation of both populations and limited gene flow. While urbanization had a greater effect on mouse dispersal, it also led to a slight increase in inbreeding among bat populations.
Environmental constraints
Few species are ever evenly or randomly distributed within or across landscapes. In general, species vary significantly across the landscape in association with environmental features that influence their reproductive success and population persistence. Spatial patterns in environmental features (e.g. resources) permit individuals to escape unfavorable conditions and seek out new locations. This allows the organism to "test" new environments for their suitability, provided they are within the animal's geographic range. In addition, the ability of a species to disperse across a gradually changing environment could enable a population to survive extreme conditions (e.g., climate change).
As the climate changes, prey and predators have to adapt to survive. This poses a problem for many animals, for example the southern rockhopper penguins. These penguins are able to live and thrive in a variety of climates due to their phenotypic plasticity. However, they are predicted to respond to current climate change by dispersal, not adaptation, because of their long life spans and slow microevolution. Penguins in the subantarctic have very different foraging behavior from those in subtropical waters; such behaviors took years to shape, making it very hard to keep up with a fast-changing climate.
Dispersal barriers
A dispersal barrier may result in a dispersal range of a species much smaller than the species distribution. An artificial example is habitat fragmentation due to human land use. By contrast, natural barriers to dispersal that limit species distribution include mountain ranges and rivers. An example is the separation of the ranges of the two species of chimpanzee by the Congo River.
On the other hand, human activities may also expand the dispersal range of a species by providing new dispersal methods (e.g., ballast water from ships). Many such dispersed species become invasive, like rats or stinkbugs, but some, like honeybees and earthworms, also have a slightly positive effect on human settlers.
Dispersal mechanisms
Most animals are capable of locomotion and the basic mechanism of dispersal is movement from one place to another. Locomotion allows the organism to "test" new environments for their suitability, provided they are within the animal's range. Movements are usually guided by inherited behaviors.
The formation of barriers to dispersal or gene flow between adjacent areas can isolate populations on either side of the emerging divide. The geographic separation and subsequent genetic isolation of portions of an ancestral population can result in allopatric speciation.
Plant dispersal mechanisms
Seed dispersal is the movement or transport of seeds away from the parent plant. Plants have very limited mobility and consequently rely upon a variety of dispersal vectors to transport their propagules, including both abiotic and biotic vectors. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the specific dispersal mechanism, and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals.
Animal dispersal mechanisms
Non-motile animals
There are numerous animal forms that are non-motile, such as sponges, bryozoans, tunicates, sea anemones, corals, and oysters. What they have in common is that they all live in water, whether marine or fresh. It may seem curious that plants have been so successful at stationary life on land, while animals have not, but the answer lies in the food supply. Plants produce their own food from sunlight and carbon dioxide, both generally more abundant on land than in water. Animals fixed in place must rely on the surrounding medium to bring food at least close enough to grab, and this occurs in the three-dimensional water environment, but with much less abundance in the atmosphere.
All of the marine and aquatic invertebrates whose lives are spent fixed to the bottom (more or less; anemones are capable of getting up and moving to a new location if conditions warrant) produce dispersal units. These may be specialized "buds", or motile sexual reproduction products, or even a sort of alternation of generations, as in certain cnidaria.
Corals provide a good example of how sedentary species achieve dispersion. Broadcast spawning corals reproduce by releasing sperm and eggs directly into the water. These release events are coordinated by the lunar phase in certain warm months, such that all corals of one or many species on a given reef will be released on the same single or several consecutive nights. The released eggs are fertilized, and the resulting zygote develops quickly into a multicellular planula. This motile stage then attempts to find a suitable substratum for settlement. Most are unsuccessful and die or are fed upon by zooplankton and bottom-dwelling predators such as anemones and other corals. However, untold millions are produced, and a few do succeed in locating spots of bare limestone, where they settle and transform by growth into a polyp. All things being favorable, the single polyp grows into a coral head by budding off new polyps to form a colony.
Motile animals
The majority of animals are motile. Motile animals can disperse themselves by their spontaneous and independent locomotive powers. For example, dispersal distances across bird species depend on their flight capabilities. On the other hand, small animals utilize the existing kinetic energies in the environment, resulting in passive movement. Dispersal by water currents is especially associated with the physically small inhabitants of marine waters known as zooplankton. The term plankton comes from the Greek, πλαγκτον, meaning "wanderer" or "drifter".
Dispersal by dormant stages
Many animal species, especially freshwater invertebrates, are able to disperse by wind or by transfer with an aid of larger animals (birds, mammals or fishes) as dormant eggs, dormant embryos or, in some cases, dormant adult stages. Tardigrades, some rotifers and some copepods are able to withstand desiccation as adult dormant stages. Many other taxa (Cladocera, Bryozoa, Hydra, Copepoda and so on) can disperse as dormant eggs or embryos. Freshwater sponges usually have special dormant propagules called gemmulae for such a dispersal. Many kinds of dispersal dormant stages are able to withstand not only desiccation and low and high temperature, but also action of digestive enzymes during their transfer through digestive tracts of birds and other animals, high concentration of salts, and many kinds of toxicants. Such dormant-resistant stages made possible the long-distance dispersal from one water body to another and broad distribution ranges of many freshwater animals.
Quantifying dispersal
Dispersal is most commonly quantified either in terms of rate or distance.
Dispersal rate (also called migration rate in the population genetics literature) or dispersal probability describes the probability that any individual leaves an area or, equivalently, the expected proportion of individuals that leave an area.
The dispersal distance is usually described by a dispersal kernel which gives the probability distribution of the distance traveled by any individual. A number of different functions are used for dispersal kernels in theoretical models of dispersal including the negative exponential distribution, extended negative exponential distribution, normal distribution, exponential power distribution, inverse power distribution, and the two-sided power distribution. The inverse power distribution and distributions with 'fat tails' representing long-distance dispersal events (called leptokurtic distributions) are thought to best match empirical dispersal data.
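To make the kernel idea concrete, the following minimal sketch (in Python; all parameter values and function names are illustrative assumptions, not taken from any particular study) draws dispersal distances from a negative exponential kernel and from a fat-tailed inverse power kernel:

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Negative exponential kernel: p(d) proportional to exp(-d/alpha); mean = alpha.
alpha = 100.0  # illustrative mean dispersal distance in metres
d_exp = rng.exponential(scale=alpha, size=n)

# Inverse power (Pareto) kernel: p(d) proportional to d**(-beta) for d >= d0.
d0, beta = 10.0, 2.5  # illustrative minimum distance and exponent
u = rng.uniform(size=n)
d_pow = d0 * (1.0 - u) ** (-1.0 / (beta - 1.0))  # inverse-transform sampling

for name, d in [("negative exponential", d_exp), ("inverse power", d_pow)]:
    print(f"{name}: median = {np.median(d):.0f} m, "
          f"99.9th percentile = {np.percentile(d, 99.9):.0f} m")

Relative to its typical (median) distance, the fat-tailed kernel produces far more extreme long-distance events, which is why leptokurtic kernels tend to better match empirical dispersal data.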
Consequences of dispersal
Dispersal not only has costs and benefits to the dispersing individual (as mentioned above); it also has consequences at the level of the population and species on both ecological and evolutionary timescales. Organisms can be dispersed by multiple methods. Transport by animals is especially effective, as it allows travel over long distances. Many plants depend on it to reach new locations, preferably ones with conditions ideal for reproduction and germination. Dispersal thus has a major influence on the population sizes and spread of plant species.
Many populations have patchy spatial distributions where separate yet interacting sub-populations occupy discrete habitat patches (see metapopulations). Dispersing individuals move between different sub-populations which increases the overall connectivity of the metapopulation and can lower the risk of stochastic extinction. If a sub-population goes extinct by chance, it is more likely to be recolonized if the dispersal rate is high. Increased connectivity can also decrease the degree of local adaptation.
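The link between dispersal rate and metapopulation persistence can be illustrated with Levins' classic metapopulation model (our choice of illustration; the text above does not commit to a specific model), in which the occupied fraction p of habitat patches evolves as dp/dt = c·p(1−p) − e·p for colonization rate c and extinction rate e. A minimal sketch with illustrative parameter values:

def simulate_levins(c, e, p0=0.1, dt=0.01, t_max=200.0):
    """Integrate dp/dt = c*p*(1 - p) - e*p with a simple forward Euler step."""
    p = p0
    for _ in range(int(t_max / dt)):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

# Equilibrium occupancy is 1 - e/c when c > e, and 0 otherwise.
print(simulate_levins(c=0.5, e=0.2))  # high dispersal: persists near p = 0.6
print(simulate_levins(c=0.1, e=0.2))  # low dispersal: metapopulation goes extinct

When the colonization rate driven by dispersal falls below the extinction rate, the metapopulation as a whole collapses even though all patches in the model are identical.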
Human interference with the environment has been seen to affect dispersal. Some of these occurrences have been accidents, as in the case of zebra mussels, which are indigenous to southeast Russia. A ship accidentally released them into the North American Great Lakes, and they became a major nuisance in the area as they began to clog water treatment and power plants. Another case was that of Chinese bighead and silver carp, which were brought in to control algae in many catfish ponds across the U.S. Unfortunately, some escaped into the neighboring Mississippi, Missouri, Illinois, and Ohio rivers, eventually harming the surrounding ecosystems. However, human-created habitats such as urban environments have allowed certain migrated species to become urbanophiles or synanthropes.
Dispersal has caused changes in many species at the genetic level. A positive correlation has been observed between dispersal and the differentiation and diversification of certain species of spiders in the Canary Islands. These spiders reside on archipelagos and islands, and dispersal was identified as a key factor in the rate of both processes.
Human-Mediated Dispersal
Human impact has had a major influence on the movement of animals through time. An environmental response occurs as a result, since dispersal patterns are important for species to survive major changes. There are two forms of human-mediated dispersal:
Human-Vectored Dispersal (HVD)
In human-vectored dispersal, humans directly move the organism. This can occur deliberately, for instance for use of the animal in agriculture or hunting, but it can also occur accidentally, if the organism attaches itself to a person or vehicle. For this process, the organism first has to come into contact with a human before movement can start. Human-vectored dispersal has become more common as the human population has grown and movement around the world has become more prevalent. Dispersal by a human vector can carry an organism many times farther than movement by wild or other environmental means.
Human-Altered Dispersal (HAD)
Human-altered dispersal refers to the effects of human interference with landscapes and animals. Many of these interferences have had negative consequences for the environment. For example, many areas have suffered habitat loss, which in turn can negatively affect dispersal: researchers have found that animals move farther in an attempt to find isolated places. This can especially be seen around the construction of roads and other infrastructure in remote areas.
Long-distance dispersal is observed when seeds are carried by human vectors. A study conducted in England tested the effects of human-mediated dispersal of seeds over long distances in two species of Brassica, comparing movement by wind with movement by attachment to footwear. It concluded that shoes were able to transport seeds farther than would be achievable through wind alone. Some seeds stayed on the shoes for long periods, about 8 hours of walking, but eventually came off. The seeds were thus able to travel far and settle in new areas that they previously did not inhabit, although it is also important that the seeds land in places where they are able to stick and grow. Shoe size did not seem to affect prevalence.
Dispersal observation methods
Biological dispersal can be observed using different methods. To study the effects of dispersal, observers use the methods of landscape genetics, which allow scientists to relate population variation to climate as well as to the size and shape of the landscape. An example of the use of landscape genetics to study seed dispersal involves studying the effects of traffic using motorway tunnels between inner cities and suburban areas.
Genome-wide SNP datasets and species distribution modelling are examples of computational methods used to examine different dispersal modes. A genome-wide SNP dataset can be used to determine the genomic and demographic history within the range of collection or observation [Reference needed]. Species distribution models are used when scientists wish to determine which region is best suited for the species under observation [Reference needed]. Such methods are used to understand the environmental criteria under which migration and settlement occur, as in cases of biological invasion.
Human-aided dispersal, an example of an anthropogenic effect, can contribute to biological dispersal ranges and variations.
Informed dispersal is a way to observe the cues behind biological dispersal, suggesting the reasoning for where organisms settle. This concept implies that movement between locations also involves information transfer. Methods such as GPS location are used to monitor the social cues and mobility of species in habitat selection; GPS radio-collars can be used to collect data on social animals such as meerkats. Data such as detailed trip records and point of interest (POI) data, used to predict the movement of humans from rural to urban areas, are examples of informed dispersal [Reference needed].
Direct tracking or visual tracking allows scientists to monitor the movement of seed dispersal by color coding. Scientists and observers can track the migration of individuals through the landscape. The pattern of transportation can then be visualized to reflect the range in which the organism expands.
See also
Aeroplankton
Competition (biology)
Disturbance (ecology)
Dormancy ('dispersal in time')
Gene flow
Habitat fragmentation
Island hopping
Landscape ecology
Metapopulation
Oceanic dispersal
Phoresy
Population modeling
Population distribution
Population ecology
Species distribution
References
Further reading
(Dispersal of animals)
(Animals and plants)
External links
Fruit and seed dispersal images at bioimages.vanderbilt.edu
Reproduction
Population ecology
Biological evolution
Geomorphology
Geomorphology (from Ancient Greek γῆ (gê) 'earth', μορφή (morphḗ) 'form', and λόγος (lógos) 'study') is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics, and to predict changes through a combination of field observations, physical experiments and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field.
Overview
Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence, and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere.
The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes.
In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers, how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans. Soils geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets.
Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital to quantitatively describe the form of the Earth's surface, and include differential GPS, remotely sensed digital terrain models and laser scanning, to quantify, study, and to generate illustrations and maps.
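As a small illustration of such quantitative terrain description (a sketch in Python, assuming a regular gridded digital terrain model; the helper function is ours, not from any named package), slope and aspect can be derived from elevation gradients:

import numpy as np

def slope_aspect(dem, cell_size):
    """Compute slope and aspect (degrees) from a 2-D elevation grid in metres."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # gradients along rows and columns
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # One common convention: aspect as the downslope azimuth, clockwise from north.
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# Synthetic DEM: a smooth 50 m hill sampled on a 10 m grid.
y, x = np.mgrid[0:100, 0:100]
dem = 50.0 * np.exp(-((x - 50.0) ** 2 + (y - 50.0) ** 2) / 800.0)
slope, aspect = slope_aspect(dem, cell_size=10.0)
print(f"maximum slope: {slope.max():.1f} degrees")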
Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection.
Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of effects of wind, fluvial, glacial, mass wasting, meteor impact, tectonics and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of surfaces of other planets.
History
Other than some notable exceptions in antiquity, geomorphology is a relatively young science, growing along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development.
Ancient geomorphology
The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, Greek philosopher Aristotle speculated that due to sediment transport into the sea, eventually those seas would fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity published in Arabic at Basra during the 10th century also discussed the cyclical changing positions of land and sea with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering.
Another early theory of geomorphology was devised by Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean. Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the pre-historic location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosions of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he promoted the theory of gradual climate change over centuries of time once ancient petrified bamboos were found to be preserved underground in the dry, northern climate zone of Yanzhou, which is now modern day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. Scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time as would hills and valleys. Daoist alchemist Ge Hong (284–364) created a fictional dialogue where the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees.
Early modern geomorphology
The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr, in his The Scientific Study of Scenery, described his book as 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'.
An early popular geomorphic model was the geographical cycle or cycle of erosion model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature.
In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis' dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection.
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions.
During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography later was considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology" in order to suggest an analytical approach to landscapes rather than a descriptive one.
Climatic geomorphology
During the age of New Imperialism in the late 19th century, European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time, these observations were systematized in a search for regional patterns. Climate thus emerged as the prime factor for explaining landform distribution at a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology, which by the mid-20th century was considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe, while in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion.
Climatic geomorphology was criticized in a 1969 review article by process geomorphologist D.R. Stoddart. The criticism proved "devastating", sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, for being linked to Davisian geomorphology, and for allegedly neglecting the fact that the physical laws governing processes are the same across the globe. In addition, some conceptions of climatic geomorphology, like the idea that chemical weathering is more rapid in tropical climates than in cold climates, proved not to be straightforwardly true.
Quantitative and process geomorphology
Geomorphology began to be put on a solid quantitative footing in the middle of the 20th century. Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America, and received only a few citations prior to 2000 (they are examples of "sleeping beauties"), when a marked increase in quantitative geomorphology research occurred.
Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition.
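To give a flavour of what full landscape evolution modeling can look like in its simplest form, the sketch below (Python; all parameter values are illustrative assumptions, not any particular published model) integrates the linear hillslope diffusion equation, dz/dt = D d2z/dx2, a standard idealization of soil creep:

import numpy as np

D = 0.01            # hillslope diffusivity in m^2/yr (illustrative)
dx, dt = 1.0, 10.0  # grid spacing (m) and time step (yr); needs dt < dx**2 / (2*D)
z = np.zeros(101)   # elevation profile along a 100 m transect
z[40:61] = 5.0      # initial scarp-like bump, 5 m high

for _ in range(5000):  # 50,000 years of relaxation, fixed elevations at the ends
    z[1:-1] += dt * D * (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2

print(f"crest height after relaxation: {z.max():.2f} m")

Real landscape evolution models couple such hillslope terms with fluvial incision, tectonic uplift, and climate forcing on two-dimensional grids.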
In Sweden Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography".
Contemporary geomorphology
Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally, recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include:
1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form. Instead, dynamic changes of the landscape are now seen as an essential part of their nature.
2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results.
According to Karna Lidmar-Bergström, regional geography has since the 1990s no longer been accepted by mainstream scholarship as a basis for geomorphological studies.
Although its importance has diminished, climatic geomorphology continues to exist as a field of study producing relevant research. More recently, concerns over global warming have led to a renewed interest in the field.
Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven. The inherent difficulties of the model have instead led geomorphological research to advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value respectively.
Processes
Geomorphically relevant processes generally fall into
(1) the production of regolith by weathering and erosion,
(2) the transport of that material, and
(3) its eventual deposition. Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor or extraterrestrial impact.
Aeolian processes
Aeolian processes pertain to the activity of the winds and more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts.
Biological processes
The interaction of living organisms with landforms, or biogeomorphologic processes, can be of many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence very many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars.
Fluvial processes
Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements.
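One widely used idealization of this dependence of erosion on discharge and slope is the stream-power incision law, E = K·A^m·S^n, with drainage area A standing in for discharge (this specific law is our illustration; the paragraph above does not name it, and the constants below are illustrative):

def stream_power_erosion(area_m2, slope, K=1e-6, m=0.5, n=1.0):
    """Bedrock erosion rate (m/yr) from drainage area (m^2) and channel slope.
    K, m and n are empirical constants that must be calibrated in practice."""
    return K * area_m2 ** m * slope ** n

# With n = 1, a reach five times steeper erodes five times faster (same area).
print(stream_power_erosion(area_m2=1e8, slope=0.01))  # 1e-4 m/yr
print(stream_power_erosion(area_m2=1e8, slope=0.05))  # 5e-4 m/yr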
As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic happens to be the most common, occurring when the underlying stratum is stable (without faulting). Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces.
Glacial processes
Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin.
The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that have been relatively recently glaciated but are no longer may still show elevated landscape change rates compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost.
Hillslope processes
Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus.
Ongoing hillslope processes can change the topology of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas.
On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes.
Igneous processes
Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenate landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also add substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding and then solidifying at depth can cause either uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces.
Tectonic processes
Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric, which more or less controls what kind of local morphology tectonics can shape. Earthquakes can, within minutes, submerge large areas of land, forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts. Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production.
Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long term (> million year), large scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy as hotter, less dense, mantle rocks displace cooler, denser, mantle rocks at depth in the Earth.
Marine processes
Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology.
Overlap with other fields
There is considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near-surface agents, and is typically studied by soil scientists and environmental chemists, but it is an essential component of geomorphology because it provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important agents at high latitudes and in setting the conditions in the headwaters of mountain-born streams; glaciology is therefore important in geomorphology.
See also
Bioerosion
Biogeology
Biogeomorphology
Biorhexistasy
British Society for Geomorphology
Coastal biogeomorphology
Coastal erosion
Concepts and Techniques in Modern Geography
Drainage system (geomorphology)
Erosion prediction
Geologic modelling
Geomorphometry
Geotechnics
Hack's law
Hydrologic modeling, behavioral modeling in hydrology
List of landforms
Orogeny
Physiographic regions of the world
Sediment transport
Soil morphology
Soils retrogression and degradation
Stream capture
Thermochronology
References
Further reading
Ialenti, Vincent. "Envisioning Landscapes of Our Very Distant Future". NPR Cosmos & Culture, September 2014.
Bierman, P.R.; Montgomery, D.R. Key Concepts in Geomorphology. New York: W. H. Freeman, 2013.
Ritter, D.F.; Kochel, R.C.; Miller, J.R. Process Geomorphology. London: Waveland Press, 2011.
Hargitai, H.; Page, D.; Canon-Tapia, E.; Rodrigue, C.M. Classification and Characterization of Planetary Landforms. In: Hargitai, H.; Kereszturi, Á., eds. Encyclopedia of Planetary Landforms. Cham: Springer, 2015.
External links
The Geographical Cycle, or the Cycle of Erosion (1899)
Geomorphology from Space (NASA)
British Society for Geomorphology
Earth sciences
Geology
Geological processes
Gravity
Physical geography
Planetary science
Seismology
Topography
Cryptobiosis
Cryptobiosis or anabiosis is an ametabolic state entered by extremophilic organisms in response to adverse environmental conditions such as desiccation, freezing, and oxygen deficiency. In the cryptobiotic state, all measurable metabolic processes stop, preventing reproduction, development, and repair. When environmental conditions return to being hospitable, the organism will return to its metabolic state of life as it was prior to cryptobiosis.
Forms
Anhydrobiosis
Anhydrobiosis is the most studied form of cryptobiosis and occurs in situations of extreme desiccation. The term anhydrobiosis derives from the Greek for "life without water" and is most commonly used for the desiccation tolerance observed in certain invertebrate animals such as bdelloid rotifers, tardigrades, brine shrimp, nematodes, and at least one insect, a species of chironomid (Polypedilum vanderplanki). However, other life forms exhibit desiccation tolerance. These include the resurrection plant Craterostigma plantagineum, the majority of plant seeds, and many microorganisms such as bakers' yeast. Studies have shown that some anhydrobiotic organisms can survive for decades, even centuries, in the dry state.
Invertebrates undergoing anhydrobiosis often contract into a smaller shape and some proceed to form a sugar called trehalose. Desiccation tolerance in plants is associated with the production of another sugar, sucrose. These sugars are thought to protect the organism from desiccation damage. In some creatures, such as bdelloid rotifers, no trehalose has been found, which has led scientists to propose other mechanisms of anhydrobiosis, possibly involving intrinsically disordered proteins.
In 2011, Caenorhabditis elegans, a nematode that is also one of the best-studied model organisms, was shown to undergo anhydrobiosis in the dauer larva stage. Further research taking advantage of genetic and biochemical tools available for this organism revealed that in addition to trehalose biosynthesis, a set of other functional pathways is involved in anhydrobiosis at the molecular level. These are mainly defense mechanisms against reactive oxygen species and xenobiotics, expression of heat shock proteins and intrinsically disordered proteins as well as biosynthesis of polyunsaturated fatty acids and polyamines. Some of them are conserved among anhydrobiotic plants and animals, suggesting that anhydrobiotic ability may depend on a set of common mechanisms. Understanding these mechanisms in detail might enable modification of non-anhydrobiotic cells, tissues, organs or even organisms so that they can be preserved in a dried state of suspended animation over long time periods.
As of 2004, anhydrobiosis was being applied to vaccines. In vaccines, the process can produce a dry vaccine that reactivates once it is injected into the body. In theory, dry-vaccine technology could be used on any vaccine, including live vaccines such as the one for measles. It could also potentially be adapted to allow a vaccine's slow release, eliminating the need for boosters. This approach would eliminate the need to refrigerate vaccines, making dry vaccines more widely available throughout the developing world, where refrigeration, electricity, and proper storage are less accessible.
Based on similar principles, lyopreservation has been developed as a biomimetic strategy, modeled on anhydrobiosis, for preserving biological samples at ambient temperatures. It has been explored as an alternative to cryopreservation, with the advantage that it requires neither refrigeration nor cryogenic temperatures.
Anoxybiosis
In situations lacking oxygen (anoxia), many cryptobionts (such as M. tardigradum) take in water and become turgid and immobile, but can survive for prolonged periods of time. Some ectothermic vertebrates and some invertebrates, such as brine shrimp, copepods, nematodes, and sponge gemmules, are capable of surviving in a seemingly inactive state during anoxic conditions for months to decades.
Studies of the metabolic activity of these idling organisms during anoxia have been mostly inconclusive, because it is difficult to measure very small degrees of metabolic activity reliably enough to prove a cryptobiotic state rather than ordinary metabolic rate depression (MRD). Many experts are skeptical of the biological feasibility of anoxybiosis, since it would require the organism to prevent damage to its cellular structures without expending any free energy of its own, despite being surrounded by ample water and thermal energy. However, there is evidence that the stress-induced protein p26 may act as a protein chaperone that requires no energy in cystic Artemia franciscana (sea monkey) embryos, and an extremely specialized and slow guanine polynucleotide pathway most likely continues to provide metabolic free energy to the A. franciscana embryos during anoxic conditions. It appears that A. franciscana approaches but does not reach true anoxybiosis.
Chemobiosis
Chemobiosis is the cryptobiotic response to high levels of environmental toxins. It has been observed in tardigrades.
Cryobiosis
Cryobiosis is a form of cryptobiosis that takes place in reaction to decreased temperature. Cryobiosis begins when the water surrounding the organism's cells has been frozen. Stopping molecule mobility allows the organism to endure the freezing temperatures until more hospitable conditions return. Organisms capable of enduring these conditions typically feature molecules that facilitate freezing of water in preferential locations while also prohibiting the growth of large ice crystals that could otherwise damage cells. One such organism is the lobster.
Osmobiosis
Osmobiosis is the least studied of all types of cryptobiosis. Osmobiosis occurs in response to increased solute concentration in the solution the organism lives in. Little is known for certain, other than that osmobiosis appears to involve a cessation of metabolism.
Examples
The brine shrimp Artemia salina, which can be found in the Makgadikgadi Pans in Botswana, survives over the dry season when the water of the pans evaporates, leaving a virtually desiccated lake bed.
The tardigrade, or water bear, can undergo all five types of cryptobiosis. While in a cryptobiotic state, its metabolism reduces to less than 0.01% of what is normal, and its water content can drop to 1% of normal. It can withstand extreme temperature, radiation, and pressure while in a cryptobiotic state.
Some nematodes and rotifers can also undergo cryptobiosis.
See also
References
Further reading
David A. Wharton, Life at the Limits: Organisms in Extreme Environments, Cambridge University Press, 2002.
Physiology
Senescence
Quantum biology
Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems.
Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Moreover, quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative.
Four major life processes have so far been identified as influenced by quantum effects: enzyme catalysis, sensory processes, energy transfer, and information encoding.
History
Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers, Niels Bohr, Pascual Jordan, and Max Delbrück, argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin proposed proton tunneling as another mechanism for DNA mutation; in the same paper, he stated that there is a new field of study called "quantum biology". In 1979, the Soviet and Ukrainian physicist Alexander Davydov published the first textbook on quantum biology, entitled Biology and Quantum Mechanics.
Enzyme catalysis
Enzymes have been postulated to use quantum tunneling to transfer electrons in electron transport chains. It is possible that protein quaternary architectures may have adapted to enable sustained quantum entanglement and coherence, two of the limiting factors for quantum tunneling in biological entities. These architectures might account for a greater percentage of quantum energy transfer, which occurs through electron transport and proton tunneling (usually in the form of hydrogen ions, H+). Tunneling refers to the ability of a subatomic particle to travel through potential energy barriers. This ability is due, in part, to the principle of complementarity, which holds that certain substances have pairs of properties that cannot be measured separately without changing the outcome of measurement. Particles such as electrons and protons have wave-particle duality; they can pass through energy barriers due to their wave characteristics without violating the laws of physics. To quantify how quantum tunneling is used in enzymatic activity, biophysicists observe the transfer of hydrogen ions: such transfers are a staple of an organelle's primary energy-processing network, and quantum effects are most usually at work in proton-transfer sites over distances on the order of an angstrom (1 Å). In physics, a semiclassical (SC) approach is most useful in describing this process because of the transfer from quantum elements (e.g. particles) to macroscopic phenomena (e.g. biochemicals). Aside from hydrogen tunneling, studies also show that electron transfer between redox centers through quantum tunneling plays an important role in the enzymatic activity of photosynthesis and cellular respiration (see also the Mitochondria section below).
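The sharp distance dependence of tunneling noted above can be illustrated with the textbook square-barrier model. The sketch below is not from the source; it assumes the opaque-barrier limit T ≈ exp(−2d·√(2m(V−E))/ħ) and an arbitrary 0.5 eV barrier, and is meant only to show why proton transfer is confined to roughly ångström distances while electrons can tunnel much farther:

```python
import math

HBAR = 1.054571817e-34         # reduced Planck constant (J*s)
EV = 1.602176634e-19           # one electronvolt in joules
M_ELECTRON = 9.1093837015e-31  # electron mass (kg)
M_PROTON = 1.67262192369e-27   # proton mass (kg)

def transmission(mass_kg, barrier_ev, width_m):
    """Square-barrier tunneling probability in the opaque-barrier limit,
    T ~ exp(-2*kappa*d), with kappa = sqrt(2*m*(V - E))/hbar."""
    kappa = math.sqrt(2.0 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# The 0.5 eV barrier height is an assumed, illustrative value.
for width_nm in (0.1, 0.5, 1.0):
    d = width_nm * 1e-9
    print(f"d = {width_nm:3.1f} nm: "
          f"T(electron) = {transmission(M_ELECTRON, 0.5, d):.1e}, "
          f"T(proton) = {transmission(M_PROTON, 0.5, d):.1e}")
```

Because the decay constant scales with the square root of the particle mass, the proton's transmission collapses by many orders of magnitude over distances where the electron's barely changes.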
Ferritin
Ferritin is an iron storage protein that is found in plants and animals. It is usually formed from 24 subunits that self-assemble into a spherical shell that is approximately 2 nm thick, with an outer diameter that varies with iron loading up to about 16 nm. Up to ~4500 iron atoms can be stored inside the core of the shell in the Fe3+ oxidation state as water-insoluble compounds such as ferrihydrite and magnetite. Ferritin is able to store electrons for at least several hours, which reduce the Fe3+ to water soluble Fe2+. Electron tunneling as the mechanism by which electrons transit the 2 nm thick protein shell was proposed as early as 1988. Electron tunneling and other quantum mechanical properties of ferritin were observed in 1992, and electron tunneling at room temperature and ambient conditions was observed in 2005. Electron tunneling associated with ferritin is a quantum biological process, and ferritin is a quantum biological agent.
Electron tunneling through ferritin between electrodes is independent of temperature, which indicates that it is substantially coherent and activation-less. The electron tunneling distance is a function of the size of the ferritin. Single electron tunneling events can occur over distances of up to 8 nm through the ferritin, and sequential electron tunneling can occur up to 12 nm through the ferritin. It has been proposed that the electron tunneling is magnon-assisted and associated with magnetite microdomains in the ferritin core.
Early evidence of quantum mechanical properties exhibited by ferritin in vivo was reported in 2004, where increased magnetic ordering of ferritin structures in placental macrophages was observed using small angle neutron scattering (SANS). Quantum dot solids also show increased magnetic ordering in SANS testing, and can conduct electrons over long distances. Increased magnetic ordering of ferritin cores disposed in an ordered layer on a silicon substrate with SANS testing has also been observed. Ferritin structures like those in placental macrophages have been tested in solid state configurations and exhibit quantum dot solid-like properties of conducting electrons over distances of up to 80 microns through sequential tunneling and formation of Coulomb blockades. Electron transport through ferritin in placental macrophages may be associated with an anti-inflammatory function.
Conductive atomic force microscopy of substantia nigra pars compacta (SNc) tissue demonstrated evidence of electron tunneling between ferritin cores, in structures that correlate to layers of ferritin outside of neuromelanin organelles.
Evidence of ferritin layers in cell bodies of large dopamine neurons of the SNc, and between those cell bodies in glial cells, has also been found, and is hypothesized to be associated with neuron function. Overexpression of ferritin reduces the accumulation of reactive oxygen species (ROS), and may act as a catalyst by increasing the ability of electrons from antioxidants to neutralize ROS through electron tunneling. Ferritin has also been observed in ordered configurations in lysosomes associated with erythropoiesis, where it may be associated with red blood cell production. While direct evidence of tunneling associated with ferritin in vivo in live cells has not yet been obtained, it may be possible to do so using quantum dots (QDs) tagged with anti-ferritin, which should emit photons if electrons stored in the ferritin core tunnel to the QD.
Sensory processes
Olfaction
Olfaction, the sense of smell, can be broken down into two parts: the reception and detection of a chemical, and how that detection is sent to and processed by the brain. This process of detecting an odorant is still under question. One theory, named the "shape theory of olfaction", suggests that certain olfactory receptors are triggered by certain shapes of chemicals and those receptors send a specific message to the brain. Another theory (based on quantum phenomena) suggests that the olfactory receptors detect the vibration of the molecules that reach them and that the "smell" is due to different vibrational frequencies; this theory is aptly called the "vibration theory of olfaction".
The vibration theory of olfaction, created in 1938 by Malcolm Dyson but reinvigorated by Luca Turin in 1996, proposes that the mechanism for the sense of smell is due to G-protein receptors that detect molecular vibrations via inelastic electron tunneling (tunneling in which the electron loses energy) across molecules. In this process, a molecule would occupy the binding site of a G-protein receptor. After the binding of the chemical to the receptor, the chemical would then act as a bridge allowing the electron to be transferred through the protein. As the electron transfers across what would otherwise have been a barrier, it loses energy to a vibrational mode of the newly bound molecule. This results in the ability to smell the molecule.
While the vibration theory has some experimental proof of concept, experiments have produced conflicting results. In some experiments, animals are able to distinguish smells between molecules of different frequencies and the same structure, while other experiments show that people cannot distinguish smells on the basis of distinct molecular frequencies alone.
Vision
Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photoreceptor, and the resulting signal transduction pathways lead to a visual signal. The photoisomerization reaction occurs at a rapid rate, in under 200 femtoseconds, and with high yield. Models suggest the use of quantum effects in shaping the ground-state and excited-state potentials in order to achieve this efficiency.
The sensor in the retina of the human eye is sensitive enough to detect a single photon. Single-photon detection could lead to multiple different technologies. One area of development is in quantum communication and cryptography. The idea is to use a biometric system to measure the eye using only a small number of points across the retina, with random flashes of photons that "read" the retina and identify the individual. This biometric system would only allow a certain individual with a specific retinal map to decode the message. The message cannot be decoded by anyone else unless the eavesdropper were to guess the proper map or could read the retina of the intended recipient of the message.
Energy transfer
Photosynthesis
Photosynthesis refers to the biological process that photosynthetic cells use to synthesize organic compounds from inorganic starting materials using sunlight. What has been primarily implicated as exhibiting non-trivial quantum behaviors is the light reaction stage of photosynthesis. In this stage, photons are absorbed by the membrane-bound photosystems. Photosystems contain two major domains, the light-harvesting complex (antennae) and the reaction center. These antennae vary among organisms. For example, bacteria use circular aggregates of chlorophyll pigments, while plants use membrane-embedded protein and chlorophyll complexes. Regardless, photons are first captured by the antennae and passed on to the reaction-center complex. Various pigment-protein complexes, such as the FMO complex in green sulfur bacteria, are responsible for transferring energy from antennae to reaction site. The photon-driven excitation of the reaction-center complex mediates the oxidation and the reduction of the primary electron acceptor, a component of the reaction-center complex. Much like the electron transport chain of the mitochondria, a linear series of oxidations and reductions drives proton (H+) pumping across the thylakoid membrane, the development of a proton motive force, and energetic coupling to the synthesis of ATP.
Previous understandings of electron-excitation transference (EET) from light-harvesting antennae to the reaction center have relied on the Förster theory of incoherent EET, postulating weak electron coupling between chromophores and incoherent hopping from one to another. This theory has largely been disproven by FT electron spectroscopy experiments that show electron absorption and transfer with an efficiency of above 99%, which cannot be explained by classical mechanical models. Instead, as early as 1938, scientists theorized that quantum coherence was the mechanism for excitation-energy transfer. Indeed, the structure and nature of the photosystem places it in the quantum realm, with EET ranging from the femto- to nanosecond scale, covering sub-nanometer to nanometer distances. The effects of quantum coherence on EET in photosynthesis are best understood through state and process coherence. State coherence refers to the extent of individual superpositions of ground and excited states for quantum entities, such as excitons. Process coherence, on the other hand, refers to the degree of coupling between multiple quantum entities and their evolution as dominated by either unitary or dissipative parts, which compete with one another. Both of these types of coherence are implicated in photosynthetic EET, where an exciton is coherently delocalized over several chromophores. This delocalization allows the system to simultaneously explore several energy paths and use constructive and destructive interference to guide the path of the exciton's wave packet. It is presumed that natural selection has favored the most efficient path to the reaction center. Experimentally, the interaction between the different frequency wave packets, made possible by long-lived coherence, will produce quantum beats.
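Such quantum beats can be reproduced in a minimal two-chromophore exciton model. The sketch below is an illustration only, not taken from the source: the site energies and electronic coupling are assumed placeholder values rather than measured FMO parameters, and environmental dephasing, which damps the beats in real systems, is ignored:

```python
import numpy as np

# Two-site exciton Hamiltonian in the site basis (values in cm^-1).
# E1, E2, and J are assumed, illustrative parameters.
E1, E2, J = 0.0, 100.0, 50.0
H = np.array([[E1, J],
              [J, E2]])

CM_TO_RAD_FS = 2 * np.pi * 2.99792458e-5  # convert cm^-1 to rad/fs

evals, evecs = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0])  # excitation initially localized on site 1
coeffs = evecs.T @ psi0      # expansion in the exciton eigenbasis

for t_fs in range(0, 501, 100):
    phases = np.exp(-1j * evals * CM_TO_RAD_FS * t_fs)
    psi_t = evecs @ (phases * coeffs)
    print(f"t = {t_fs:3d} fs: population on site 1 = {abs(psi_t[0])**2:.3f}")
```

With these parameters the excitation oscillates between the two sites with a period of roughly 240 fs, the femtosecond-scale beating that two-dimensional spectroscopy experiments set out to detect.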
While quantum photosynthesis is still an emerging field, there have been many experimental results that support the quantum-coherence understanding of photosynthetic EET. A 2007 study claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K). In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. Single-molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged to explain unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and its thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds.
In 2017, the first control experiment with the original FMO protein under ambient conditions confirmed that electronic quantum effects are washed out within 60 femtoseconds, while the overall exciton transfer takes a time on the order of a few picoseconds. In 2020 a review based on a wide collection of control experiments and theory concluded that the proposed quantum effects as long lived electronic coherences in the FMO system does not hold. Instead, research investigating transport dynamics suggests that interactions between electronic and vibrational modes of excitation in FMO complexes require a semi-classical, semi-quantum explanation for the transfer of exciton energy. In other words, while quantum coherence dominates in the short-term, a classical description is most accurate to describe long-term behavior of the excitons.
Another process in photosynthesis that has almost 100% efficiency is charge transfer, again suggesting that quantum mechanical phenomena are at play. In 1966, a study on the photosynthetic bacterium Chromatium found that at temperatures below 100 K, cytochrome oxidation is temperature-independent, slow (on the order of milliseconds), and very low in activation energy. The authors, Don DeVault and Britton Chance, postulated that these characteristics of electron transfer are indicative of quantum tunneling, whereby electrons penetrate a potential barrier despite possessing less energy than is classically necessary.
Mitochondria
Mitochondria have been demonstrated to utilize quantum tunneling in their function as the powerhouse of eukaryotic cells. Similar to the light reactions in the thylakoid, linearly-associated membrane-bound proteins comprising the electron transport chain (ETC) energetically link the reduction of O2 with the development of a proton motive gradient (H+) across the inner membrane of the mitochondria. This energy stored as a proton motive gradient is then coupled with the synthesis of ATP. It is significant that the mitochondrial conversion of biomass into chemical ATP achieves 60-70% thermodynamic efficiency, far superior to that of man-made engines. This high degree of efficiency is largely attributed to the quantum tunnelling of electrons in the ETC and of protons in the proton motive gradient. Indeed, electron tunneling has already been demonstrated in certain elements of the ETC, including NADH:ubiquinone oxidoreductase (Complex I) and CoQH2-cytochrome c reductase (Complex III).
In quantum mechanics, both electrons and protons are quantum entities that exhibit wave-particle duality, displaying particle- and wave-like properties depending on the method of experimental observation. Quantum tunneling is a direct consequence of this wave-like nature, permitting passage through a potential energy barrier that would otherwise confine the entity. The probability of tunneling depends on the shape and size of the potential barrier relative to the incoming energy of the particle. Because the incoming particle is described by its wave function, its tunneling probability depends exponentially on the barrier's dimensions: if the barrier is relatively wide, the incoming particle's probability of tunneling decreases sharply. The potential barrier can, in some sense, take the form of an actual biomaterial barrier: the inner mitochondrial membrane, which houses the various components of the ETC, is on the order of 7.5 nm thick. This membrane must be traversed for signals (in the form of electrons, protons, H+) to pass from the site of emittance (internal to the mitochondrion) to the site of acceptance (the electron transport chain proteins). To transfer particles, the membrane must have an appropriate density of phospholipids, producing a charge distribution that attracts the particle in question; for instance, a greater density of phospholipids contributes to a greater conductance of protons.
Molecular solitons in proteins
Alexander Davydov developed the quantum theory of molecular solitons in order to explain the transport of energy in protein α-helices in general and the physiology of muscle contraction in particular. He showed that molecular solitons are able to preserve their shape through nonlinear interaction of amide I excitons and phonon deformations inside the lattice of hydrogen-bonded peptide groups. In 1979, Davydov published his complete textbook on quantum biology, Biology and Quantum Mechanics, featuring the quantum dynamics of proteins, cell membranes, bioenergetics, muscle contraction, and electron transport in biomolecules.
Information encoding
Magnetoreception
Magnetoreception is the ability of animals to navigate using the inclination of the magnetic field of the Earth. A possible explanation for magnetoreception is the entangled radical-pair mechanism. The radical-pair mechanism is well established in spin chemistry, and was speculated to apply to magnetoreception in 1978 by Schulten et al. The ratio between singlet and triplet pairs is changed by the interaction of entangled electron pairs with the magnetic field of the Earth. In 2000, cryptochrome was proposed as the "magnetic molecule" that could harbor magnetically sensitive radical pairs. Cryptochrome, a flavoprotein found in the eyes of European robins and other animal species, is the only protein known to form photoinduced radical pairs in animals. When it interacts with light, cryptochrome goes through a redox reaction, which yields radical pairs both during the photo-reduction and the oxidation. The function of cryptochrome varies across species; however, the photoinduction of radical pairs occurs by exposure to blue light, which excites an electron in a chromophore. Magnetoreception is also possible in the dark, so the mechanism must rely more on the radical pairs generated during light-independent oxidation.
Experiments in the lab support the basic theory that radical-pair electrons can be significantly influenced by very weak magnetic fields, i.e., merely the direction of weak magnetic fields can affect a radical pair's reactivity and therefore can "catalyze" the formation of chemical products. Whether this mechanism applies to magnetoreception and/or quantum biology, that is, whether Earth's magnetic field "catalyzes" the formation of biochemical products by the aid of radical pairs, is not fully clear. Radical pairs need not be entangled, the key quantum feature of the radical-pair mechanism, to play a part in these processes. There are entangled and non-entangled radical pairs, but disturbing only entangled radical pairs is not possible with current technology. Researchers found evidence for the radical-pair mechanism of magnetoreception when European robins, cockroaches, and garden warblers could no longer navigate when exposed to a radio frequency that obstructs magnetic fields and radical-pair chemistry. Further evidence came from a comparison of Cryptochrome 4 (CRY4) from migrating and non-migrating birds. CRY4 from chicken and pigeon were found to be less sensitive to magnetic fields than those from the (migrating) European robin, suggesting evolutionary optimization of this protein as a sensor of magnetic fields.
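The field sensitivity of singlet-triplet interconversion can be illustrated with the simplest "Δg" radical-pair model, in which the two unpaired electrons precess at slightly different Larmor frequencies. The sketch below is a toy model with assumed g-values, not a description of cryptochrome, where hyperfine couplings rather than the g-factor difference are thought to dominate at Earth-strength fields:

```python
import math

MU_B = 9.2740100783e-24  # Bohr magneton (J/T)
HBAR = 1.054571817e-34   # reduced Planck constant (J*s)

def singlet_probability(t_s, field_t, g1=2.0023, g2=2.0040):
    """Probability that a radical pair created in the singlet state is
    still singlet at time t: in the toy Delta-g model, the two electrons
    precess at different Larmor frequencies, so the pair oscillates
    between singlet and triplet at the difference frequency.
    The g-values are assumed, illustrative numbers."""
    delta_omega = (g1 - g2) * MU_B * field_t / HBAR
    return math.cos(delta_omega * t_s / 2.0) ** 2

B_EARTH = 50e-6  # approximately Earth-strength field (T), an assumption
for t_us in (0, 50, 100, 200):
    p = singlet_probability(t_us * 1e-6, B_EARTH)
    print(f"t = {t_us:3d} us: P(singlet) = {p:.3f}")
```

Even this stripped-down model shows the essential point: the singlet fraction, and hence the yield of singlet versus triplet reaction products, depends on the magnetic field, which is what allows a weak field to "catalyze" one chemical outcome over another.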
DNA mutation
DNA acts as the instructions for making proteins throughout the body. It consists of 4 nucleotides: guanine, thymine, cytosine, and adenine. The order of these nucleotides gives the "recipe" for the different proteins.
Whenever a cell reproduces, it must copy these strands of DNA. However, at some point during the copying of the DNA strand, a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Löwdin DNA mutation model. In this model, a nucleotide may spontaneously change its form through a process of quantum tunneling. Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently change the structure and order of the DNA strand.
Exposure to ultraviolet light and other types of radiation can cause DNA mutation and damage. Radiation can also modify the bonds along the DNA strand between pyrimidines, causing them to bond with each other and form a dimer.
In many prokaryotes and plants, these bonds are repaired by a DNA-repair-enzyme photolyase. As its prefix implies, photolyase is reliant on light in order to repair the strand. Photolyase works with its cofactor FADH, flavin adenine dinucleotide, while repairing the DNA. Photolyase is excited by visible light and transfers an electron to the cofactor FADH. FADH—now in the possession of an extra electron—transfers the electron to the dimer to break the bond and repair the DNA. The electron tunnels from the FADH to the dimer. Although the range of this tunneling is much larger than feasible in a vacuum, the tunneling in this scenario is said to be "superexchange-mediated tunneling," and is possible due to the protein's ability to boost the tunneling rates of the electron.
Other
Other quantum phenomena in biological systems include the conversion of chemical energy into motion and Brownian motors in many cellular processes.
Pseudoscience
Alongside the multiple strands of scientific inquiry into quantum mechanics has come unconnected pseudoscientific interest, which has caused scientists to approach quantum biology cautiously.
Hypotheses such as orchestrated objective reduction which postulate a link between quantum mechanics and consciousness have drawn criticism from the scientific community with some claiming it to be pseudoscientific and "an excuse for quackery".
References
External links
Philip Ball (2015). "Quantum Biology: An Introduction". The Royal Institution
Quantum Biology and the Hidden Nature of Nature, World Science Festival 2012, video of podium discussion
Quantum Biology: Current Status and Opportunities, September 17-18, 2012, University of Surrey, UK
Biophysics
Vestigiality
Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and may persist across species.
Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale.
Overview
Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood.
Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function. Vestigial organs are a familiar line of evidence in evolutionary biology. In addition, the term vestigiality is useful in referring to many genetically determined features, whether morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significant digestive function.
Similar concepts apply at the molecular level—some nucleic acid sequences in eukaryotic genomes have no known biological function; some of them may be "junk DNA", but it is a difficult matter to demonstrate that a particular sequence in a particular region of a given genome is truly nonfunctional. The simple fact that it is noncoding DNA does not establish that it is functionless. Furthermore, even if an extant DNA sequence is functionless, it does not follow that it has descended from an ancestral sequence of functional DNA. Logically such DNA would not be vestigial in the sense of being the vestige of a functional structure. In contrast pseudogenes have lost their protein-coding ability or are otherwise no longer expressed in the cell. Whether they have any extant function or not, they have lost their former function and in that sense, they do fit the definition of vestigiality.
Vestigial structures are often called vestigial organs, although many of them are not actually organs. Such vestigial structures typically are degenerate, atrophied, or rudimentary, and tend to be much more variable than homologous non-vestigial parts. Although structures commonly regarded "vestigial" may have lost some or all of the functional roles that they had played in ancestral organisms, such structures may retain lesser functions or may have become adapted to new roles in extant populations.
It is important to avoid confusion of the concept of vestigiality with that of exaptation. Both may occur together in the same example, depending on the relevant point of view. In exaptation, a structure originally used for one purpose is modified for a new one. For example, the wings of penguins would be exaptational in the sense of serving a substantial new purpose (underwater locomotion), but might still be regarded as vestigial in the sense of having lost the function of flight. In contrast, Darwin argued that the wings of emus would be definitely vestigial, as they appear to have no major extant function; however, function is a matter of degree, so judgments on what is a "major" function are arbitrary; the emu does seem to use its wings as organs of balance in running. Similarly, the ostrich uses its wings in displays and temperature control, though they are undoubtedly vestigial as structures for flight.
Vestigial characters range from detrimental through neutral to favorable in terms of selection. Some may be of some limited utility to an organism but still degenerate over time if they do not confer a significant enough advantage in terms of fitness to avoid the effects of genetic drift or competing selective pressures. Vestigiality in its various forms presents many examples of evidence for biological evolution.
History
Vestigial structures have been noticed since ancient times, and the reason for their existence was long speculated upon before Darwinian evolution provided a widely accepted explanation. In the 4th century BC, Aristotle was one of the earliest writers to comment, in his History of Animals, on the vestigial eyes of moles, calling them "stunted in development" due to the fact that moles can scarcely see. However, only in recent centuries have anatomical vestiges become a subject of serious study. In 1798, Étienne Geoffroy Saint-Hilaire commented on vestigial structures.
His colleague, Jean-Baptiste Lamarck, named a number of vestigial structures in his 1809 book Philosophie Zoologique. Lamarck noted "Olivier's Spalax, which lives underground like the mole, and is apparently exposed to daylight even less than the mole, has altogether lost the use of sight: so that it shows nothing more than vestiges of this organ."
Charles Darwin was familiar with the concept of vestigial structures, though the term for them did not yet exist. He listed a number of them in The Descent of Man, including the muscles of the ear, wisdom teeth, the appendix, the tail bone, body hair, and the semilunar fold in the corner of the eye. Darwin also noted, in On the Origin of Species, that a vestigial structure could be useless for its primary function, but still retain secondary anatomical roles: "An organ serving for two purposes, may become rudimentary or utterly aborted for one, even the more important purpose, and remain perfectly efficient for the other.... [A]n organ may become rudimentary for its proper purpose, and be used for a distinct object."
In the first edition of On the Origin of Species, Darwin briefly mentioned inheritance of acquired characters under the heading "Effects of Use and Disuse", expressing little doubt that use "strengthens and enlarges certain parts, and disuse diminishes them; and that such modifications are inherited". In later editions he expanded his thoughts on this, and in the final chapter of the 6th edition concluded that species have been modified "chiefly through the natural selection of numerous successive, slight, favorable variations; aided in an important manner by the inherited effects of the use and disuse of parts".
In 1893, Robert Wiedersheim published The Structure of Man, a book on human anatomy and its relevance to man's evolutionary history. The Structure of Man contained a list of 86 human organs that Wiedersheim described as, "Organs having become wholly or in part functionless, some appearing in the Embryo alone, others present during Life constantly or inconstantly. For the greater part Organs which may be rightly termed Vestigial." Since his time, the function of some of these structures have been discovered, while other anatomical vestiges have been unearthed, making the list primarily of interest as a record of the knowledge of human anatomy at the time. Later versions of Wiedersheim's list were expanded to as many as 180 human "vestigial organs". This is why the zoologist Horatio Newman said in a written statement read into evidence in the Scopes Trial that "There are, according to Wiedersheim, no less than 180 vestigial structures in the human body, sufficient to make of a man a veritable walking museum of antiquities."
Common descent and evolutionary theory
Vestigial structures are often homologous to structures that function normally in other species. Therefore, vestigial structures can be considered evidence for evolution, the process by which beneficial heritable traits arise in populations over an extended period of time. The existence of vestigial traits can be attributed to changes in the environment and behavior patterns of the organism in question. An examination of these traits makes clear the large role evolution has played in the development of organisms. Every anatomical structure or behavioral response had origins in which it was, at one time, useful. As time progressed, the descendants of ancient common ancestors changed as well, with natural selection playing a major role: more advantageous structures were selected, while others were not, and some traits fell by the wayside. When the function of a trait is no longer beneficial for survival, the likelihood that future offspring will inherit the "normal" form of it decreases. In some cases, the structure becomes detrimental to the organism (for example, the eyes of a mole can become infected). In many cases the structure is of no direct harm, yet all structures require extra energy in terms of development, maintenance, and weight, and are also a risk in terms of disease (e.g., infection, cancer), providing some selective pressure for the removal of parts that do not contribute to an organism's fitness. A structure that is not harmful will take longer to be 'phased out' than one that is. However, some vestigial structures may persist due to limitations in development, such that complete loss of the structure could not occur without major alterations of the organism's developmental pattern, and such alterations would likely produce numerous negative side-effects. The toes of many animals such as horses, which stand on a single toe, are still present in a vestigial form and may, on rare occasions, become evident in individuals.
The vestigial versions of the structure can be compared to the original version of the structure in other species in order to determine the homology of a vestigial structure. Homologous structures indicate common ancestry with those organisms that have a functional version of the structure. Douglas Futuyma has stated that vestigial structures make no sense without evolution, just as spelling and usage of many modern English words can only be explained by their Latin or Old Norse antecedents.
Vestigial traits can still be considered adaptations. This is because an adaptation is often defined as a trait that has been favored by natural selection. Adaptations, therefore, need not be adaptive, as long as they were at some point.
Examples
Non-human animals
Vestigial characters are present throughout the animal kingdom, and an almost endless list could be given. Darwin said that "it would be impossible to name one of the higher animals in which some part or other is not in a rudimentary condition."
The wings of ostriches, emus, and other flightless birds are vestigial; they are remnants of their flying ancestors' wings. These birds still go through the effort of developing wings, even though most are too large to use the wings successfully. Vestigial wings are also common in birds that no longer need to fly to escape predators, such as birds on the Galapagos Islands. The eyes of certain cavefish and salamanders are vestigial, as they no longer allow the organism to see, and are remnants of their ancestors' functional eyes. Animals that reproduce without sex (via asexual reproduction) generally lose their sexual traits, such as the ability to locate and recognize the opposite sex, and copulation behavior.
Boas and pythons have vestigial pelvis remnants, which are externally visible as two small pelvic spurs on each side of the cloaca. These spurs are sometimes used in copulation, but are not essential, as no colubrid snake (the vast majority of species) possesses these remnants. Furthermore, in most snakes, the left lung is greatly reduced or absent. Amphisbaenians, which independently evolved limblessness, also retain vestiges of the pelvis as well as the pectoral girdle, and have lost their right lung.
A case of vestigial organs was described in polyopisthocotylean Monogeneans (parasitic flatworms). These parasites usually have a posterior attachment organ with several clamps, which are sclerotised organs attaching the worm to the gill of the host fish. These clamps are extremely important for the survival of the parasite. In the family Protomicrocotylidae, species have either normal clamps, simplified clamps, or no clamps at all (in the genus Lethacotyle). After a comparative study of the relative surface of clamps in more than 100 Monogeneans, this has been interpreted as an evolutionary sequence leading to the loss of clamps. Coincidentally, other attachment structures (lateral flaps, transverse striations) have evolved in protomicrocotylids. Therefore, clamps in protomicrocotylids were considered vestigial organs.
In the foregoing examples the vestigiality is generally the (sometimes incidental) result of adaptive evolution. However, there are many examples of vestigiality as the product of drastic mutation, and such vestigiality is usually harmful or counter-adaptive. One of the earliest documented examples was that of vestigial wings in Drosophila. Many examples in many other contexts have emerged since.
Humans
Human vestigiality is related to human evolution, and includes a variety of characters occurring in the human species. Many examples of these are vestigial in other primates and related animals, whereas other examples are still highly developed. The human caecum is vestigial, as is often the case in omnivores, being reduced to a single chamber receiving the content of the ileum into the colon. The ancestral caecum would have been a large, blind diverticulum in which resistant plant material such as cellulose would have been fermented in preparation for absorption in the colon. Analogous organs in other animals similar to humans continue to perform similar functions. The coccyx, or tailbone, though a vestige of the tail of some primate ancestors, is functional as an anchor for certain pelvic muscles, including the levator ani muscle and the largest gluteal muscle, the gluteus maximus.
Other structures that are vestigial include the plica semilunaris on the inside corner of the eye (a remnant of the nictitating membrane); and (as seen at right) muscles in the ear. Other organic structures (such as the occipitofrontalis muscle) have lost their original functions (to keep the head from falling) but are still useful for other purposes (facial expression).
Humans also bear some vestigial behaviors and reflexes. The formation of goose bumps in humans under stress is a vestigial reflex; its function in human ancestors was to raise the body's hair, making the ancestor appear larger and scaring off predators. The arrector pili (the muscle that connects the hair follicle to connective tissue) contracts and creates the goose bumps on the skin.
There are also vestigial molecular structures in humans, which are no longer in use but may indicate common ancestry with other species. One example of this is a gene that is functional in most other mammals and which produces L-gulonolactone oxidase, an enzyme that can make vitamin C. A documented mutation deactivated the gene in an ancestor of the modern infraorder of monkeys and apes, and it now remains in their genomes, including the human genome, as a vestigial sequence called a pseudogene.
The shift in human diet towards soft and processed food over time caused a reduction in the number of powerful grinding teeth, especially the third molars (also known as wisdom teeth), which were highly prone to impaction.
Plants and fungi
Plants also have vestigial parts, including functionless stipules and carpels, the reduced leaves of Equisetum, and the paraphyses of fungi. Well-known examples are the reductions in floral display, leading to smaller and/or paler flowers, in plants that reproduce without outcrossing, for example via selfing or obligate clonal reproduction.
Objects
Many objects in daily use contain vestigial structures. While not the result of natural selection through random mutation, much of the process is the same. Product design, like evolution, is iterative; it builds on features and processes that already exist, with limited resources available to make tweaks. Spending resources on completely weeding out a form that serves no purpose, so long as it is not an obstruction either, is not economically astute. These vestigial structures differ from the concept of skeuomorphism in that a skeuomorph is a design feature that has been specifically implemented as a reference to the past, enabling users to acclimatise more quickly. A vestigial feature does not exist intentionally, or even usefully.
For example, men's business suits often contain a row of buttons at the bottom of the sleeve. These used to serve a purpose, allowing the sleeve to be split and rolled up. The feature has been lost entirely, though most suits still give the impression that it is possible, complete with fake button holes. There is also an example of exaptation to be found in the business suit: it was previously possible to button a jacket up all the way to the top. As it became the fashion to fold the lapel over, the top half of buttons and their accompanying buttonholes disappeared, save for a single hole at the top; it has since found a new use as a place to fasten pins, badges, or boutonnières.
As a final example, soldiers in ceremonial or parade uniform can sometimes be seen wearing a gorget: a small decorative piece of metal suspended around the neck with a chain. The gorget offers no protection to the wearer, yet there exists an unbroken lineage from the gorget to the full suits of armour of the Middle Ages. With the introduction of gunpowder weapons, armour increasingly lost its usefulness on the battlefield. At the same time, military men were keen to retain the status it provided them. The result: a breastplate that "shrank" away over time, but never disappeared completely.
See also
Atavism
Dewclaw
Exaptation
Evolutionary anachronism
Human vestigiality
Maladaptation
Plantaris muscle
Recessive refuge
Spandrel (biology)
Vestigial response
References
External links
Vestigial organs at the TalkOrigins Archive
Evolutionary biology concepts
Developmental bioelectricity
Developmental bioelectricity is the regulation of cell, tissue, and organ-level patterning and behavior by electrical signals during the development of embryonic animals and plants. The charge carrier in developmental bioelectricity is the ion (a charged atom) rather than the electron, and an electric current and field is generated whenever a net ion flux occurs. Cells and tissues of all types use flows of ions to communicate electrically. Endogenous electric currents and fields, ion fluxes, and differences in resting potential across tissues comprise a signalling system. It functions along with biochemical factors, transcriptional networks, and other physical forces to regulate cell behaviour and large-scale patterning in processes such as embryogenesis, regeneration, and cancer suppression.
Overview
Developmental bioelectricity is a sub-discipline of biology, related to, but distinct from, neurophysiology and bioelectromagnetics. Developmental bioelectricity refers to the endogenous ion fluxes, transmembrane and transepithelial voltage gradients, and electric currents and fields produced and sustained in living cells and tissues. This electrical activity is often used during embryogenesis, regeneration, and cancer suppression—it is one layer of the complex field of signals that impinge upon all cells in vivo and regulate their interactions during pattern formation and maintenance. This is distinct from neural bioelectricity (classically termed electrophysiology), which refers to the rapid and transient spiking in well-recognized excitable cells like neurons and myocytes (muscle cells); and from bioelectromagnetics, which refers to the effects of applied electromagnetic radiation, and endogenous electromagnetics such as biophoton emission and magnetite.
The inside/outside discontinuity at the cell surface enabled by a lipid bilayer membrane (a capacitor) is at the core of bioelectricity. The plasma membrane was an indispensable structure for the origin and evolution of life itself. It provided compartmentalization, permitting the setting of a differential voltage/potential gradient (battery or voltage source) across the membrane, probably allowing early and rudimentary bioenergetics that fueled cell mechanisms. During evolution, the initially purely passive diffusion of ions (charge carriers) became gradually controlled by the acquisition of ion channels, pumps, exchangers, and transporters. These energetically free (resistors or conductors, passive transport) or expensive (current sources, active transport) translocators set and fine-tune voltage gradients – resting potentials – that are ubiquitous and essential to life's physiology, from bioenergetics, motion, sensing, and nutrient transport to toxin clearance and signaling in homeostatic and disease/injury conditions. Upon stimuli or barrier breaking (short-circuiting) of the membrane, ions powered by the voltage gradient (electromotive force) diffuse or leak, respectively, through the cytoplasm and interstitial fluids (conductors), generating measurable electric currents – net ion fluxes – and fields. Some ions (such as calcium) and molecules (such as hydrogen peroxide) modulate targeted translocators to produce a current or to enhance, mitigate or even reverse an initial current, acting as switches.
Endogenous bioelectric signals are produced in cells by the cumulative action of ion channels, pumps, and transporters. In non-excitable cells, the resting potential across the plasma membrane (Vmem) of individual cells propagates across distances via electrical synapses known as gap junctions (conductors), which allow cells to share their resting potential with neighbors. Aligned and stacked cells (such as in epithelia) generate transepithelial potentials (batteries in series) and electric fields, which likewise propagate across tissues. Tight junctions (resistors) efficiently mitigate paracellular ion diffusion and leakage, precluding a voltage short-circuit. Together, these voltages and electric fields form rich and dynamic patterns inside living bodies that demarcate anatomical features, thus acting like blueprints for gene expression and morphogenesis in some instances. More than mere correlates, these bioelectrical distributions are dynamic, evolving with time, with the microenvironment, and even with long-distant conditions to serve as instructive influences over cell behavior and large-scale patterning during embryogenesis, regeneration, and cancer suppression. Bioelectric control mechanisms are an important emerging target for advances in regenerative medicine, birth defects, cancer, and synthetic bioengineering.
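The magnitude of Vmem set by this channel machinery can be estimated from ion concentrations and relative permeabilities using the Goldman–Hodgkin–Katz voltage equation. The Python sketch below is purely illustrative: the function name is ours, and the squid-axon-style concentrations and permeability ratios are generic textbook-like example values, not measurements from any particular developmental system.

```python
import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol

def ghk_voltage(perm, conc_out, conc_in, temperature=293.0):
    """Goldman-Hodgkin-Katz voltage equation for K+, Na+, and Cl-.

    perm: relative permeabilities; conc_out/conc_in: concentrations (mM).
    Cl- carries charge -1, so its inside/outside terms are swapped.
    Returns the membrane potential in volts.
    """
    num = (perm["K"] * conc_out["K"] + perm["Na"] * conc_out["Na"]
           + perm["Cl"] * conc_in["Cl"])
    den = (perm["K"] * conc_in["K"] + perm["Na"] * conc_in["Na"]
           + perm["Cl"] * conc_out["Cl"])
    return (R * temperature / F) * math.log(num / den)

# Textbook-style example values (not from any particular study).
perm = {"K": 1.0, "Na": 0.04, "Cl": 0.45}         # resting permeability ratios
conc_out = {"K": 20.0, "Na": 440.0, "Cl": 560.0}  # mM, extracellular
conc_in = {"K": 400.0, "Na": 50.0, "Cl": 52.0}    # mM, intracellular

vmem = ghk_voltage(perm, conc_out, conc_in)
print(f"Resting potential ~ {1000 * vmem:.0f} mV")  # about -60 mV
```

Developmental experiments effectively turn the permeability knobs in such a calculation: misexpressing or blocking a channel shifts the ratios and therefore shifts Vmem toward depolarized or hyperpolarized values.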
History
18th century
Developmental bioelectricity began in the 18th century. Several seminal works stimulating muscle contractions using Leyden jars culminated with the publication of classical studies by Luigi Galvani in 1791 (De viribus electricitatis in motu musculari) and 1794. In these, Galvani thought he had uncovered an intrinsic electricity-producing ability in living tissues, or "animal electricity". Alessandro Volta showed that the frog's leg muscle twitching could instead be attributed to a static electricity generator and to dissimilar metals undergoing or catalyzing electrochemical reactions. Galvani responded, in a 1794 study, by demonstrating twitching without metallic electricity, touching the leg muscle with a deviated cut sciatic nerve and thereby definitively demonstrating "animal electricity". Unknowingly, with this and related experiments, Galvani discovered the injury current (ion leakage driven by the intact membrane/epithelial potential) and the injury potential (the potential difference between injured and intact membrane/epithelium). The injury potential was, in fact, the electrical source behind the leg contraction, as realized in the next century. Subsequent work ultimately extended this field broadly beyond nerve and muscle to all cells, from bacteria to non-excitable mammalian cells.
19th century
Building on earlier studies, further glimpses of developmental bioelectricity occurred with the discovery of wound-related electric currents and fields in the 1840s, when the electrophysiologist Emil du Bois-Reymond reported macroscopic-level electrical activities in frog, fish, and human bodies. He recorded minute electric currents in live tissues and organisms with a then state-of-the-art galvanometer made of insulated copper wire coils. He unveiled the fast-changing electricity associated with muscle contraction and nerve excitation – the action potentials. Du Bois-Reymond also reported in detail the less fluctuating electricity at wounds – the injury current and potential – that he made to himself.
Early 20th century
Developmental bioelectricity work began in earnest at the beginning of the 20th century. Ida H. Hyde studied the role of electricity in the development of eggs.
T. H. Morgan and others studied the electrophysiology of the earthworm.
Oren E. Frazee studied the effects of electricity on limb regeneration in amphibians.
E. J. Lund explored morphogenesis in flowering plants.
Libbie Hyman studied vertebrate and invertebrate animals.
In the 1920s and 1930s, Elmer J. Lund and Harold Saxton Burr wrote multiple papers about the role of electricity in embryonic development. Lund measured currents in a large number of living model systems, correlating them to changes in patterning. In contrast, Burr used a voltmeter to measure voltage gradients, examining developing embryonic tissues and tumors in a range of animals and plants. Applied electric fields were demonstrated to alter the regeneration of planarians by Marsh and Beams in the 1940s and 1950s, inducing the formation of heads or tails at cut sites and reversing the primary body polarity.
Late 20th century
In the 1970s, Lionel Jaffe and Richard Nuccittelli's introduction and development of the vibrating probe, the first device for quantitative, non-invasive characterization of minute extracellular ion currents, revitalized the field.
Researchers such as Joseph Vanable, Richard Borgens, Ken Robinson, and Colin McCaig explored the roles of endogenous bioelectric signaling in limb development and regeneration, embryogenesis, organ polarity, and wound healing.
C.D. Cone studied the role of resting potential in regulating cell differentiation and proliferation.
Subsequent work has identified specific regions of the resting potential spectrum that correspond to distinct cell states such as quiescent, stem, cancer, and terminally differentiated.
Although this body of work generated a significant amount of high-quality physiological data, this large-scale biophysics approach has historically come second to the study of biochemical gradients and genetic networks in biology education, funding, and overall popularity among biologists. A key factor that contributed to this field lagging behind molecular genetics and biochemistry is that bioelectricity is inherently a living phenomenon – it cannot be studied in fixed specimens. Working with bioelectricity is more complex than traditional approaches to developmental biology, both methodologically and conceptually, as it typically requires a highly interdisciplinary approach.
Study techniques
Electrodes
The gold-standard techniques for quantitatively extracting electric dimensions from living specimens, from the cell to the organism level, are the glass microelectrode (or micropipette), the vibrating (or self-referencing) voltage probe, and the vibrating ion-selective microelectrode. The former is inherently invasive and the latter two are non-invasive, but all are ultra-sensitive, fast-responding sensors used extensively in a plethora of physiological conditions in widespread biological models.
The glass microelectrode was developed in the 1940s to study the action potential of excitable cells, deriving from the seminal work by Hodgkin and Huxley on the giant squid axon. It is simply a liquid salt bridge connecting the biological specimen with the electrode, protecting tissues from leachable toxins and the redox reactions of a bare electrode. Owing to their low impedance, low junction potential, and weak polarization, silver electrodes are standard transducers of ionic into electric current, which occurs through a reversible redox reaction at the electrode surface.
The vibrating probe was introduced into biological studies in the 1970s. The voltage-sensitive probe is electroplated with platinum to form a capacitive black-tipped ball with a large surface area. When vibrating in an artificial or natural DC voltage gradient, the capacitive ball produces a sinusoidal AC output. The amplitude of the wave is proportional to the potential difference being measured at the frequency of the vibration, efficiently filtered by a lock-in amplifier that boosts the probe's sensitivity.
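The benefit of vibration can be illustrated with a toy lock-in demodulation: oscillating the probe converts a small, steady potential difference into an AC signal at a known frequency, which is recovered by multiplying with a reference sinusoid and averaging, strongly rejecting broadband noise. All numbers below are invented for illustration; this is a sketch of the principle, not of any real instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

f_vib = 300.0                    # probe vibration frequency, Hz (illustrative)
fs = 50_000.0                    # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s recording

true_amp_uV = 5.0                # signal amplitude: 5 microvolts
signal = true_amp_uV * np.sin(2 * np.pi * f_vib * t)
noise = 200.0 * rng.standard_normal(t.size)   # broadband noise, 40x larger
measured = signal + noise

# Lock-in detection: multiply by the reference and low-pass (here: a mean).
# Since <sin^2> = 1/2, the mean of signal*reference equals amplitude/2.
reference = np.sin(2 * np.pi * f_vib * t)
recovered_amp = 2.0 * np.mean(measured * reference)

print(f"recovered ~ {recovered_amp:.2f} uV (true: {true_amp_uV} uV)")
```

Despite per-sample noise far exceeding the signal, averaging over the recording recovers the microvolt-scale amplitude, which is essentially what the lock-in amplifier does in hardware.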
The vibrating ion-selective microelectrode was first used in 1990 to measure calcium fluxes in various cells and tissues. The ion-selective microelectrode is an adaptation of the glass microelectrode in which an ion-specific liquid ion exchanger (ionophore) is tip-filled into a previously silanized (to prevent leakage) microelectrode. The microelectrode also vibrates at low frequencies to operate in the accurate self-referencing mode. Only the specific ion permeates the ionophore; the voltage readout is therefore proportional to the ion concentration under the measuring conditions. Flux is then calculated using Fick's first law.
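A minimal sketch of that last step, assuming an illustrative diffusion coefficient for Ca2+ and hypothetical concentration readings at the probe's two excursion positions (the function and all values are ours, not from any specific study):

```python
# Fick's first law from a self-referencing, ion-selective measurement:
# the probe reads concentration at two positions separated by its excursion.

D_CA = 7.9e-6          # illustrative Ca2+ diffusion coefficient, cm^2/s

def flux_ficks_first_law(c_near, c_far, dx_cm, D=D_CA):
    """J = -D * dC/dx, in mol / (cm^2 * s); positive J points away from tissue."""
    return -D * (c_far - c_near) / dx_cm

# Hypothetical readings: 1.05 mM near the tissue, 1.00 mM 10 um further away.
c_near = 1.05e-6       # mol/cm^3 (1 mM = 1e-6 mol/cm^3)
c_far = 1.00e-6        # mol/cm^3
dx = 10e-4             # 10 um probe excursion, in cm

J = flux_ficks_first_law(c_near, c_far, dx)
print(f"outward Ca2+ flux ~ {J:.2e} mol cm^-2 s^-1")
```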
Emerging optic-based techniques, for example the pH optrode (or optode), which can be integrated into a self-referencing system, may become an alternative or additional technique in bioelectricity laboratories. The optrode does not require referencing and is insensitive to electromagnetic interference, simplifying system setup and making it a suitable option for recordings where electric stimulation is simultaneously applied.
Much work to functionally study bioelectric signaling has made use of applied (exogenous) electric currents and fields via DC and AC voltage-delivering apparatus integrated with agarose salt bridges. These devices can generate countless combinations of voltage magnitude and direction, pulses, and frequencies. Currently, lab-on-a-chip mediated application of electric fields is gaining ground in the field, with the potential to enable high-throughput screening assays of the large combinatorial outputs.
Fluorescence
Progress in molecular biology over the last six decades has produced powerful tools that facilitate the dissection of biochemical and genetic signals; yet they tend not to be well suited for bioelectric studies in vivo. Prior work relied extensively on current applied directly by electrodes, reinvigorated by significant recent advances in materials science and extracellular current measurements, facilitated by sophisticated self-referencing electrode systems. While electrode applications for manipulating neurally controlled body processes have recently attracted much attention, there are other opportunities for controlling somatic processes, as most cell types are electrically active and respond to ionic signals from themselves and their neighbors.
In the early part of the 21st century, a number of new molecular techniques were developed that allowed bioelectric pathways to be investigated with a high degree of mechanistic resolution, and to be linked to canonical molecular cascades. These include:
Pharmacological screens to identify endogenous channels and pumps responsible for specific patterning events;
Voltage-sensitive fluorescent reporter dyes and genetically encoded fluorescent voltage indicators for the characterization of the bioelectric state in vivo;
Panels of well-characterized dominant ion channels that can be misexpressed in cells of interest to alter the bioelectric state in desired ways; and
Computational platforms that are coming on-line to assist in building predictive models of bioelectric dynamics in tissues.
Compared with electrode-based techniques, molecular probes provide higher spatial resolution and facilitate dynamic analysis over time. Although calibration or titration can be possible, molecular probes are typically semi-quantitative, whereas electrodes provide absolute bioelectric values. Another advantage of fluorescence and other probes is their less-invasive nature and spatial multiplexing, enabling the simultaneous monitoring of large areas of embryonic or other tissues in vivo during normal or pathological patterning processes.
Roles in organisms
Early development
Work in model systems such as Xenopus laevis and zebrafish has revealed a role for bioelectric signaling in the development of the heart, face, eye, brain, and other organs. Screens have identified roles for ion channels in the size control of structures such as the zebrafish fin, while focused gain-of-function studies have shown that body parts can be re-specified at the organ level – for example, creating entire eyes in gut endoderm. As in the brain, developmental bioelectrics can integrate information across significant distances in the embryo, such as the control of brain size by the bioelectric state of ventral tissue, and the control of tumorigenesis at the site of oncogene expression by the bioelectric state of remote cells.
Human disorders, as well as numerous mouse mutants, show that bioelectric signaling is important for human development (Tables 1 and 2). These effects are pervasively linked to channelopathies, human disorders that result from mutations that disrupt ion channels.
Several channelopathies result in morphological abnormalities or congenital birth defects in addition to symptoms that affect muscle and/or neurons. For example, mutations that disrupt an inwardly rectifying potassium channel, Kir2.1, cause dominantly inherited Andersen-Tawil syndrome (ATS). ATS patients experience periodic paralysis, cardiac arrhythmias, and multiple morphological abnormalities that can include cleft or high-arched palate, cleft or thin upper lip, flattened philtrum, micrognathia, dental oligodontia, enamel hypoplasia, delayed dentition eruption, malocclusion, broad forehead, wide-set eyes, low-set ears, syndactyly, clinodactyly, brachydactyly, and dysplastic kidneys. Mutations that disrupt another inwardly rectifying K+ channel, Girk2, encoded by KCNJ6, cause Keppen-Lubinsky syndrome, which includes microcephaly, a narrow nasal bridge, a high-arched palate, and severe generalized lipodystrophy (failure to generate adipose tissue). KCNJ6 lies in the Down syndrome critical region, such that duplications that include this region lead to craniofacial and limb abnormalities, while duplications that exclude it do not lead to morphological symptoms of Down syndrome. Mutations in KCNH1, a voltage-gated potassium channel, lead to Temple-Baraitser (also known as Zimmermann-Laband) syndrome. Common features of Temple-Baraitser syndrome include absent or hypoplastic finger and toe nails and phalanges, and joint instability. Craniofacial defects associated with mutations in KCNH1 include cleft or high-arched palate, hypertelorism, dysmorphic ears, dysmorphic nose, gingival hypertrophy, and an abnormal number of teeth.
Mutations in CaV1.2, a voltage-gated Ca2+ channel, lead to Timothy syndrome, which causes severe cardiac arrhythmia (long QT) along with syndactyly and craniofacial defects similar to those of Andersen-Tawil syndrome, including cleft or high-arched palate, micrognathia, low-set ears, syndactyly, and brachydactyly. While these channelopathies are rare, they show that functional ion channels are important for development. Furthermore, in utero exposure to anti-epileptic medications that target some ion channels also causes an increased incidence of birth defects, such as oral clefts. The effects of both genetic and exogenous disruption of ion channels lend insight into the importance of bioelectric signaling in development.
Wound healing and cell guidance
One of the best-understood roles for bioelectric gradients is the tissue-level endogenous electric fields utilized during wound healing. Wound-associated electric fields are challenging to study because, compared with nerve impulses and muscle contraction, they are weak, fluctuate slowly, and lack immediately obvious biological responses. The development of the vibrating and glass microelectrodes demonstrated that wounds indeed produce and, importantly, sustain measurable electric currents and electric fields. These techniques allow further characterization of the wound electric fields/currents at corneal and skin wounds, which show active spatial and temporal features, suggesting active regulation of these electrical phenomena. For example, the wound electric currents are always strongest at the wound edge and gradually increase to reach a peak about one hour after injury. In diabetic animals, the wound electric fields are significantly compromised. Understanding the mechanisms of generation and regulation of the wound electric currents/fields is expected to reveal new approaches to manipulating the electrical aspect for better wound healing.
How are the electric fields at a wound produced? Epithelia actively pump and differentially segregate ions. In the corneal epithelium, for example, Na+ and K+ are transported inwards from the tear fluid to the extracellular fluid, and Cl− is transported out of the extracellular fluid into the tear fluid. The epithelial cells are connected by tight junctions, forming the major electrically resistive barrier and thus establishing an electrical gradient across the epithelium – the transepithelial potential (TEP). Breaking the epithelial barrier, as occurs in any wound, creates a hole that breaches the high electrical resistance established by the tight junctions in the epithelial sheet, short-circuiting the epithelium locally. The TEP therefore drops to zero at the wound. However, normal ion transport continues in unwounded epithelial cells beyond the wound edge (typically <1 mm away), driving positive charge flow out of the wound and establishing a steady, laterally oriented electric field (EF) with the cathode at the wound. Skin also generates a TEP, and when a skin wound is made, similar wound electric currents and fields arise until the epithelial barrier function recovers to terminate the short-circuit at the wound. When wound electric fields are manipulated with pharmacological agents that either stimulate or inhibit the transport of ions, the wound electric fields also increase or decrease, respectively. Wound healing in corneal wounds can accordingly be sped up or slowed down.
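This generation mechanism can be caricatured as a lumped circuit: the intact epithelium is a battery (the TEP) behind a source resistance, and the wound is a shunt whose resistance collapses when tight junctions are breached. In the sketch below, all component values are invented for illustration; it reproduces the qualitative behavior described above, with the local potential collapsing and the wound current rising as the shunt resistance falls.

```python
def wound_potential_and_current(tep_mV, r_source_kohm, r_wound_kohm):
    """Voltage-divider caricature of a wounded epithelium.

    tep_mV: transepithelial potential of intact tissue (the 'battery').
    r_source_kohm: effective resistance of the ion-pumping epithelium.
    r_wound_kohm: shunt resistance of the wound path (low when breached).
    Returns (local potential in mV, wound current in uA).
    """
    v_local = tep_mV * r_wound_kohm / (r_source_kohm + r_wound_kohm)
    i_wound = tep_mV / (r_source_kohm + r_wound_kohm)  # mV / kohm = uA
    return v_local, i_wound

TEP = 40.0        # mV, invented
R_SOURCE = 10.0   # kohm, invented

for r_wound in (1000.0, 10.0, 0.1):   # intact -> freshly wounded barrier
    v, i = wound_potential_and_current(TEP, R_SOURCE, r_wound)
    print(f"R_wound={r_wound:7.1f} kohm: local TEP={v:5.1f} mV, current={i:5.2f} uA")
```

As the barrier reseals, the shunt resistance rises again, the short-circuit current subsides, and the TEP is restored, mirroring the recovery of barrier function described above.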
How do electric fields affect wound healing? To heal wounds, cells surrounding the wound must migrate and grow directionally into the wound to cover the defect and restore the barrier. Cells important for wound healing respond remarkably well to applied electric fields of the same strength as those measured at wounds. The whole gamut of cell types, and their responses following injury, are affected by physiological electric fields. These include migration and division of epithelial cells, sprouting and extension of nerves, and migration of leukocytes and endothelial cells. The best-studied cellular behavior is the directional migration of epithelial cells in electric fields – electrotaxis. The epithelial cells migrate directionally toward the negative pole (cathode), which at a wound is the polarity of the endogenous vectorial electric fields in the epithelium, pointing (positive to negative) toward the wound center. Epithelial cells of the cornea, keratinocytes from the skin, and many other cell types show directional migration at electric field strengths as low as a few mV mm−1. Large sheets of monolayer epithelial cells, and sheets of stratified multilayered epithelial cells, also migrate directionally. Such collective movement closely resembles what happens during wound healing in vivo, where cell sheets move collectively into the wound bed to cover the wound and restore the barrier function of the skin or cornea.
How cells sense such minute extracellular electric fields remains largely elusive. Recent research has started to identify some genetic, signaling and structural elements underlying how cells sense and respond to small physiological electric fields. These include ion channels, intracellular signaling pathways, membrane lipid rafts, and electrophoresis of cellular membrane components.
Limb regeneration in animals
In the early 20th century, Albert Mathews seminally correlated the regeneration of a cnidarian polyp with the potential difference between polyp and stolon surfaces, and affected regeneration by imposing countercurrents. Amedeo Herlitzka, following in the wound electric current footsteps of his mentor, du Bois-Reymond, theorized that electric currents play an early role in regeneration, perhaps initiating cell proliferation. Using electric fields overriding endogenous ones, Marsh and Beams astoundingly generated double-headed planarians and even reversed the primary body polarity entirely, with tails growing where a head previously existed. After these seed studies, variations of the idea that bioelectricity could sense injury and trigger, or at least be a major player in, regeneration have been pursued over the decades up to the present day. A potential explanation lies in resting potentials (primarily Vmem and TEP), which can be, at least in part, dormant sensors (alarms) ready to detect local damage and effectors (triggers) ready to react to it.
Following the relative success of electric stimulation on non-permissive frog leg regeneration using an implanted bimetallic rod in the late 1960s, the bioelectric extracellular aspect of amphibian limb regeneration was extensively dissected over the next decades. Definitive descriptive and functional physiological data were made possible by the development of the ultra-sensitive vibrating probe and improved application devices. Amputation invariably leads to a skin-driven outward current and a consequent lateral electric field setting the cathode at the wound site. Although initially pure ion leakage, an active component eventually takes over, and blocking ion translocators typically impairs regeneration. Using biomimetic exogenous electric currents and fields, partial regeneration was achieved, which typically included tissue growth and increased neuronal tissue. Conversely, precluding or reversing the endogenous electric current and fields impairs regeneration. These studies in amphibian limb regeneration, related studies in lampreys and mammals, and studies of bone fracture healing and in vitro work led to the general rule that migrating (such as keratinocytes, leucocytes, and endothelial cells) and outgrowing (such as axons) cells contributing to regeneration undergo electrotaxis towards the cathode (the original injury site). Congruently, an anode is associated with tissue resorption or degeneration, as occurs in impaired regeneration and osteoclastic resorption in bone. Despite these efforts, the promise of significant epimorphic regeneration in mammals remains a major frontier for future efforts, which include the use of wearable bioreactors to provide an environment within which pro-regenerative bioelectric states can be driven, and continued efforts at electrical stimulation.
Recent molecular work has identified proton and sodium flux as being important for tail regeneration in Xenopus tadpoles, and shown that regeneration of the entire tail (with spinal cord, muscle, etc.) could be triggered in a range of normally non-regenerative conditions by molecular-genetic, pharmacological, or optogenetic methods. In planaria, work on bioelectric mechanisms has revealed control of stem cell behavior, size control during remodeling, anterior-posterior polarity, and head shape. Gap junction-mediated alteration of physiological signaling produces two-headed worms in Dugesia japonica; remarkably, these animals continue to regenerate as two-headed in future rounds of regeneration months after the gap junction-blocking reagent has left the tissue. This stable, long-term alteration of the anatomical layout to which animals regenerate, without genomic editing, is an example of epigenetic inheritance of body pattern, and is also the only available "strain" of planarian species exhibiting an inherited anatomical change that is different from the wild type.
Cancer
Defection of cells from the normally tight coordination of activity towards an anatomical structure results in cancer; it is thus no surprise that bioelectricity – a key mechanism for coordinating cell growth and patterning – is a target often implicated in cancer and metastasis. Indeed, it has long been known that gap junctions have a key role in carcinogenesis and progression. Ion channels can behave as oncogenes and are thus suitable as novel drug targets. Recent work in amphibian models has shown that depolarization of the resting potential can trigger metastatic behavior in normal cells, while hyperpolarization (induced by ion channel misexpression, drugs, or light) can suppress tumorigenesis induced by the expression of human oncogenes. Depolarization of the resting potential appears to be a bioelectric signature by which incipient tumor sites can be detected non-invasively. Refinement of the bioelectric signature of cancer in biomedical contexts, as a diagnostic modality, is one of the possible applications of this field. Excitingly, the ambivalence of polarity – depolarization as marker and hyperpolarization as treatment – makes it conceptually possible to derive theragnostic (a portmanteau of therapeutics and diagnostics) approaches, designed to simultaneously detect and treat early tumors, in this case based on the normalization of the membrane polarization.
Pattern regulation
Recent experiments using ion channel opener/blocker drugs, as well as dominant ion channel misexpression, in a range of model species have shown that bioelectricity, specifically voltage gradients, instructs not only stem cell behavior but also large-scale patterning. Patterning cues are often mediated by spatial gradients of cell resting potentials, or Vmem, which can be transduced into second-messenger cascades and transcriptional changes by a handful of known mechanisms. These potentials are set by the function of ion channels and pumps, and shaped by gap junctional connections which establish developmental compartments (isopotential cell fields). Because both gap junctions and ion channels are themselves voltage-sensitive, cell groups implement electric circuits with rich feedback capabilities. The outputs of developmental bioelectric dynamics in vivo represent large-scale patterning decisions such as the number of heads in planaria, the shape of the face in frog development, and the size of tails in zebrafish. Experimental modulation of endogenous bioelectric prepatterns has enabled converting body regions (such as the gut) to a complete eye, inducing regeneration of appendages such as tadpole tails in non-regenerative contexts, and converting flatworm head shapes and contents to patterns appropriate to other species of flatworm, despite a normal genome. Recent work has shown the use of physiological modeling environments for identifying predictive interventions to target bioelectric states for the repair of embryonic brain defects under a range of genetic and pharmacologically induced teratologies.
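One way to see how gap junctions carve tissue into isopotential compartments is to treat a row of cells as resting-potential sources coupled to their neighbours through a junctional conductance and relax the network to steady state. The sketch below is an invented toy model (cell count, conductances, and target potentials are all arbitrary), not a reproduction of any published platform.

```python
import numpy as np

N_CELLS = 20
G_LEAK = 1.0   # conductance pulling each cell toward its own channel-set potential
# Two intrinsic 'compartments' of target potentials (mV): hyper- vs depolarized.
v_target = np.where(np.arange(N_CELLS) < N_CELLS // 2, -60.0, -20.0)

def steady_state_vmem(g_gap, n_iter=20_000):
    """Steady-state Vmem of a 1-D cell chain coupled by gap junctions
    of conductance g_gap, found by Jacobi relaxation."""
    v = v_target.copy()
    for _ in range(n_iter):
        left = np.r_[v[0], v[:-1]]    # left neighbor (edge cells: themselves)
        right = np.r_[v[1:], v[-1]]   # right neighbor
        v = (G_LEAK * v_target + g_gap * (left + right)) / (G_LEAK + 2 * g_gap)
    return v

for g_gap in (0.05, 50.0):   # weakly vs strongly coupled tissue
    print(f"g_gap={g_gap:5.2f}: {np.round(steady_state_vmem(g_gap), 1)}")
```

With weak coupling, the chain preserves a sharp boundary between a hyperpolarized and a depolarized domain; with strong coupling, it merges into a single isopotential field. This is the kind of switch that gap-junction-blocking reagents exploit in the planarian experiments described above.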
Future research
Life is ultimately an electrochemical enterprise, and research in this field is progressing along several frontiers. First is the reductive program of understanding how bioelectric signals are produced, how voltage changes in the cell membrane are able to regulate cell behavior, and what the genetic and epigenetic downstream targets of bioelectric signals are. A few mechanisms that transduce bioelectric change into alterations of gene expression are already known, including the bioelectric control of the movement of small second-messenger molecules, such as serotonin and butyrate, through cells, and voltage-sensitive phosphatases, among others. Also known are numerous gene targets of voltage signaling, such as Notch, BMP, FGF, and HIF-1α. Thus, the proximal mechanisms of bioelectric signaling within single cells are becoming well understood, and advances in optogenetics and magnetogenetics continue to facilitate this research program. More challenging, however, is the integrative program of understanding how specific patterns of bioelectric dynamics help control the algorithms that accomplish large-scale pattern regulation (regeneration and development of complex anatomy). The incorporation of bioelectrics with chemical signaling in the emerging field of probing cell sensory perception and decision-making is an important frontier for future work.
Bioelectric modulation has shown control over complex morphogenesis and remodeling, not merely setting individual cell identity. Moreover, a number of the key results in this field have shown that bioelectric circuits are non-local – regions of the body make decisions based on bioelectric events at a considerable distance. Such non-cell-autonomous events suggest distributed network models of bioelectric control; new computational and conceptual paradigms may need to be developed to understand spatial information processing in bioelectrically active tissues. It has been suggested that results from the fields of primitive cognition and unconventional computation are relevant to the program of cracking the bioelectric code. Finally, efforts in biomedicine and bioengineering are developing applications such as wearable bioreactors for delivering voltage-modifying reagents to wound sites, and ion channel-modifying drugs (a kind of electroceutical) for repair of birth defects and regenerative repair. Synthetic biologists are likewise starting to incorporate bioelectric circuits into hybrid constructs.
Table 1: Ion Channels and Pumps Implicated in Patterning
Table 2: Gap Junctions Implicated in Patterning
Table 3: Ion Channel Oncogenes
References
External links
Biophysics
Electricity
Heterotrophic nutrition
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, heterotrophs cannot make their own food; they must take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi are saprotrophic, meaning they secrete enzymes extracellularly onto their food, breaking it down into smaller, soluble molecules that can diffuse back into the fungus.
Description
All eukaryotes except green plants and algae are unable to manufacture their own food: they obtain food from other organisms. This mode of nutrition is known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds that are capable of being absorbed (digestion). The soluble products of digestion are then broken down for the release of energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms exhibit only four types of nutrition.
Footnotes
References
Trophic ecology
Biological interactions
Ecological systems theory
Ecological systems theory is a broad term used to capture the theoretical contributions of developmental psychologist Urie Bronfenbrenner. Bronfenbrenner developed the foundations of the theory throughout his career, published a major statement of the theory in American Psychologist, articulated it in a series of propositions and hypotheses in his most cited book, The Ecology of Human Development, and further developed it in The Bioecological Model of Human Development and later writings. A primary contribution of ecological systems theory was to systemically examine contextual variability in developmental processes. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms.
Overview
Ecological systems theory describes a scientific approach to studying lifespan development that emphasizes the interrelationship of different developmental processes (e.g., cognitive, social, biological). It is characterized by its emphasis on naturalistic and quasi-experimental studies, although several important studies using this framework use experimental methodology. Although developmental processes are thought to be universal, they are thought to (a) show contextual variability in their likelihood of occurring, (b) occur in different constellations in different settings and (c) affect different people differently. Because of this variability, scientists working within this framework use individual and contextual variability to provide insight into these universal processes.
The foundations of ecological systems theory can be seen throughout Bronfenbrenner's career. For example, in the 1950s he analyzed historical and social-class variations in parenting practices, in the 1960s he wrote an analysis of gender differences focusing on the different cultural meanings of the same parenting practices for boys and girls, and in the 1970s he compared childrearing in the US and USSR, focusing on how cultural differences in the concordance of values across social institutions change parental influences.
The formal development of ecological systems theory occurred in three major stages. A major statement of the theory was published in American Psychologist. Bronfenbrenner critiqued then-current methods of studying children in laboratories as providing a limited window on development, calling it "the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time" (p. 513) and calling for more "ecologically valid" studies of developing individuals in their natural environment. For example, he argued that laboratory studies of children provided insight into their behavior in an unfamiliar ("strange") setting that had limited generalizability to their behavior in more familiar environments, such as home or school. The Ecology of Human Development articulated a series of definitions, propositions and hypotheses that could be used to study human development. This work categorized developmental processes, beginning with genetic and personal characteristics, through proximal influences that the developing person interacted with directly (e.g., social relationships), to influences such as parents' work, government policies or cultural value systems that affected them indirectly. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms. The final form of the theory, developed in conjunction with Stephen Ceci, was called the Bioecological Model of Human Development and addresses critiques that previous statements of the theory under-emphasized individual differences and efficacy. Developmental processes were conceived of as co-occurring in niches that were lawfully defined and reinforcing. Because of this, Bronfenbrenner was a strong proponent of using social policy interventions as both a way of using science to improve child well-being and as an important scientific tool. Early examples of the application of ecological systems theory are evident in Head Start.
The five systems
Microsystem: Refers to the institutions and groups that most immediately and directly impact the child's development including: family, school, siblings, neighborhood, and peers.
Mesosystem: Consists of interconnections between the microsystems, for example between the family and teachers or between the child's peers and the family.
Exosystem: Involves links between social settings that do not involve the child. For example, a child's experience at home may be influenced by their parent's experiences at work. A parent might receive a promotion that requires more travel, which in turn increases conflict with the other parent resulting in changes in their patterns of interaction with the child.
Macrosystem: Describes the overarching culture that influences the developing child, as well as the microsystems and mesosystems embedded in those cultures. Cultural contexts can differ based on geographic location, socioeconomic status, poverty, and ethnicity. Members of a cultural group often share a common identity, heritage, and values. Macrosystems evolve across time and from generation to generation.
Chronosystem: Consists of the pattern of environmental events and transitions over the life course, as well as changing socio-historical circumstances. For example, researchers have found that the negative effects of divorce on children often peak in the first year after the divorce. By two years after the divorce, family interaction is less chaotic and more stable. An example of changing sociohistorical circumstances is the increase in opportunities for women to pursue a career during the last thirty years.
Later work by Bronfenbrenner considered the role of biology in this model as well; thus the theory has sometimes been called the bioecological model.
Per this theoretical construction, each system contains roles, norms and rules which may shape psychological development. For example, an inner-city family faces many challenges which an affluent family in a gated community does not, and vice versa. The inner-city family is more likely to experience environmental hardships, like crime and squalor. On the other hand, the sheltered family is more likely to lack the nurturing support of extended family.
Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development, has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of his groundbreaking work in human ecology, these environments – from the family to economic and political structures – have come to be viewed as part of the life course from childhood through adulthood.
Bronfenbrenner has identified Soviet developmental psychologist Lev Vygotsky and German-born psychologist Kurt Lewin as important influences on his theory.
Bronfenbrenner's work provides one of the foundational elements of the ecological counseling perspective, as espoused by Robert K. Conyne, Ellen Cook, and the University of Cincinnati Counseling Program.
There are many different theories related to human development. Human ecology theory emphasizes environmental factors as central to development.
See also
Bioecological model
Ecosystem
Ecosystem ecology
Systems ecology
Systems psychology
Theoretical ecology
Urie Bronfenbrenner
References
The diagram of the ecosystemic model was created by Buehler (2000) as part of a dissertation on assessing interactions between a child, their family, and the school and medical systems.
Further reading
Urie Bronfenbrenner. (2009). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, Massachusetts: Harvard University Press.
Dede Paquette & John Ryan. (2001). Bronfenbrenner’s Ecological Systems Theory
Marlowe E. Trance & Kerstin O. Flores. (2014). "Child and Adolescent Development", Vol. 32, No. 5, 9407
Ecological Systems Review
The ecological framework facilitates organizing information about people and their environment in order to understand their interconnectedness. Individuals move through a series of life transitions, all of which necessitate environmental support and coping skills. Social problems involving health care, family relations, inadequate income, mental health difficulties, conflicts with law enforcement agencies, unemployment, educational difficulties, and so on can all be subsumed under the ecological model, which would enable practitioners to assess factors that are relevant to such problems (Hepworth, Rooney, Rooney, Strom-Gottfried, & Larsen, 2010, p. 16). Thus, examining the ecological contexts of parenting success of children with disabilities is particularly important. Utilizing Bronfenbrenner's (1977, 1979) ecological framework, this article explores parenting success factors at the micro- (i.e., parenting practice, parent-child relations), meso- (i.e., caregivers' marital relations, religious social support), and macro-system levels (i.e., cultural variations, racial and ethnic disparities, and health care delivery system) of practice.
Developmental psychology
Human ecology
Psychological schools
Psychological theories
Systems psychology
Systems theory
Steps to an Ecology of Mind
Steps to an Ecology of Mind is a collection of Gregory Bateson's short works over his long and varied career. Subject matter includes essays on anthropology, cybernetics, psychiatry, and epistemology. It was originally published by Ballantine Books in 1972 (republished 2000 with a foreword by Mary Catherine Bateson).
Part I: Metalogues
The book begins with a series of metalogues, which take the form of conversations with his daughter Mary Catherine Bateson. The metalogues are mostly thought exercises with titles such as "What is an Instinct" and "How Much Do You Know." In the metalogues, the playful dialectic structure itself is closely related to the subject matter of the piece.
DEFINITION: A metalogue is a conversation about some problematic subject. This conversation should be such that not only do the participants discuss the problem but the structure of the conversation as a whole is also relevant to the same subject. Only some of the conversations here presented achieve this double format.
Notably, the history of evolutionary theory is inevitably a metalogue between man and nature, in which the creation and interaction of ideas must necessarily exemplify evolutionary process.
Why Do Things Get in a Muddle? (1948, previously unpublished)
Why Do Frenchmen? (1951, Impulse; 1953, ETC: A Review of General Semantics, Vol. X)
About Games and Being Serious (1953, ETC: A Review of General Semantics, Vol. X)
How Much Do You Know? (1953, ETC: A Review of General Semantics, Vol. X)
Why Do Things Have Outlines? (1953, ETC: A Review of General Semantics, Vol. XI)
Why a Swan? (1954, Impulse)
What Is an Instinct? (1969, Sebeok, Approaches to Animal Communication)
Part II: Form and Pattern in Anthropology
Part II is a collection of anthropological writings, many of which were written while he was married to Margaret Mead.
Culture Contact and Schismogenesis (1935, Man, Article 199, Vol. XXXV)
Experiments in Thinking About Observed Ethnological Material (1940, Seventh Conference on Methods in Philosophy and the Sciences; 1941, Philosophy of Science, Vol. 8, No. 1)
Morale and National Character (1942, Civilian Morale, Watson)
Bali: The Value System of a Steady State (1949, Social Structure: Studies Presented to A.R. Radcliffe-Brown, Fortes)
Style, Grace, and Information in Primitive Art (1967, A Study of Primitive Art, Forge)
Part III: Form and Pathology in Relationship
Part III is devoted to the theme of "Form and Pathology in Relationships." His essay on alcoholism examines the alcoholic state of mind, and the methodology of Alcoholics Anonymous within the framework of the then-nascent field of cybernetics.
Social Planning and the Concept of Deutero-Learning (a comment on Margaret Mead's article "The Comparative Study of Culture and the Purposive Cultivation of Democratic Values", 1942, Science, Philosophy and Religion, Second Symposium)
A Theory of Play and Fantasy (1954, A.P.A. Regional Research Conference in Mexico City, March 11; 1955, A.P.A. Psychiatric Research Reports)
Epidemiology of a Schizophrenia (edited version of a talk, "How the Deviant Sees His Society," from 1955, at a conference on "The Epidemiology of Mental Health," Brighton, Utah)
Toward a Theory of Schizophrenia (1956, Behavioral Science, Vol. I, No. 4)
The Group Dynamics of Schizophrenia (1960)
Minimal Requirements for a Theory of Schizophrenia (1959)
Double Bind, 1969 (1969)
The Logical Categories of Learning and Communication (1968)
The Cybernetics of "Self": A Theory of Alcoholism (1971)
Part IV: Biology and Evolution
On Empty-Headedness Among Biologists and State Boards of Education (in BioScience, Vol. 20, 1970)
The Role of Somatic Change in Evolution (in the journal Evolution, Vol. 17, 1963)
Problems in Cetacean and Other Mammalian Communication (appeared as Chapter 25, pp. 569–799, in Whales, Dolphins and Porpoises, edited by Kenneth S. Norris, University of California Press, 1966)
A Re-examination of "Bateson's Rule" (accepted for publication in the Journal of Genetics)
Part V: Epistemology and Ecology
Cybernetic Explanation (from the American Behavioral Scientist, Vol. 10, No. 8, April 1967, pp. 29–32)
Redundancy and Coding (appeared as Chapter 22 in Animal Communication: Techniques of Study and Results of Research, edited by Thomas A. Sebeok, 1968, Indiana University Press)
Conscious Purpose Versus Nature (this lecture was given in August, 1968, to the London Conference on the Dialectics of Liberation, appearing in a book of the same name, Penguin Books)
Effects of Conscious Purpose on Human Adaptation (prepared as the Bateson's position paper for Wenner-Gren Foundation Conference on "Effects of Conscious Purpose on Human Adaptation". Bateson chaired the conference held in Burg Wartenstein, Austria, July 17–24, 1968)
Form, Substance, and Difference (the Nineteenth Annual Korzybski Memorial Lecture, January 9, 1970, under the auspices of the Institute of General Semantics; appeared in the General Semantics Bulletin, No. 37, 1970)
Part VI: Crisis in the Ecology of Mind
From Versailles to Cybernetics (previously unpublished. This lecture was given 21 April 1966, to the "Two Worlds Symposium" at (CSU) Sacramento State College)
Pathologies of Epistemology (given at the Second Conference on Mental Health in Asia and the Pacific, 1969, at the East–West Center, Hawaii, appearing in the report of that conference)
The Roots of Ecological Crisis (testimony on behalf of the University of Hawaii Committee on Ecology and Man, presented in March 1970)
Ecology and Flexibility in Urban Civilization (written for a conference convened by Bateson in October 1970 on "Restructuring the Ecology of a Great City" and subsequently edited)
See also
Double bind
Information ecology
Philosophy of mind
Social sustainability
Systems philosophy
Systems theory
Notes and references
1972 books
Anthropology books
Cognitive science literature
Systems theory books
University of Chicago Press books
R/K selection theory
In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between the quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of reduced individual parental investment (r-strategists), or on a reduced quantity of offspring with a corresponding increased parental investment (K-strategists), varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as "cheap" or "expensive", a comment on the expendable nature of the offspring and the parental commitment made. The stability of the environment can predict whether many expendable offspring or fewer offspring of higher quality would lead to higher reproductive success. An unstable environment encourages the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because it is more likely to survive to adulthood.
The terminology of r/K-selection was coined by the ecologists Robert MacArthur and E. O. Wilson in 1967 based on their work on island biogeography, although the concept of the evolution of life history strategies has a longer history (see e.g. plant strategies).
The theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies. A life-history paradigm has replaced the r/K selection paradigm but continues to incorporate its important themes as a subset of life history theory. Some scientists now prefer to use the terms fast versus slow life history as replacements for, respectively, r versus K reproductive strategies.
Overview
In r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection. These terms, r and K, are drawn from standard ecological formulae, as illustrated in the simplified Verhulst model of population dynamics:

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

where N is the population, r is the maximum growth rate, K is the carrying capacity of the local environment, and dN/dt (the derivative of population size N with respect to time t) is the rate of change in population with time. Thus, the equation relates the growth rate of the population to the current population size, incorporating the effect of the two constant parameters r and K. (Note that a decrease is negative growth.) The choice of the letter K came from the German Kapazitätsgrenze (capacity limit), while r came from rate.
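The trade-off between the two parameters can be seen by integrating the Verhulst equation numerically for a hypothetical r-strategist (high r, low K) and K-strategist (low r, high K). All parameter values here are invented for illustration.

```python
def logistic_growth(n0, r, k, t_end=50.0, dt=0.01):
    """Euler integration of the Verhulst model dN/dt = r*N*(1 - N/K)."""
    n, trajectory = float(n0), []
    for step in range(int(t_end / dt) + 1):
        if step % int(10 / dt) == 0:        # record every 10 time units
            trajectory.append(round(n, 1))
        n += r * n * (1 - n / k) * dt
    return trajectory

# Hypothetical strategists: fast growth to a low ceiling vs slow growth to a high one.
print("r-strategist:", logistic_growth(n0=2, r=1.5, k=100))
print("K-strategist:", logistic_growth(n0=2, r=0.1, k=10_000))
```

The r-strategist population explodes to its modest ceiling within a few time units, while the K-strategist grows slowly toward a far larger carrying capacity.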
r-selection
r-selected species are those that emphasize high growth rates, typically exploit less-crowded ecological niches, and produce many offspring, each of which has a relatively low probability of surviving to adulthood (i.e., high r, low K). A typical r species is the dandelion (genus Taraxacum).
In unstable or unpredictable environments, r-selection predominates due to the ability to reproduce rapidly. There is little advantage in adaptations that permit successful competition with other organisms, because the environment is likely to change again. Among the traits that are thought to characterize r-selection are high fecundity, small body size, early onset of maturity, short generation time, and the ability to disperse offspring widely.
Organisms whose life history is subject to r-selection are often referred to as r-strategists or r-selected. Organisms that exhibit r-selected traits can range from bacteria and diatoms, to insects and grasses, to various semelparous cephalopods, certain families of birds such as dabbling ducks, and small mammals, particularly rodents.
K-selection
By contrast, K-selected species display traits associated with living at densities close to carrying capacity and typically are strong competitors in such crowded niches, investing more heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., low r, high K). In the scientific literature, r-selected species are occasionally referred to as "opportunistic", whereas K-selected species are described as "equilibrium".
In stable or predictable environments, K-selection predominates, as the ability to compete successfully for limited resources is crucial, and populations of K-selected organisms typically are very constant in number and close to the maximum that the environment can bear (unlike r-selected populations, where population sizes can change much more rapidly).
Traits that are thought to be characteristic of K-selection include large body size, long life expectancy, and the production of fewer offspring, which often require extensive parental care until they mature. Organisms whose life history is subject to K-selection are often referred to as K-strategists or K-selected. Organisms with K-selected traits include large organisms such as elephants, humans, and whales, but also smaller long-lived organisms such as Arctic terns, parrots, and eagles.
Continuous spectrum
Although some organisms are identified as primarily r- or K-strategists, the majority of organisms do not follow this pattern. For instance, trees have traits such as longevity and strong competitiveness that characterise them as K-strategists. In reproduction, however, trees typically produce thousands of offspring and disperse them widely, traits characteristic of r-strategists.
Similarly, reptiles such as sea turtles display both r- and K-traits: although sea turtles are large organisms with long lifespans (provided they reach adulthood), they produce large numbers of unnurtured offspring.
The r/K dichotomy can be re-expressed as a continuous spectrum using the economic concept of discounted future returns, with r-selection corresponding to large discount rates and K-selection corresponding to small discount rates.
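A toy calculation makes the discounting analogy concrete: many cheap offspring pay off quickly but survive poorly, whereas few expensive offspring pay off reliably but late. The brood sizes, survival probabilities, and maturation times below are invented purely to show how the preferred strategy flips with the discount rate.

```python
def discounted_return(n_offspring, survival_prob, years_to_maturity, discount_rate):
    """Expected number of surviving offspring, discounted back to the present."""
    return n_offspring * survival_prob / (1 + discount_rate) ** years_to_maturity

# Hypothetical strategies: many cheap offspring vs a few heavily provisioned ones.
r_strategy = dict(n_offspring=1000, survival_prob=0.002, years_to_maturity=1)
k_strategy = dict(n_offspring=3, survival_prob=0.9, years_to_maturity=10)

for rate in (0.01, 0.30):   # patient (stable) vs impatient (unstable) environment
    r_val = discounted_return(discount_rate=rate, **r_strategy)
    k_val = discounted_return(discount_rate=rate, **k_strategy)
    winner = "r" if r_val > k_val else "K"
    print(f"discount={rate:.2f}: r-return={r_val:.2f}, K-return={k_val:.2f} -> {winner} wins")
```

With a small discount rate (a stable, predictable environment), the slow but reliable K strategy yields the higher present value; with a large discount rate (an unstable environment), the fast r strategy wins.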
Ecological succession
In areas of major ecological disruption or sterilisation (such as after a major volcanic eruption, as at Krakatoa or Mount St. Helens), r- and K-strategists play distinct roles in the ecological succession that regenerates the ecosystem. Because of their higher reproductive rates and ecological opportunism, primary colonisers typically are r-strategists and they are followed by a succession of increasingly competitive flora and fauna. The ability of an environment to increase energetic content, through photosynthetic capture of solar energy, increases with the increase in complex biodiversity as species proliferate to reach a peak possible with K-strategies.
Eventually a new equilibrium is approached (sometimes referred to as a climax community), with r-strategists gradually being replaced by K-strategists which are more competitive and better adapted to the emerging micro-environmental characteristics of the landscape. Traditionally, biodiversity was considered to be maximized at this stage, with introductions of new species resulting in the replacement and local extinction of endemic species. However, the intermediate disturbance hypothesis posits that intermediate levels of disturbance in a landscape create patches at different levels of succession, promoting the coexistence of colonizers and competitors at the regional scale.
Application
While usually applied at the level of species, r/K selection theory is also useful in studying the evolution of ecological and life history differences between subspecies, for instance the African honey bee, A. m. scutellata, and the Italian bee, A. m. ligustica. At the other end of the scale, it has also been used to study the evolutionary ecology of whole groups of organisms, such as bacteriophages. Other researchers have proposed that the evolution of human inflammatory responses is related to r/K selection.
Some researchers, such as Lee Ellis, J. Philippe Rushton, and Aurelio José Figueredo, have attempted to apply r/K selection theory to various human behaviors, including crime, sexual promiscuity, fertility, IQ, and other traits related to life history theory. Rushton developed "differential K theory" to attempt to explain variations in behavior across human races. Differential K theory has been debunked as being devoid of empirical basis, and has also been described as a key example of scientific racism.
Status
Although r/K selection theory became widely used during the 1970s, it also began to attract more critical attention. In particular, a review in 1977 by the ecologist Stephen C. Stearns drew attention to gaps in the theory, and to ambiguities in the interpretation of empirical data for testing it.
In 1981, a review of the r/K selection literature by Parry demonstrated that there was no agreement among researchers using the theory about the definition of r- and K-selection, which led him to question whether the assumption of a relation between reproductive expenditure and the packaging of offspring was justified. A 1982 study by Templeton and Johnson showed that a population of Drosophila mercatorum under K-selection actually produced a higher frequency of traits typically associated with r-selection. Several other studies contradicting the predictions of r/K selection theory were also published between 1977 and 1994.
When Stearns reviewed the status of the theory again in 1992, he noted that from 1977 to 1982 there was an average of 42 references to the theory per year in the BIOSIS literature search service, but from 1984 to 1989 the average dropped to 16 per year and continued to decline. He concluded that r/K theory was a once-useful heuristic that no longer serves a purpose in life history theory.
More recently, the panarchy theories of adaptive capacity and resilience promoted by C. S. Holling and Lance Gunderson have revived interest in the theory, and use it as a way of integrating social systems, economics, and ecology.
Writing in 2002, Reznick and colleagues reviewed the controversy regarding r/K selection theory and concluded that it had been superseded as a paradigm by demographic approaches centred on age-specific mortality.
Alternative approaches are now available both for studying life history evolution (e.g. Leslie matrix for an age-structured population) and for density-dependent selection (e.g. variable density lottery model).
See also
Evolutionary game theory
Life history theory
Minimax/maximin strategy
Ruderal species
Semelparity and iteroparity
Survivorship curve
Trivers–Willard hypothesis
Developmental systems theory | Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors to developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue that the modern evolutionary synthesis, which treats genes and natural selection as the principal explanation of living structures, is inadequate. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold that modern evolutionary theory misconceives the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters, or that adaptation consists of evolution ‘shaping’ a more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches.
Developmental systems theory: Topics
Six Themes of DST
Joint Determination by Multiple Causes: Development is a product of multiple interacting sources.
Context Sensitivity and Contingency: Development depends on the current state of the organism.
Extended Inheritance: An organism inherits resources from the environment in addition to genes.
Development as a Process of Construction: The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.
Distributed Control: Idea that no single source of influence has central control over an organism's development.
Evolution as Construction: The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population.
A computing metaphor
To adopt a computing metaphor, the reductionists (whom developmental systems theory opposes) assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data.
Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either.
Fundamental asymmetry
For reductionists, there is a fundamental asymmetry between different causal factors, whereas DST holds that such asymmetries can be justified only by specific purposes, and its proponents argue that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes, or genetic factors of determination and environmental factors of realisation) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism.
DST approach
One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilised zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene, but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termites must adapt.
This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position and maternal effects on gene expression to epigenetic inheritance to the active construction and intergenerational transmission of enduring niches, developmental systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioural and symbolic – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. DST regards every level of biological structure as susceptible to influence from all the structures by which it is surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of molecular biology, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form of) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’.
Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation.
Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied.
Criticism
Philosopher Neven Sesardić, while not dismissive of developmental systems theory, argues that its proponents forget that the interplay between levels of interaction is ultimately an empirical issue, which cannot be settled by a priori speculation; Sesardić observes that while the emergence of lung cancer is a highly complicated process involving the combined action of many factors and interactions, it is not unreasonable to believe that smoking has an effect on developing lung cancer. Therefore, though developmental processes are highly interactive, context-dependent, and extremely complex, it is incorrect to conclude that main effects of heredity and environment are unlikely to be found in the "messiness". Sesardić argues that the idea that the effect of changing one factor always depends on what is happening in other factors is an empirical claim, and a false one: for example, the bacterium Bacillus thuringiensis produces a protein that is toxic to caterpillars. Genes from this bacterium have been placed into plants vulnerable to caterpillars, and the insects die when they eat part of the plant, as they consume the toxic protein. Thus, developmental approaches must be assessed on a case-by-case basis, and in Sesardić's view DST does not offer much if posed only in general terms. The hereditarian psychologist Linda Gottfredson differentiates the "fallacy of so-called 'interactionism'" from the technical use of gene-environment interaction to denote a non-additive environmental effect conditioned on genotype. "Interactionism's" over-generalization cannot render meaningless the attempts to identify genetic and environmental contributions. Where behavioural genetics attempts to determine the portions of variation accounted for by genetics, environmentalist-developmentalist approaches such as DST attempt to determine the typical course of human development and, she argues, erroneously conclude that the common theme is readily changed.
Another of Sesardić's arguments counters the DST claim that it is impossible to determine the contribution of different influences (genetic vs. environmental) to a trait. From that claim it necessarily follows that a trait cannot be causally attributed to the environment either, since genes and environment are inseparable in DST; yet DST, while critical of genetic heritability, advocates developmentalist research into environmental effects, a logical inconsistency. Barnes et al. made similar criticisms, observing that the innate human capacity for language (deeply genetic) does not determine the specific language spoken (a contextually environmental effect); it is therefore possible, in principle, to separate the effects of genes and environment. Similarly, Steven Pinker argues that if genes and environment could not actually be separated, then speakers would have a deterministic genetic disposition to learn a specific native language upon exposure. Though seemingly consistent with the idea of gene-environment interaction, Pinker argues this is nonetheless an absurd position, since empirical evidence shows that ancestry has no effect on language acquisition: environmental effects are often separable from genetic ones.
Related theories
Developmental systems theory is not a narrowly defined collection of ideas, and the boundaries with neighbouring models are porous. Notable related ideas (with key texts) include:
The Baldwin effect
Evolutionary developmental biology
Neural Darwinism
Probabilistic epigenesis
Relational developmental systems
See also
Systems theory
Complex adaptive system
Developmental psychobiology
The Dialectical Biologist – a 1985 book by Richard Levins and Richard Lewontin which describes a related approach.
Living systems
Bibliography
Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
Dawkins, R. (1982). The Extended Phenotype. Oxford: Oxford University Press.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Durham, N.C.: Duke University Press.
Edelman, G.M. (1987). Neural Darwinism: Theory of Neuronal Group Selection. New York: Basic Books.
Edelman, G.M. and Tononi, G. (2001). Consciousness. How Mind Becomes Imagination. London: Penguin.
Goodwin, B.C. (1995). How the Leopard Changed its Spots. London: Orion.
Goodwin, B.C. and Saunders, P. (1992). Theoretical Biology. Epigenetic and Evolutionary Order from Complex Systems. Baltimore: Johns Hopkins University Press.
Jablonka, E., and Lamb, M.J. (1995). Epigenetic Inheritance and Evolution. The Lamarckian Dimension. London: Oxford University Press.
Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
Levins, R. and Lewontin, R. (1985). The Dialectical Biologist. London: Harvard University Press.
Neumann-Held, E.M. (1999). The gene is dead – long live the gene. Conceptualizing genes the constructionist way. In P. Koslowski (ed.). Sociobiology and Bioeconomics: The Theory of Evolution in Economic and Biological Thinking, pp. 105–137. Berlin: Springer.
Waddington, C.H. (1957). The Strategy of the Genes. London: Allen and Unwin.
Further reading
Depew, D.J. and Weber, B.H. (1995). Darwinism Evolving. System Dynamics and the Genealogy of Natural Selection. Cambridge, Massachusetts: MIT Press.
Eigen, M. (1992). Steps Towards Life. Oxford: Oxford University Press.
Gray, R.D. (2000). Selfish genes or developmental systems? In Singh, R.S., Krimbas, C.B., Paul, D.B., and Beatty, J. (2000). Thinking about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge University Press: Cambridge. (184-207).
Koestler, A., and Smythies, J.R. (1969). Beyond Reductionism. London: Hutchinson.
Lehrman, D.S. (1953). A critique of Konrad Lorenz’s theory of instinctive behaviour. Quarterly Review of Biology 28: 337-363.
Thelen, E. and Smith, L.B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, Massachusetts: MIT Press.
External links
William Bechtel, Developmental Systems Theory and Beyond presentation, winter 2006.
Ecosystem model | An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system.
Using data gathered from the field, ecological relationships, such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations, are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e., a process that takes centuries in reality can be simulated in a matter of minutes in a computer model).
Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example by combining ecosystem models with archaeological models to explain the diversity and mobility of stone tools.
Types of models
There are two major types of ecological models, which are generally applied to different types of problems: (1) analytic models and (2) simulation/computational models. Analytic models are typically relatively simple (often linear) systems that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models, on the other hand, use numerical techniques to solve problems for which analytic solutions are impractical or impossible. Simulation models tend to be more widely used, and are generally considered more ecologically realistic, while analytic models are valued for their mathematical elegance and explanatory power. Ecopath is a software system that uses simulation and computational methods to model marine ecosystems; it is widely used by marine and fisheries scientists as a tool for modelling and visualising the complex relationships that exist in real-world marine ecosystems.
Model design
The process of model design begins with a specification of the problem to be solved, and the objectives for the model.
Ecological systems are composed of an enormous number of biotic and abiotic factors that interact with each other in ways that are often unpredictable, or so complex as to be impossible to incorporate into a computable model. Because of this complexity, ecosystem models typically simplify the systems they are studying to a limited number of components that are well understood, and deemed relevant to the problem that the model is intended to solve.
The process of simplification typically reduces an ecosystem to a small number of state variables and mathematical functions that describe the nature of the relationships between them. The number of ecosystem components that are incorporated into the model is limited by aggregating similar processes and entities into functional groups that are treated as a unit.
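As a toy illustration of this reduction (our sketch, not any published model), the code below collapses an ecosystem into just two state variables, a "producer" functional group and a "consumer" functional group, connected by simple rate functions; all names and parameter values are invented for the example.

```python
# Toy sketch only: an "ecosystem" reduced to two aggregated state variables
# (a producer functional group and a consumer functional group) linked by
# simple rate functions. Parameter names and values are illustrative.
def step(producers, consumers, dt=0.01,
         growth=0.5, grazing=0.02, efficiency=0.3, mortality=0.1):
    """Advance both state variables by one Euler step of length dt."""
    d_prod = growth * producers - grazing * producers * consumers
    d_cons = efficiency * grazing * producers * consumers - mortality * consumers
    return producers + dt * d_prod, consumers + dt * d_cons

p, c = 100.0, 10.0
for _ in range(5000):            # 50 time units
    p, c = step(p, c)
print(f"producers={p:.1f}, consumers={c:.1f}")
```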
After establishing the components to be modeled and the relationships between them, another important factor in ecosystem model structure is the representation of space used. Historically, models have often ignored the confounding issue of space. However, for many ecological problems spatial dynamics are an important part of the problem, with different spatial environments leading to very different outcomes. Spatially explicit models (also called "spatially distributed" or "landscape" models) attempt to incorporate a heterogeneous spatial environment into the model. A spatial model is one that has one or more state variables that are a function of space, or can be related to other spatial variables.
Validation
After construction, models are validated to ensure that the results are acceptably accurate or realistic. One method is to test the model with multiple sets of data that are independent of the actual system being studied. This is important since certain inputs can cause a faulty model to output correct results. Another method of validation is to compare the model's output with data collected from field observations. Researchers frequently specify beforehand how much of a disparity they are willing to accept between parameters output by a model and those computed from field data.
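A minimal sketch of such a check might look like the following; the 15% relative-error tolerance is a hypothetical threshold of the kind a researcher would specify in advance, not a standard value.

```python
import math

def validate(model_out, field_obs, tolerance=0.15):
    """Return True if the model's relative RMSE against field data
    falls within the pre-specified tolerance."""
    n = len(field_obs)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model_out, field_obs)) / n)
    mean_obs = sum(field_obs) / n
    return rmse / mean_obs <= tolerance

# Model predictions vs. (made-up) field observations of, say, biomass:
print(validate([9.8, 15.2, 21.0], [10.0, 14.0, 22.5]))  # True: within 15%
```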
Examples
The Lotka–Volterra equations
One of the earliest, and most well-known, ecological models is the predator-prey model of Alfred J. Lotka (1925) and Vito Volterra (1926). This model takes the form of a pair of ordinary differential equations, one representing a prey species, the other its predator.
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,

where x is the number of prey, y is the number of predators, dx/dt and dy/dt are the instantaneous growth rates of the two populations, and \alpha, \beta, \gamma and \delta are positive real parameters describing the interaction of the two species.
Volterra originally devised the model to explain fluctuations in fish and shark populations observed in the Adriatic Sea after the First World War (when fishing was curtailed). However, the equations have subsequently been applied more generally. Although simple, they illustrate some of the salient features of ecological models: modelled biological populations experience growth, interact with other populations (as either predators, prey or competitors) and suffer mortality.
A credible, simple alternative to the Lotka–Volterra predator–prey model and its common prey-dependent generalizations is the ratio-dependent or Arditi–Ginzburg model. The two are the extremes of the spectrum of predator-interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka–Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi–Ginzburg model as the first approximation.
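The following is a minimal numerical sketch of the classic Lotka–Volterra system above; the parameter values and initial populations are illustrative choices, not fitted to any real data, and SciPy's general-purpose integrator stands in for whatever solver a given modelling package would use.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: prey growth (alpha), predation (beta),
# predator reproduction per prey eaten (delta), predator death (gamma).
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5

def lotka_volterra(t, z):
    x, y = z  # x: prey, y: predators
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

t = np.linspace(0, 30, 7)
sol = solve_ivp(lotka_volterra, (0, 30), [10.0, 5.0], t_eval=t)
print(np.round(sol.y[0], 1))  # prey: rises, crashes, recovers (oscillation)
print(np.round(sol.y[1], 1))  # predators: the same cycle, lagging the prey
```

Swapping the predation term \beta x y for a response that depends on the ratio x/y would turn this into an Arditi–Ginzburg-style model; only the two rate expressions change.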
Others
The theoretical ecologist Robert Ulanowicz has used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow, and eutrophication).
Conway's Game of Life and its variations model ecosystems where the proximity of the members of a population is a factor in population growth.
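The rules are simple density-dependent birth and death: a dead cell with exactly three live neighbours becomes live, and a live cell survives only with two or three live neighbours. A compact sketch:

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life; live_cells is a set of (x, y)."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live_cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with exactly 3 neighbours, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same glider shape, shifted diagonally by (1, 1)
```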
See also
Compartmental models in epidemiology
Dynamic global vegetation model
Ecological forecasting
Gordon Arthur Riley
Land Surface Model (LSM version 1.0)
Liebig's law of the minimum
Mathematical biology
Population dynamics
Population ecology
Rapoport's rule
Scientific modelling
System dynamics
External links
Ecological modelling resources (ecobas.org)
Exposure Assessment Models United States Environmental Protection Agency
Ecotoxicology & Models (ecotoxmodels.org)
Ontogeny | Ontogeny (also ontogenesis) is the origination and development of an organism (both physical and psychological, e.g., moral development), usually from the time of fertilization of the egg to adult. The term can also be used to refer to the study of the entirety of an organism's lifespan.
Ontogeny is the developmental history of an organism within its own lifetime, as distinct from phylogeny, which refers to the evolutionary history of a species. Another way to think of ontogeny is that it is the process of an organism going through all of the developmental stages over its lifetime. The developmental history includes all the developmental events that occur during the existence of an organism, beginning with the changes in the egg at the time of fertilization and events from the time of birth or hatching and afterward (i.e., growth, remolding of body shape, development of secondary sexual characteristics, etc.). While developmental (i.e., ontogenetic) processes can influence subsequent evolutionary (e.g., phylogenetic) processes (see evolutionary developmental biology and recapitulation theory), individual organisms develop (ontogeny), while species evolve (phylogeny).
Ontogeny, embryology and developmental biology are closely related studies and those terms are sometimes used interchangeably. Aspects of ontogeny are morphogenesis, the development of form and shape of an organism; tissue growth; and cellular differentiation. The term ontogeny has also been used in cell biology to describe the development of various cell types within an organism. Ontogeny is a useful field of study in many disciplines, including developmental biology, cell biology, genetics, developmental psychology, developmental cognitive neuroscience, and developmental psychobiology. Ontogeny is used in anthropology as "the process through which each of us embodies the history of our own making".
Etymology
The word ontogeny comes from the Greek ón, meaning "a being, an individual; existence", and from the suffix -geny, from the Greek -geneia, meaning "genesis, origin, or mode of production".
History
The term ontogeny was coined by Ernst Haeckel, a German zoologist and evolutionist, in the 1860s. Haeckel, born in Germany on February 16, 1834, was also a strong supporter of Darwinism. Haeckel suggested that ontogeny briefly and sometimes incompletely recapitulated, or repeated, phylogeny in his 1866 book, Generelle Morphologie der Organismen ("General Morphology of Organisms"). Although the book was widely read, the scientific community was largely unconvinced by and uninterested in his ideas, so he produced further publications to attract more attention. In 1866, Haeckel and others imagined development as producing new structures after earlier additions to the developing organism had been established. He proposed that individual development followed the developmental stages of previous generations, that future generations would add something new to this process, and that there was a causal parallelism between an animal's ontogeny and phylogeny. In addition, Haeckel suggested a biogenetic law, that ontogeny recapitulates phylogeny, based on the idea that the successive and progressive origin of new species rested on the same laws as the successive and progressive origin of new embryonic structures. According to Haeckel, development produced novelties, and natural selection would eliminate species that had become outdated or obsolete. Although his view of development and evolution was not ultimately justifiable, later embryologists modified and built on Haeckel's proposals, showing how new morphological structures can arise through the hereditary modification of embryonic development. The marine biologist Walter Garstang reversed Haeckel's relationship between ontogeny and phylogeny, stating that ontogeny creates phylogeny rather than recapitulating it.
A seminal 1963 paper by Nikolaas Tinbergen named ontogeny as one of the four primary questions of biology, along with Julian Huxley's three others: causation, survival value and evolution. Tinbergen emphasized that the change of behavioral machinery during development was distinct from the change in behavior during development: "We can conclude that the thrush itself, i.e. its behavioral machinery, has changed only if the behavior change occurred while the environment was held constant... When we turn from description to causal analysis, and ask in what way the observed change in behavior machinery has been brought about, the natural first step is to try and distinguish between environmental influences and those within the animal... In ontogeny the conclusion that a certain change is internally controlled (is 'innate') is reached by elimination." Tinbergen was concerned that the elimination of environmental factors is difficult to establish, and that the use of the word innate is often misleading.
Developmental stages
Development of an organism happens through fertilization, cleavage, blastulation, gastrulation, organogenesis, and metamorphosis into an adult. Each species of animal has a slightly different journey through these stages, since some stages might be shorter or longer when compared to other species, and where the offspring develops is different for each animal type (e.g., in a hard egg shell, uterus, soft egg shell, on a plant leaf, etc.).
Fertilization
In humans, development starts after a sperm fertilizes an egg and the two fuse, kickstarting embryonic development. The fusion of egg and sperm into a zygote changes the surrounding membrane so that no additional sperm can penetrate the egg, preventing multiple fertilizations. Fertilization also activates the egg so it can begin undergoing cell division. Not every animal species has a sperm and an egg as such, but every species has two gametes that each contain half of the species' typical genetic material, and the membranes of these gametes fuse to begin creating an offspring.
Cleavage
Not long after successful fertilization, the zygote undergoes many mitotic (non-sexual) cell divisions. Cleavage is this process of division: the starting zygote becomes a collection of nearly identical cells, the morula, whose constituent cells are called blastomeres. Cleavage prepares the zygote to become an embryo, a stage that in humans lasts from 2 weeks to 8 weeks after conception (fertilization).
Blastulation
After the zygote has become an embryo, it continues dividing into a hollow sphere of cells, the blastula. The outer cells form a single epithelial layer, the blastoderm, which encases the fluid-filled interior, the blastocoel. The figure to the right shows the basic process, which is modified in different species. Blastulation differs slightly among species, but in mammals the eight-cell-stage embryo forms a slightly different type of blastula, called a blastocyst. Other species such as sea stars, frogs, chicks, and mice have all the same structures at this stage, yet the orientation of these features differs, and these species have additional cell types at this stage.
Gastrulation
After blastulation, the single-layered blastula expands and reorganizes into multiple layers, a gastrula (seen in the figure to the right). Reptiles, birds and mammals are triploblastic organisms, meaning the gastrula comprises three germ layers: the endoderm (inner layer), mesoderm (middle layer), and ectoderm (outer layer). As seen in the figure below, each germ layer consists of multipotent cells that become specific tissues depending on the layer, as happens in humans. The details of germ-layer differentiation vary slightly between species, because not all of the organs and tissues listed below occur in all organisms, but corresponding body systems take their place.
Organogenesis
In the figure below, the human germ layers differentiate into the specific organs and tissues they become later in life. Cells migrate to their final locations to rearrange themselves, and some organs are made of two germ layers: one for the outside, the other for the inside. The endoderm cells become the internal linings of organisms, such as the stomach, colon, small intestine, liver, and pancreas of the digestive system, and the lungs. The mesoderm gives rise to other tissues not formed by the ectoderm, such as the heart, muscles, bones, blood, dermis of the skin, bone marrow, and the urogenital system. This germ layer is more species-specific, as it is the layer of the three that distinguishes evolutionarily higher life-forms (e.g., bilaterally symmetric organisms such as humans) from lower life-forms (with radial symmetry). Lastly, the ectoderm is the outer layer of cells that becomes the epidermis and hair, and it is the precursor of the mammary glands and the central and peripheral nervous systems.
The figure above shows how the development of pig, cow, rabbit, and human offspring is similar. It shows how the germ layers become different organs and tissues in evolutionarily higher life-forms and how these species develop in essentially the same way. Additionally, it shows how multiple species develop in a parallel manner but branch off to develop features specific to the organism, such as hooves, a tail, or ears.
Neurulation
In developing vertebrate offspring, a neural tube is formed through either primary or secondary neurulation. Some species develop their spine and nervous system using both primary and secondary neurulation, while others use only primary or secondary neurulation. In human fetal development, primary neurulation occurs during weeks 3 and 4 of gestation to develop the brain and spinal cord. Then during weeks 5 and 6 of gestation, secondary neurulation forms the lower sacral and coccygeal cord.
Primary Neurulation
The diagram to the right illustrates primary neurulation, in which cells surrounding the neural plate interact with neural plate cells to proliferate, converge, and pinch off, forming a hollow tube above the notochord and mesoderm. This process is discontinuous and can start at the different points along the cranial-caudal axis necessary for it to close. After the neural tube closes, the neural crest cells and ectoderm cells separate, and the ectoderm becomes the epidermis surrounding this complex. The neural crest cells differentiate to become components of most of the peripheral nervous system in animals. Next, the notochord degenerates to become only the nucleus pulposus of the intervertebral discs, and the mesoderm cells differentiate to become the somites and, later, skeletal muscle. Also during this stage, the neural crest cells become the spinal ganglia, which function as the brain in organisms such as earthworms and arthropods. In more advanced organisms such as amphibians, birds and mammals, the spinal ganglia consist of clusters of nerve cell bodies positioned along the spinal cord at the dorsal and ventral roots of a spinal nerve, a pair of nerves corresponding to a vertebra of the spine.
Secondary Neurulation
In secondary neurulation, the caudal and sacral regions of the spine are formed after primary neurulation is finished. This process begins once primary neurulation is complete and the posterior neuropore closes, so that the tail bud can proliferate and condense, then form a cavity and fuse with the central canal of the neural tube. Secondary neurulation occurs in the small region from the spinal tail bud up to the posterior neuropore, the open neural folds near the tail region that do not close during primary neurulation. As canalization progresses over the next few weeks, neurons and ependymal cells (the cells that produce cerebrospinal fluid) differentiate to become the tail end of the spinal cord. Next, the closed neural tube contains neuroepithelial cells that divide immediately after closure, and a second cell type, the neuroblast, forms. Neuroblasts form the mantle layer, which later becomes the gray matter, which in turn gives rise to a marginal layer that becomes the white matter of the spinal cord. Secondary neurulation is seen in the neural tube of the lumbar and tail vertebrae of frogs and chicks, and in both cases the process is like a continuation of gastrulation.
Larval and juvenile phases
In most species, the young organism that has just been born or hatched is not yet sexually mature, and in most animals it looks quite different from the adult form. This young organism is the larva, the intermediate form before metamorphosis into an adult. A well-known example of a larval form is the caterpillar of butterflies and moths. Caterpillars keep growing and feeding in order to store enough energy for the pupal stage, when the body parts necessary for metamorphosis are grown. The juvenile phase differs between plants and animals: in plants, juvenility is an early phase of growth during which plants cannot flower, while in animals the juvenile stage is most commonly found in social mammals, such as wild dogs, monkeys, apes, lions, wolves, and more. In humans, puberty marks the end of this stage and adolescence follows. Some species begin puberty and reproduction before the juvenile stage is over, as in female non-human primates. The larval and pupal stages can be seen in the figure to the right.
Metamorphosis
Metamorphosis is the process by which an organism's body undergoes structural and physical changes after birth or hatching to become suited to its adult environment. For example, in amphibian tadpoles, liver enzymes, hemoglobin, and eye pigments mature, while the nervous, digestive, and reproductive systems are remodeled. Molting and juvenile hormones appear to regulate these changes. The figure to the right shows the stages of life in butterflies, whose metamorphosis transforms the caterpillar into a butterfly.
Adulthood
Adulthood is the stage at which physical and intellectual maturity has been achieved, and what counts as adult differs between species. In humans, adulthood is reached at around 20 or 21 years of age and is the longest stage of life, though in all species it ends with death. In dogs, small breeds (e.g., Yorkshire Terrier, Chihuahua, Cocker Spaniel) physically mature faster than large breeds (e.g., Saint Bernard, Great Dane, Golden Retriever), so adulthood is reached anywhere from 12 to 24 months of age. In contrast, many insect species have long larval stages, and the adult stage serves only for reproduction. Silkworm moths, for example, lack mouthparts and do not feed, so they must consume enough food during the larval stage to have the energy to survive and mate as adults.
Senescence
Cellular senescence is the state in which cells stop dividing but do not die; such cells can build up and cause problems in the body, releasing substances that cause inflammation and can damage healthy nearby cells. Senescence can be induced by unrepaired DNA damage (e.g., from radiation or old age) or other cellular stress. The term also refers to the state of being old.
Ontogenetic allometry
Most organisms undergo allometric changes in shape as they grow and mature, while others engage in metamorphosis. Even reptiles (non-avian sauropsids, e.g., crocodilians, turtles, snakes, and lizards), in which the offspring are often viewed as miniature adults, show a variety of ontogenetic changes in morphology and physiology.
See also
Developmental biology
Ernst Haeckel
Genetics
Recapitulation theory, the idea that ontogeny recapitulates phylogeny
Embryology
Organogenesis
Ontogeny (psychoanalysis)
Phylogenetics
Phylogeny (psychoanalysis)
Apoptosis
Evo-devo (evolutionary developmental biology)
Cellular differentiation
Cell biology
Nikolaas Tinbergen
Metamorphosis
Morphology
Physiology
Eco-evo-devo (ecological evolutionary developmental biology)
Darwinism
Fertilization
Cleavage
Blastulation
Gastrulation
Germ layers
Neurulation
Spinal cord
Larva
Adulthood
Senescence
Speciation | Speciation is the evolutionary process by which populations evolve to become distinct species. The biologist Orator F. Cook coined the term in 1906 for cladogenesis, the splitting of lineages, as opposed to anagenesis, phyletic evolution within lineages. Charles Darwin was the first to describe the role of natural selection in speciation in his 1859 book On the Origin of Species. He also identified sexual selection as a likely mechanism, but found it problematic.
There are four geographic modes of speciation in nature, based on the extent to which speciating populations are isolated from one another: allopatric, peripatric, parapatric, and sympatric. Whether genetic drift is a minor or major contributor to speciation is the subject of much ongoing discussion.
Rapid sympatric speciation can take place through polyploidy, such as by doubling of chromosome number; the result is progeny which are immediately reproductively isolated from the parent population. New species can also be created through hybridization, followed by reproductive isolation, if the hybrid is favoured by natural selection.
Historical background
In addressing the origin of species, there are two key issues:
the evolutionary mechanisms of speciation
how the separateness and individuality of species is maintained
Since Charles Darwin's time, efforts to understand the nature of species have primarily focused on the first aspect, and it is now widely agreed that the critical factor behind the origin of new species is reproductive isolation.
Darwin's dilemma: why do species exist?
In On the Origin of Species (1859), Darwin interpreted biological evolution in terms of natural selection, but was perplexed by the clustering of organisms into species. Chapter 6 of Darwin's book is entitled "Difficulties of the Theory". In discussing these "difficulties", he noted the puzzle of why, if species have descended from other species by fine gradations, we do not everywhere see innumerable transitional forms.
This dilemma can be described as the absence or rarity of transitional varieties in habitat space.
Another dilemma, related to the first one, is the absence or rarity of transitional varieties in time. Darwin pointed out that by the theory of natural selection "innumerable transitional forms must have existed", and wondered "why do we not find them embedded in countless numbers in the crust of the earth". That clearly defined species actually do exist in nature in both space and time implies that some fundamental feature of natural selection operates to generate and maintain species.
Effect of sexual reproduction on species formation
It has been argued that the resolution of Darwin's first dilemma lies in the fact that out-crossing sexual reproduction has an intrinsic cost of rarity. The cost of rarity arises as follows. If, on a resource gradient, a large number of separate species evolve, each exquisitely adapted to a very narrow band on that gradient, each species will, of necessity, consist of very few members. Finding a mate under these circumstances may present difficulties when many of the individuals in the neighborhood belong to other species. Under these circumstances, if any species' population size happens, by chance, to increase (at the expense of one or other of its neighboring species, if the environment is saturated), this will immediately make it easier for its members to find sexual partners. The members of the neighboring species, whose population sizes have decreased, experience greater difficulty in finding mates, and therefore form pairs less frequently than the larger species. This has a snowball effect, with large species growing at the expense of the smaller, rarer species, eventually driving them to extinction. Eventually, only a few species remain, each distinctly different from the other. The cost of rarity not only involves the costs of failure to find a mate, but also indirect costs such as the cost of communication in seeking out a partner at low population densities.
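A toy simulation (our illustration, not a model from the literature) shows the snowball: if each individual's chance of finding a conspecific mate tracks its species' relative abundance in a saturated habitat, a small initial imbalance grows until the rarer species disappears.

```python
import random

# Toy sketch (our illustration, not a published model): two species share a
# saturated habitat. Each individual reproduces only if it encounters a
# conspecific mate, so the rarer species pairs less often and shrinks further.
def generation(n_a, n_b, capacity=1000):
    total = n_a + n_b
    found_a = sum(random.random() < n_a / total for _ in range(n_a))
    found_b = sum(random.random() < n_b / total for _ in range(n_b))
    if found_a + found_b == 0:
        return n_a, n_b
    # The saturated habitat rescales offspring to a fixed carrying capacity.
    n_a = round(capacity * found_a / (found_a + found_b))
    return n_a, capacity - n_a

a, b = 520, 480   # species A starts only slightly more common
for _ in range(30):
    a, b = generation(a, b)
print(a, b)       # typically ~1000 and 0: the rarer species has been lost
```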
Rarity brings with it other costs. Rare and unusual features are very seldom advantageous. In most instances, they indicate a (non-silent) mutation, which is almost certain to be deleterious. It therefore behooves sexual creatures to avoid mates sporting rare or unusual features (koinophilia). Sexual populations therefore rapidly shed rare or peripheral phenotypic features, thus canalizing the entire external appearance, as illustrated in the accompanying image of the African pygmy kingfisher, Ispidina picta. This uniformity of all the adult members of a sexual species has stimulated the proliferation of field guides on birds, mammals, reptiles, insects, and many other taxa, in which a species can be described with a single illustration (or two, in the case of sexual dimorphism). Once a population has become as homogeneous in appearance as is typical of most species (and is illustrated in the photograph of the African pygmy kingfisher), its members will avoid mating with members of other populations that look different from themselves. Thus, the avoidance of mates displaying rare and unusual phenotypic features inevitably leads to reproductive isolation, one of the hallmarks of speciation.
In the contrasting case of organisms that reproduce asexually, there is no cost of rarity; consequently, there are only benefits to fine-scale adaptation. Thus, asexual organisms very frequently show the continuous variation in form (often in many different directions) that Darwin expected evolution to produce, making their classification into "species" (more correctly, morphospecies) very difficult.
Modes
All forms of natural speciation have taken place over the course of evolution; however, debate persists as to the relative importance of each mechanism in driving biodiversity.
One example of natural speciation is the diversity of the three-spined stickleback, a marine fish that, after the last glacial period, has undergone speciation into new freshwater colonies in isolated lakes and streams. Over an estimated 10,000 generations, the sticklebacks show structural differences that are greater than those seen between different genera of fish, including variations in fins, changes in the number or size of their bony plates, variable jaw structure, and color differences.
Allopatric
During allopatric (from the ancient Greek allos, "other" + patrā, "fatherland") speciation, a population splits into two geographically isolated populations (for example, by habitat fragmentation due to geographical change such as mountain formation). The isolated populations then undergo genotypic or phenotypic divergence as: (a) they become subjected to dissimilar selective pressures; (b) different mutations arise in the two populations. When the populations come back into contact, they have evolved such that they are reproductively isolated and are no longer capable of exchanging genes. Island genetics is the term associated with the tendency of small, isolated genetic pools to produce unusual traits. Examples include insular dwarfism and the radical changes among certain famous island chains, for example on Komodo. The Galápagos Islands are particularly famous for their influence on Charles Darwin. During his five weeks there he heard that Galápagos tortoises could be identified by island, and noticed that finches differed from one island to another, but it was only nine months later that he speculated that such facts could show that species were changeable. When he returned to England, his speculation on evolution deepened after experts informed him that these were separate species, not just varieties, and famously that other differing Galápagos birds were all species of finches. Though the finches were less important for Darwin, more recent research has shown the birds now known as Darwin's finches to be a classic case of adaptive evolutionary radiation.
Peripatric
In peripatric speciation, a subform of allopatric speciation, new species are formed in isolated, smaller peripheral populations that are prevented from exchanging genes with the main population. It is related to the concept of a founder effect, since small populations often undergo bottlenecks. Genetic drift is often proposed to play a significant role in peripatric speciation.
Case studies include Mayr's investigation of bird fauna; the Australian bird Petroica multicolor; and reproductive isolation in populations of Drosophila subject to population bottlenecking.
Parapatric
In parapatric speciation, there is only partial separation of the zones of two diverging populations afforded by geography; individuals of each species may come in contact or cross habitats from time to time, but reduced fitness of the heterozygote leads to selection for behaviours or mechanisms that prevent their interbreeding. Parapatric speciation is modelled on continuous variation within a "single", connected habitat acting as a source of natural selection rather than the effects of isolation of habitats produced in peripatric and allopatric speciation.
Parapatric speciation may be associated with differential landscape-dependent selection. Even if there is gene flow between two populations, strong differential selection may impede assimilation, and different species may eventually develop. Habitat differences may be more important in the development of reproductive isolation than the isolation time. Caucasian rock lizards Darevskia rudis, D. valentini and D. portschinskii all hybridize with each other in their hybrid zone; however, hybridization is stronger between D. portschinskii and D. rudis, which separated earlier but live in similar habitats, than between D. valentini and the two other species, which separated later but live in climatically different habitats.
Ecologists refer to parapatric and peripatric speciation in terms of ecological niches. A niche must be available in order for a new species to be successful. Ring species such as Larus gulls have been claimed to illustrate speciation in progress, though the situation may be more complex. The grass Anthoxanthum odoratum may be starting parapatric speciation in areas of mine contamination.
Sympatric
Sympatric speciation is the formation of two or more descendant species from a single ancestral species all occupying the same geographic location.
Often-cited examples of sympatric speciation are found in insects that become dependent on different host plants in the same area.
The best known example of sympatric speciation is that of the cichlids of East Africa inhabiting the Rift Valley lakes, particularly Lake Victoria, Lake Malawi and Lake Tanganyika. There are over 800 described species, and according to estimates, there could be well over 1,600 species in the region. Their evolution is cited as an example of both natural and sexual selection. A 2008 study suggests that sympatric speciation has occurred in Tennessee cave salamanders. Sympatric speciation driven by ecological factors may also account for the extraordinary diversity of crustaceans living in the depths of Siberia's Lake Baikal.
Budding speciation has been proposed as a particular form of sympatric speciation, whereby small groups of individuals become progressively more isolated from the ancestral stock by breeding preferentially with one another. This type of speciation would be driven by the conjunction of various advantages of inbreeding such as the expression of advantageous recessive phenotypes, reducing the recombination load, and reducing the cost of sex.
The hawthorn fly (Rhagoletis pomonella), also known as the apple maggot fly, appears to be undergoing sympatric speciation. Different populations of hawthorn fly feed on different fruits. A distinct population emerged in North America in the 19th century some time after apples, a non-native species, were introduced. This apple-feeding population normally feeds only on apples and not on the historically preferred fruit of hawthorns, and the current hawthorn-feeding population does not normally feed on apples. Some evidence, such as the facts that six out of thirteen allozyme loci are different, that hawthorn flies mature later in the season and take longer to mature than apple flies, and that there is little evidence of interbreeding (researchers have documented a 4–6% hybridization rate), suggests that sympatric speciation is occurring.
Methods of selection
Reinforcement
Reinforcement, also called the Wallace effect, is the process by which natural selection increases reproductive isolation. It may occur after two populations of the same species are separated and then come back into contact. If their reproductive isolation was complete, then they will have already developed into two separate incompatible species. If their reproductive isolation is incomplete, then further mating between the populations will produce hybrids, which may or may not be fertile. If the hybrids are infertile, or fertile but less fit than their ancestors, then there will be further reproductive isolation and speciation has essentially occurred, as in horses and donkeys.
One line of reasoning behind this is that if the parents of the hybrid offspring each have naturally selected traits for their own environments, the hybrid offspring will bear traits from both, and therefore will not fit either ecological niche as well as either parent (ecological speciation). The low fitness of the hybrids would cause selection to favor assortative mating, which would limit hybridization. This is sometimes called the Wallace effect, after the evolutionary biologist Alfred Russel Wallace, who suggested in the late 19th century that it might be an important factor in speciation. Conversely, if the hybrid offspring are more fit than their ancestors, then the populations will merge back into the same species within the area they are in contact.
Another important theoretical mechanism is the emergence of intrinsic genetic incompatibilities, addressed in the Bateson–Dobzhansky–Muller model. Genes from allopatric populations have different evolutionary backgrounds and are never tested together until hybridization at secondary contact, when negative epistatic interactions are exposed. In other words, new alleles emerging in a population pass through selection only if they work well with the other genes of that population, but they may not be compatible with the genes of an allopatric population, whether those are other newly derived alleles or retained ancestral alleles. Such incompatibilities cause lower fitness in hybrids regardless of the ecological environment, and are thus intrinsic, although they can originate from adaptation to different environments. The accumulation of such incompatibilities increases faster and faster with time, creating a "snowball" effect. There is a large amount of evidence supporting this theory, primarily from laboratory populations such as Drosophila and Mus, and some genes involved in incompatibilities have been identified.
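One standard way to see why the accumulation snowballs (sketched here in our notation rather than quoted from the sources): if the two lineages have fixed D substitutions between them since divergence, the number of between-lineage gene pairs that have never been tested together is

\binom{D}{2} = \frac{D(D-1)}{2},

so if each untested pair is incompatible with some small probability p, the expected number of incompatibilities is approximately p\,D(D-1)/2, which grows roughly quadratically with divergence time when substitutions accumulate at a steady rate.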
Reinforcement favoring reproductive isolation is required for both parapatric and sympatric speciation. Without reinforcement, the geographic area of contact between different forms of the same species, called their "hybrid zone", will not develop into a boundary between the different species. Hybrid zones are regions where diverged populations meet and interbreed. Hybrid offspring are common in these regions, which are usually created by diverged species coming into secondary contact. Without reinforcement, the two species would interbreed uncontrollably. Reinforcement may be induced in artificial selection experiments, as described below.
Ecological
Ecological selection is "the interaction of individuals with their environment during resource acquisition". Natural selection is inherently involved in the process of speciation, whereby, "under ecological speciation, populations in different environments, or populations exploiting different resources, experience contrasting natural selection pressures on the traits that directly or indirectly bring about the evolution of reproductive isolation". There is evidence for the role ecology plays in the process of speciation. Studies of stickleback populations support ecologically-linked speciation arising as a by-product, and numerous studies of parallel speciation show that isolation evolves more readily between independent populations adapting to contrasting environments than between independent populations adapting to similar environments. Much of the evidence for ecological speciation has been "...accumulated from top-down studies of adaptation and reproductive isolation".
Sexual selection
Sexual selection can drive speciation in a clade, independently of natural selection. However, the term "speciation", in this context, tends to be used in two different, but not mutually exclusive, senses. The first and most commonly used sense refers to the "birth" of new species: the splitting of an existing species into two separate species, or the budding off of a new species from a parent species, both driven by a biological "fashion fad" (a preference for a feature, or features, in one or both sexes, that does not necessarily have any adaptive qualities). In the second sense, "speciation" refers to the widespread tendency of sexual creatures to be grouped into clearly defined species, rather than forming a continuum of phenotypes in both time and space, which would be the more obvious or logical consequence of natural selection. This was indeed recognized by Darwin as problematic, and included in his On the Origin of Species (1859) under the heading "Difficulties on Theory". There are several suggestions as to how mate choice might play a significant role in resolving Darwin's dilemma. If speciation takes place in the absence of natural selection, it might be referred to as nonecological speciation.
Artificial speciation
New species have been created by animal husbandry, but the dates and methods by which such species arose are not clear. Often, the domestic counterpart can still interbreed and produce fertile offspring with its wild ancestor. This is the case with domestic cattle, which can be considered the same species as several varieties of wild ox, gaur, and yak; and with domestic sheep, which can interbreed with the mouflon.
The best-documented creations of new species in the laboratory were performed in the late 1980s. William R. Rice and George W. Salt bred Drosophila melanogaster fruit flies using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were reproductively isolated because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. The history of such attempts is described by Rice and Ellen E. Hostert (1993).
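The logic of such habitat-preference experiments can be illustrated with a toy simulation (a minimal sketch, not Rice and Salt's actual protocol; the population size, thresholds, and mutation scale are illustrative assumptions):

import random

# Each fly carries a heritable habitat preference in [0, 1]. Flies mate
# only with partners that chose the same habitat, so divergent
# preferences alone make mating assortative.
POP, GENS, SIGMA = 600, 35, 0.03

def child(a, b):
    # Midparent preference plus mutational noise, clipped to [0, 1].
    return min(1.0, max(0.0, 0.5 * (a + b) + random.gauss(0.0, SIGMA)))

pop = [random.uniform(0.0, 1.0) for _ in range(POP)]
for _ in range(GENS):
    # Disruptive selection: breed only flies with pronounced preferences,
    # analogous to collecting flies from two of the eight maze exits.
    groups = ([p for p in pop if p < 0.30], [p for p in pop if p > 0.70])
    pop = [child(random.choice(g), random.choice(g))  # selfing allowed, for simplicity
           for g in groups if len(g) >= 2
           for _ in range(POP // 2)]

# After ~35 generations the preference distribution is bimodal: two pools
# that, within the model, never meet in the same habitat and so never mate.
print(sum(p < 0.5 for p in pop), "dark-preferring;",
      sum(p > 0.5 for p in pop), "light-preferring")

In the model, as in the experiment, no direct selection against hybrids is needed; the habitat preference itself supplies the reproductive isolation.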
Diane Dodd used a laboratory experiment to show how reproductive isolation can develop in Drosophila pseudoobscura fruit flies after several generations of rearing on different media, one starch-based and the other maltose-based.
Dodd's experiment has been replicated many times, including with other kinds of fruit flies and foods. Such rapid evolution of reproductive isolation may sometimes be a relic of infection by Wolbachia bacteria.
An alternative explanation is that these observations are consistent with sexually-reproducing animals being inherently reluctant to mate with individuals whose appearance or behavior is different from the norm. The risk that such deviations are due to heritable maladaptations is high. Thus, if an animal, unable to predict natural selection's future direction, is conditioned to produce the fittest offspring possible, it will avoid mates with unusual habits or features. Sexual creatures then inevitably group themselves into reproductively isolated species.
Genetics
Few speciation genes have been found. They usually involve the reinforcement process in the late stages of speciation. In 2008, a speciation gene causing reproductive isolation was reported; it causes hybrid sterility between related subspecies. The order in which three groups speciated from a common ancestor may be unclear or unknown; a collection of three such species is referred to as a "trichotomy".
Speciation via polyploidy
Polyploidy is a mechanism that has caused many rapid speciation events in sympatry because, for example, tetraploid × diploid matings often produce sterile triploid progeny. However, among plants, not all polyploids are reproductively isolated from their parents, and gene flow may still occur, such as through triploid hybrid × diploid matings that produce tetraploids, or matings between meiotically unreduced gametes from diploids and gametes from tetraploids (see also hybrid speciation).
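The sterility barrier follows from gamete arithmetic alone (a standard worked example, with n the base chromosome number): a tetraploid (4n) parent makes 2n gametes and a diploid (2n) parent makes n gametes, so

2n + n \;\longrightarrow\; 3n \quad \text{(triploid)}.

Every chromosome in the triploid is present three times, so homologues cannot pair two-by-two at meiosis and most gametes receive unbalanced sets, isolating the hybrid from both parents in a single generation. Conversely, an unreduced 2n gamete from a diploid joining a 2n gamete from a tetraploid restores a fertile 4n, one of the gene-flow routes just described.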
It has been suggested that many of the existing plant and most animal species have undergone an event of polyploidization in their evolutionary history. Reproduction of successful polyploid species is sometimes asexual, by parthenogenesis or apomixis, as for unknown reasons many asexual organisms are polyploid. Rare instances of polyploid mammals are known, but most often result in prenatal death.
Hybrid speciation
Hybridization between two different species sometimes leads to a distinct phenotype. This phenotype can also be fitter than the parental lineages, in which case natural selection may favor these individuals. Eventually, if reproductive isolation is achieved, it may lead to a separate species. However, reproductive isolation between hybrids and their parents is particularly difficult to achieve, and thus hybrid speciation is considered an extremely rare event. The Mariana mallard is thought to have arisen from hybrid speciation.
Hybridization is an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations.
Hybridization without change in chromosome number is called homoploid hybrid speciation. It is considered very rare but has been shown in Heliconius butterflies and sunflowers. Polyploid speciation, which involves changes in chromosome number, is a more common phenomenon, especially in plant species.
Gene transposition
Theodosius Dobzhansky, who studied fruit flies in the early days of genetic research in the 1930s, speculated that parts of chromosomes that switch from one location to another might cause a species to split into two different species. He mapped out how it might be possible for sections of chromosomes to relocate themselves in a genome. Those mobile sections can cause sterility in inter-species hybrids, which can act as a speciation pressure. In theory, his idea was sound, but scientists long debated whether it actually happened in nature. Eventually a competing theory involving the gradual accumulation of mutations was shown to occur in nature so often that geneticists largely dismissed the moving-gene hypothesis. However, research published in 2006 showed that the jumping of a gene from one chromosome to another can contribute to the birth of new species, validating the reproductive isolation mechanism, a key component of speciation.
Rates
There is debate as to the rate at which speciation events occur over geologic time. While some evolutionary biologists claim that speciation events have remained relatively constant and gradual over time (a view known as "phyletic gradualism"), some palaeontologists such as Niles Eldredge and Stephen Jay Gould have argued that species usually remain unchanged over long stretches of time, and that speciation occurs only over relatively brief intervals, a view known as punctuated equilibrium.
Punctuated evolution
Evolution can be extremely rapid, as shown in the creation of domesticated animals and plants in a very short geological space of time, spanning only a few tens of thousands of years. Maize (Zea mays), for instance, was created in Mexico in only a few thousand years, starting about 7,000 to 12,000 years ago. This raises the question of why the long-term rate of evolution is far slower than is theoretically possible.
Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. The mutations on which the process depends are random events, and, except for the "silent mutations" which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit from being able to adapt to a new environment by accumulating a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve would require group selection, a concept discredited by (for example) George C. Williams, John Maynard Smith and Richard Dawkins as selectively disadvantageous to the individual.
The resolution to Darwin's second dilemma might thus come about as follows:
If sexual individuals are disadvantaged by passing mutations on to their offspring, they will avoid mutant mates with strange or unusual characteristics. Mutations that affect the external appearance of their carriers will then rarely be passed on to the next and subsequent generations. They would therefore seldom be tested by natural selection. Evolution is, therefore, effectively halted or slowed down considerably. The only mutations that can accumulate in a population, on this punctuated equilibrium view, are ones that have no noticeable effect on the outward appearance and functionality of their bearers (i.e., they are "silent" or "neutral mutations", which can be, and are, used to trace the relatedness and age of populations and species.)
This argument implies that evolution can only occur if mutant mates cannot be avoided, as a result of a severe scarcity of potential mates. This is most likely to occur in small, isolated communities. These occur most commonly on small islands, in remote valleys, lakes, river systems, or caves, or during the aftermath of a mass extinction. Under these circumstances, not only is the choice of mates severely restricted but population bottlenecks, founder effects, genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. If an isolated population such as this survives its genetic upheavals, and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come into being. In geological terms, this will be an abrupt event. A resumption of avoiding mutant mates will thereafter result, once again, in evolutionary stagnation.
In apparent confirmation of this punctuated equilibrium view of evolution, the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear, hundreds of thousands or millions of years later, without any change in external appearance. Plotted graphically, these fossil species are represented by lines parallel to the time axis, whose lengths depict how long each of them existed. The fact that the lines remain parallel to the time axis illustrates the unchanging appearance of each of the fossil species. During each species' existence, new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The exact relatedness of these concurrent species is generally impossible to determine. The distribution of hominin species through time, since the hominins separated from the line that led to their closest living primate relatives, the chimpanzees, shows this pattern.
For similar evolutionary time lines see, for instance, the paleontological list of African dinosaurs, Asian dinosaurs, the Lampriformes and Amiiformes.
See also
Bateson–Dobzhansky–Muller model
Chronospecies
Court jester hypothesis
Macroevolution
Selection (genetic algorithm)
Species problem
Ecology
Evolutionary biology
Sexual selection
Biologist

A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree, as well as an advanced degree such as a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, a founder of experimental biology, is recognized as one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, after noting the resemblance of plant structures to the cells of a honeycomb.
Charles Darwin and Alfred Russel Wallace independently formulated the theory of evolution by natural selection, described in detail in Darwin's book On the Origin of Species, published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, described the basic structure of DNA, the genetic material for expressing life in all its forms, and proposed that it takes the form of a double helix.
Ian Wilmut led a research group that in 1996 first cloned a mammal from an adult somatic cell, a Finn Dorset lamb named Dolly.
Education
An undergraduate degree in biology typically requires coursework in molecular and cellular biology, development, ecology, genetics, microbiology, anatomy, physiology, botany, and zoology. Additional requirements may include physics, chemistry (general, organic, and biochemistry), calculus, and statistics.
Students who aspire to a research-oriented career usually pursue a graduate degree such as a master's or a doctorate (e.g., PhD) whereby they would receive training from a research head based on an apprenticeship model that has been in existence since the 1800s. Students in these graduate programs often receive specialized training in a particular subdiscipline of biology.
Research
Biologists who work in basic research formulate theories and devise experiments to advance human knowledge on life including topics such as evolution, biochemistry, molecular biology, neuroscience and cell biology.
Biologists typically conduct laboratory experiments involving animals, plants, microorganisms or biomolecules. However, a small part of biological research also occurs outside the laboratory and may involve natural observation rather than experimentation. For example, a botanist may investigate the plant species present in a particular environment, while an ecologist might study how a forest area recovers after a fire.
Biologists who work in applied research instead build on the accomplishments of basic research to further knowledge in particular fields or applications. For example, this applied research may be used to develop new pharmaceutical drugs, treatments and medical diagnostic tests. Biological scientists conducting applied research and product development in private industry may be required to describe their research plans or results to non-scientists who are in a position to veto or approve their ideas. These scientists must consider the business effects of their work.
Swift advances in knowledge of genetics and organic molecules spurred growth in the field of biotechnology, transforming the industries in which biological scientists work. Biological scientists can now manipulate the genetic material of animals and plants, attempting to make organisms (including humans) more productive or resistant to disease. Basic and applied research on biotechnological processes, such as recombining DNA, has led to the production of important substances, including human insulin and growth hormone. Many other substances not previously available in large quantities are now produced by biotechnological means. Some of these substances are useful in treating diseases.
Those working on various genome (chromosomes with their associated genes) projects isolate genes and determine their function. This work continues to lead to the discovery of genes associated with specific diseases and inherited health risks, such as sickle cell anemia. Advances in biotechnology have created research opportunities in almost all areas of biology, with commercial applications in areas such as medicine, agriculture, and environmental remediation.
Specializations
Most biological scientists specialize in the study of a certain type of organism or in a specific activity, although recent advances have blurred some traditional classifications.
Geneticists study genetics, the science of genes, heredity, and variation of organisms.
Neuroscientists study the nervous system.
Developmental biologists study the process of development and growth of organisms.
Biochemists study the chemical composition of living things. They analyze the complex chemical combinations and reactions involved in metabolism, reproduction, and growth.
Molecular biologists study the biological activity between biomolecules.
Microbiologists investigate the growth and characteristics of microscopic organisms such as bacteria, algae, or fungi.
Physiologists study life functions of plants and animals, in the whole organism and at the cellular or molecular level, under normal and abnormal conditions. Physiologists often specialize in functions such as growth, reproduction, photosynthesis, respiration, or movement, or in the physiology of a certain area or system of the organism.
Biophysicists use experimental methods traditionally employed in physics to answer biological questions.
Computational biologists apply the techniques of computer science, applied mathematics, and statistics to address biological problems. Their main focus lies in developing mathematical modeling and computational simulation techniques. By these means, they can address scientific questions, both theoretical and experimental, without a laboratory.
Zoologists and wildlife biologists study animals and wildlife—their origin, behavior, diseases, and life processes. Some experiment with live animals in controlled or natural surroundings, while others dissect dead animals to study their structure. Zoologists and wildlife biologists also may collect and analyze biological data to determine the environmental effects of current and potential uses of land and water areas. Zoologists usually are identified by the animal group they study. For example, ornithologists study birds, mammalogists study mammals, herpetologists study reptiles and amphibians, ichthyologists study fish, cnidariologists study jellyfishes and entomologists study insects.
Botanists study plants and their environments. Some study all aspects of plant life, including algae, lichens, mosses, ferns, conifers, and flowering plants; others specialize in areas such as identification and classification of plants, the structure and function of plant parts, the biochemistry of plant processes, the causes and cures of plant diseases, the interaction of plants with other organisms and the environment, the geological record of plants and their evolution. Mycologists study fungi, such as yeasts, mold and mushrooms, which are a separate kingdom from plants.
Aquatic biologists study micro-organisms, plants, and animals living in water. Marine biologists study salt water organisms, and limnologists study fresh water organisms. Much of the work of marine biology centers on molecular biology, the study of the biochemical processes that take place inside living cells. Marine biology is a branch of oceanography, which is the study of the biological, chemical, geological, and physical characteristics of oceans and the ocean floor.
Ecologists investigate the relationships among organisms and between organisms and their environments, examining the effects of population size, pollutants, rainfall, temperature, and altitude. Using knowledge of various scientific disciplines, ecologists may collect, study, and report data on the quality of air, food, soil, and water.
Evolutionary biologists investigate the evolutionary processes that produced the diversity of life on Earth, starting from a single common ancestor. These processes include natural selection, common descent, and speciation.
Employment
Biologists typically work regular hours but longer hours are not uncommon. Researchers may be required to work odd hours in laboratories or other locations (especially while in the field), depending on the nature of their research.
Many biologists depend on grant money to fund their research. They may be under pressure to meet deadlines and to conform to rigid grant-writing specifications when preparing proposals to seek new or extended funding.
Marine biologists encounter a variety of working conditions. Some work in laboratories; others work on research ships, and those who work underwater must practice safe diving while working around sharp coral reefs and hazardous marine life. Although some marine biologists obtain their specimens from the sea, many still spend a good deal of their time in laboratories and offices, conducting tests, running experiments, recording results, and compiling data.
Biologists are not usually exposed to unsafe or unhealthy conditions. Those who work with dangerous organisms or toxic substances in the laboratory must follow strict safety procedures to avoid contamination. Many biological scientists, such as botanists, ecologists, and zoologists, conduct field studies that involve strenuous physical activity and primitive living conditions. Biological scientists in the field may work in warm or cold climates, in all kinds of weather.
Honors and awards
The highest honor awarded to biologists is the Nobel Prize in Physiology or Medicine, awarded since 1901 by the Nobel Assembly at the Karolinska Institute. Another significant award is the Crafoord Prize in Biosciences, established in 1980 and awarded by the Royal Swedish Academy of Sciences.
See also
Biology
Glossary of biology
List of biologists
Lists of biologists by author abbreviation
References
U.S. Department of Labor, Occupational Outlook Handbook
Science occupations
Abiogenesis

Abiogenesis is the natural process by which life arises from non-living matter, such as simple organic compounds. The prevailing scientific hypothesis is that the transition from non-living to living entities on Earth was not a single event, but a process of increasing complexity involving the formation of a habitable planet, the prebiotic synthesis of organic molecules, molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes. The transition from non-life to life has never been observed experimentally, but many proposals have been made for different stages of the process.
The study of abiogenesis aims to determine how pre-life chemical reactions gave rise to life under conditions strikingly different from those on Earth today. It primarily uses tools from biology and chemistry, with more recent approaches attempting a synthesis of many sciences. Life functions through the specialized chemistry of carbon and water, and builds largely upon four key families of chemicals: lipids for cell membranes, carbohydrates such as sugars, amino acids for protein metabolism, and the nucleic acids DNA and RNA for the mechanisms of heredity. Any successful theory of abiogenesis must explain the origins and interactions of these classes of molecules.
Many approaches to abiogenesis investigate how self-replicating molecules, or their components, came into existence. Researchers generally think that current life descends from an RNA world, although other self-replicating and self-catalyzing molecules may have preceded RNA. Other approaches ("metabolism-first" hypotheses) focus on understanding how catalysis in chemical systems on the early Earth might have provided the precursor molecules necessary for self-replication. The classic 1952 Miller–Urey experiment demonstrated that most amino acids, the chemical constituents of proteins, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth. External sources of energy may have triggered these reactions, including lightning, radiation, atmospheric entries of micro-meteorites and implosion of bubbles in sea and ocean waves.
While the last universal common ancestor of all modern organisms (LUCA) is thought to have been quite different from the origin of life, investigations into LUCA can guide research into early universal characteristics. A genomics approach has sought to characterise LUCA by identifying the genes shared by Archaea and Bacteria, members of the two major branches of life (with Eukaryotes included in the archaean branch in the two-domain system). It appears there are 355 genes common to all life; their functions imply that the LUCA was anaerobic with the Wood–Ljungdahl pathway, deriving energy by chemiosmosis, and maintaining its hereditary material with DNA, the genetic code, and ribosomes. Although the LUCA lived over 4 billion years ago (4 Gya), researchers believe it was far from the first form of life. Earlier cells might have had a leaky membrane and been powered by a naturally occurring proton gradient near a deep-sea white smoker hydrothermal vent.
Earth remains the only place in the universe known to harbor life. Geochemical and fossil evidence from the Earth informs most studies of abiogenesis. The Earth was formed at 4.54 Gya, and the earliest evidence of life on Earth dates from at least 3.5 Gya, from Western Australia. Some studies have suggested that fossil micro-organisms may have lived within hydrothermal vent precipitates dated 3.77 to 4.28 Gya from Quebec, soon after ocean formation 4.4 Gya during the Hadean.
Overview
Life consists of reproduction with (heritable) variations. NASA defines life as "a self-sustaining chemical system capable of Darwinian [i.e., biological] evolution." Such a system is complex; the last universal common ancestor (LUCA), presumably a single-celled organism which lived some 4 billion years ago, already had hundreds of genes encoded in the DNA genetic code that is universal today. That in turn implies a suite of cellular machinery including messenger RNA, transfer RNA, and ribosomes to translate the code into proteins. Those proteins included enzymes to operate its anaerobic respiration via the Wood–Ljungdahl metabolic pathway, and a DNA polymerase to replicate its genetic material.
The challenge for abiogenesis (origin of life) researchers is to explain how such a complex and tightly interlinked system could develop by evolutionary steps, as at first sight all its parts are necessary to enable it to function. For example, a cell, whether the LUCA or in a modern organism, copies its DNA with the DNA polymerase enzyme, which is in turn produced by translating the DNA polymerase gene in the DNA. Neither the enzyme nor the DNA can be produced without the other. The evolutionary process could have involved molecular self-replication, self-assembly such as of cell membranes, and autocatalysis via RNA ribozymes. Nonetheless, the transition of non-life to life has never been observed experimentally, nor has there been a satisfactory chemical explanation.
The preconditions to the development of a living cell like the LUCA are clear enough, though disputed in their details: a habitable world is formed with a supply of minerals and liquid water. Prebiotic synthesis creates a range of simple organic compounds, which are assembled into polymers such as proteins and RNA. On the other side, the process after the LUCA is readily understood: biological evolution caused the development of a wide range of species with varied forms and biochemical capabilities. However, the derivation of living things such as LUCA from simple components is far from understood.
Although Earth remains the only place where life is known, the science of astrobiology seeks evidence of life on other planets. The 2015 NASA strategy on the origin of life aimed to solve the puzzle by identifying interactions, intermediary structures and functions, energy sources, and environmental factors that contributed to the diversity, selection, and replication of evolvable macromolecular systems, and mapping the chemical landscape of potential primordial informational polymers. The advent of polymers that could replicate, store genetic information, and exhibit properties subject to selection was, it suggested, most likely a critical step in the emergence of prebiotic chemical evolution. Those polymers derived, in turn, from simple organic compounds such as nucleobases, amino acids, and sugars that could have been formed by reactions in the environment. A successful theory of the origin of life must explain how all these chemicals came into being.
Pre-1960s conceptual history
Spontaneous generation
One ancient view of the origin of life, from Aristotle until the 19th century, is of spontaneous generation. This theory held that "lower" animals such as insects were generated by decaying organic substances, and that life arose by chance. This was questioned from the 17th century, in works like Thomas Browne's Pseudodoxia Epidemica. In 1665, Robert Hooke published the first drawings of a microorganism. In 1676, Antonie van Leeuwenhoek drew and described microorganisms, probably protozoa and bacteria. Van Leeuwenhoek disagreed with spontaneous generation, and by the 1680s had convinced himself, using experiments ranging from sealed and open meat incubation to the close study of insect reproduction, that the theory was incorrect. In 1668 Francesco Redi had shown that no maggots appeared in meat when flies were prevented from laying eggs. By the middle of the 19th century, spontaneous generation was considered disproven.
Panspermia
Another ancient idea dating back to Anaxagoras in the 5th century BC is panspermia, the idea that life exists throughout the universe, distributed by meteoroids, asteroids, comets and planetoids. It does not attempt to explain how life originated in itself, but shifts the origin of life on Earth to another heavenly body. The advantage is that life is not required to have formed on each planet it occurs on, but rather in a more limited set of locations, or even a single location, and then spread about the galaxy to other star systems via cometary or meteorite impact. Panspermia did not get much scientific support because it was largely used to deflect the need for an answer rather than to explain observable phenomena. Although interest in panspermia grew when the study of meteorites found traces of organic materials in them, it is currently accepted that life started locally on Earth.
"A warm little pond": primordial soup
The idea that life originated from non-living matter in slow stages appeared in Herbert Spencer's 1864–1867 book Principles of Biology, and in William Turner Thiselton-Dyer's 1879 paper "On spontaneous generation and evolution". On 1 February 1871 Charles Darwin wrote about these publications to Joseph Hooker, and set out his own speculation, suggesting that the original spark of life may have begun in a "warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, &c., present, that a compound was chemically formed ready to undergo still more complex changes." Darwin went on to explain that "at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."
Alexander Oparin in 1924 and J. B. S. Haldane in 1929 proposed that the first molecules constituting the earliest cells slowly self-organized from a primordial soup, and this theory is called the Oparin–Haldane hypothesis. Haldane suggested that the Earth's prebiotic oceans consisted of a "hot dilute soup" in which organic compounds could have formed. J. D. Bernal showed that such mechanisms could form most of the necessary molecules for life from inorganic precursors. In 1967, he suggested three "stages": the origin of biological monomers; the origin of biological polymers; and the evolution from molecules to cells.
Miller–Urey experiment
In 1952, Stanley Miller and Harold Urey carried out a chemical experiment to demonstrate how organic molecules could have formed spontaneously from inorganic precursors under prebiotic conditions like those posited by the Oparin–Haldane hypothesis. It used a highly reducing (lacking oxygen) mixture of gases—methane, ammonia, and hydrogen, as well as water vapor—to form simple organic monomers such as amino acids. Bernal said of the Miller–Urey experiment that "it is not enough to explain the formation of such molecules, what is necessary, is a physical-chemical explanation of the origins of these molecules that suggests the presence of suitable sources and sinks for free energy." However, current scientific consensus describes the primitive atmosphere as weakly reducing or neutral, diminishing the amount and variety of amino acids that could be produced. The addition of iron and carbonate minerals, present in early oceans, however, produces a diverse array of amino acids. Later work has focused on two other potential reducing environments: outer space and deep-sea hydrothermal vents.
Producing a habitable Earth
Evolutionary history
Early universe with first stars
Soon after the Big Bang, which occurred roughly 14 Gya, the only chemical elements present in the universe were hydrogen, helium, and lithium, the three lightest atoms in the periodic table. These elements gradually accreted and began orbiting in disks of gas and dust. Gravitational accretion of material at the hot and dense centers of these protoplanetary disks formed stars by the fusion of hydrogen. Early stars were massive and short-lived, producing all the heavier elements through stellar nucleosynthesis. Element formation through stellar nucleosynthesis proceeds up to the most stable nucleus, iron-56; heavier elements were formed during supernovae at the end of a star's lifecycle. Carbon, currently the fourth most abundant chemical element in the universe (after hydrogen, helium, and oxygen), was formed mainly in stars that end their lives as white dwarfs, particularly those bigger than twice the mass of the Sun. As these stars reached the end of their lifecycles, they ejected these heavier elements, among them carbon and oxygen, throughout the universe. These heavier elements allowed for the formation of new objects, including rocky planets and other bodies. According to the nebular hypothesis, the formation and evolution of the Solar System began 4.6 Gya with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
Emergence of Earth
The age of the Earth is 4.54 Gya as found by radiometric dating of calcium-aluminium-rich inclusions in carbonaceous chondrite meteorites, the oldest material in the Solar System. The Hadean Earth (from its formation until 4 Gya) was at first inhospitable to any living organisms. During its formation, the Earth lost a significant part of its initial mass, and consequently lacked the gravity to hold molecular hydrogen and the bulk of the original inert gases. Soon after the initial accretion of Earth at 4.48 Gya, its collision with Theia, a hypothesised impactor, is thought to have created the ejected debris that would eventually form the Moon. This impact would have removed the Earth's primary atmosphere, leaving behind clouds of viscous silicates and carbon dioxide. This unstable atmosphere was short-lived and condensed shortly after to form the bulk silicate Earth, leaving behind an atmosphere largely consisting of water vapor, nitrogen, and carbon dioxide, with smaller amounts of carbon monoxide, hydrogen, and sulfur compounds. The solution of carbon dioxide in water is thought to have made the seas slightly acidic, with a pH of about 5.5.
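That pH figure is broadly consistent with simple carbonate-equilibrium arithmetic (a back-of-the-envelope sketch using textbook constants at 25 °C, not a model of actual Hadean seawater). For pure water in equilibrium with CO2 at partial pressure p_{CO2}, Henry's law and the first dissociation of carbonic acid give

[\mathrm{H^+}] \;\approx\; \sqrt{K_1\,K_\mathrm{H}\,p_{\mathrm{CO_2}}}, \qquad K_1 \approx 4.5\times10^{-7}, \quad K_\mathrm{H} \approx 3.3\times10^{-2}\ \mathrm{mol\,L^{-1}\,atm^{-1}},

so even the modern p_{CO2} of about 4\times10^{-4} atm yields [\mathrm{H^+}] \approx 2.4\times10^{-6} and pH ≈ 5.6, the familiar acidity of unpolluted rainwater. The much higher Hadean CO2 pressure would push the unbuffered value lower still, with cations from weathered rock buffering the oceans back up toward the quoted value.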
Condensation to form liquid oceans is theorised to have occurred as early as the Moon-forming impact. This scenario has found support from the dating of 4.404 Gya zircon crystals with high δ18O values from metamorphosed quartzite of Mount Narryer in Western Australia. The Hadean atmosphere has been characterized as a "gigantic, productive outdoor chemical laboratory," similar to volcanic gases today which still support some abiotic chemistry. Despite the likely increased volcanism from early plate tectonics, the Earth may have been a predominantly water world between 4.4 and 4.3 Gya. It is debated whether or not crust was exposed above this ocean, owing to uncertainties about what early plate tectonics looked like. For early life to have developed, it is generally thought that a land setting is required, so this question is essential to determining when in Earth's history life evolved. The post-Moon-forming-impact Earth likely existed with little if any continental crust, a turbulent atmosphere, and a hydrosphere subject to intense ultraviolet light from a T Tauri stage Sun, from cosmic radiation, and from continued asteroid and comet impacts. Despite all this, niche environments likely existed conducive to life on Earth in the Late Hadean to Early Archaean.
The Late Heavy Bombardment hypothesis posits that a period of intense impacts occurred at ~3.9 Gya during the Hadean. A cataclysmic impact event would have had the potential to sterilise all life on Earth by volatilising liquid oceans and blocking the Sun needed for photosynthesising primary producers, pushing back the earliest possible emergence of life to after the Late Heavy Bombardment. Recent research questions both the intensity of the Late Heavy Bombardment and its potential for sterilisation. Uncertainty as to whether the Late Heavy Bombardment was one giant impact or a period of elevated impact rates greatly changes the implications for its destructive power. The 3.9 Gya date arises from dating of Apollo mission sample returns collected mostly near the Imbrium Basin, biasing the age of recorded impacts. Impact modelling of the lunar surface reveals that rather than a cataclysmic event at 3.9 Gya, multiple small-scale, short-lived periods of bombardment likely occurred. Terrestrial data back this idea by showing multiple periods of ejecta in the rock record both before and after the 3.9 Gya marker, suggesting that the early Earth was subject to continuous impacts that would not have had as great an effect on extinction as previously thought. If the Late Heavy Bombardment did not take place, this allows for the emergence of life to have taken place far before 3.9 Gya.
If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from late impacts and the then high levels of ultraviolet radiation from the sun. Geothermically heated oceanic crust could have yielded far more organic compounds through deep hydrothermal vents than the Miller–Urey experiments indicated. The available energy is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live.
Earliest evidence of life
The exact timing at which life emerged on Earth is unknown. Minimum age estimates are based on evidence from the geologic rock record. The earliest physical evidence of life so far found consists of microbialites in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in banded iron formation rocks at least 3.77 and possibly as old as 4.28 Gya. The micro-organisms lived within hydrothermal vent precipitates, soon after the 4.4 Gya formation of oceans during the Hadean. The microbes resembled modern hydrothermal vent bacteria, supporting the view that abiogenesis began in such an environment.
Biogenic graphite has been found in 3.7 Gya metasedimentary rocks from southwestern Greenland and in microbial mat fossils from 3.49 Gya cherts in the Pilbara region of Western Australia. Evidence of early life in rocks from Akilia Island, near the Isua supracrustal belt in southwestern Greenland, dating to 3.7 Gya, have shown biogenic carbon isotopes. In other parts of the Isua supracrustal belt, graphite inclusions trapped within garnet crystals are connected to the other elements of life: oxygen, nitrogen, and possibly phosphorus in the form of phosphate, providing further evidence for life 3.7 Gya. In the Pilbara region of Western Australia, compelling evidence of early life was found in pyrite-bearing sandstone in a fossilized beach, with rounded tubular cells that oxidized sulfur by photosynthesis in the absence of oxygen. Carbon isotope ratios on graphite inclusions from the Jack Hills zircons suggest that life could have existed on Earth from 4.1 Gya.
The Pilbara region of Western Australia contains the Dresser Formation with rocks 3.48 Gya, including layered structures called stromatolites. Their modern counterparts are created by photosynthetic micro-organisms including cyanobacteria. These lie within undeformed hydrothermal-sedimentary strata; their texture indicates a biogenic origin. Parts of the Dresser formation preserve hot springs on land, but other regions seem to have been shallow seas. A molecular clock analysis suggests the LUCA emerged prior to the Late Heavy Bombardment (3.9 Gya).
Producing molecules: prebiotic synthesis
All chemical elements except for hydrogen and helium derive from stellar nucleosynthesis. The basic chemical ingredients of life – the carbon-hydrogen molecule (CH), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+) – were produced by ultraviolet light from stars. Complex molecules, including organic molecules, form naturally both in space and on planets. Organic molecules on the early Earth could have had either terrestrial origins, with organic molecule synthesis driven by impact shocks or by other energy sources, such as ultraviolet light, redox coupling, or electrical discharges; or extraterrestrial origins (pseudo-panspermia), with organic molecules formed in interstellar dust clouds raining down on to the planet.
Observed extraterrestrial organic molecules
An organic compound is a chemical whose molecules contain carbon. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Organic compounds are relatively common in space, formed by "factories of complex molecular synthesis" which occur in molecular clouds and circumstellar envelopes, and chemically evolve after reactions are initiated mostly by ionizing radiation. Purine and pyrimidine nucleobases including guanine, adenine, cytosine, uracil, and thymine have been found in meteorites. These could have provided the materials for DNA and RNA to form on the early Earth. The amino acid glycine was found in material ejected from comet Wild 2; it had earlier been detected in meteorites. Comets are encrusted with dark material, thought to be a tar-like organic substance formed from simple carbon compounds under ionizing radiation. A rain of material from comets could have brought such complex organic molecules to Earth. It is estimated that during the Late Heavy Bombardment, meteorites may have delivered up to five million tons of organic prebiotic elements to Earth per year.
PAH world hypothesis
Polycyclic aromatic hydrocarbons (PAH) are the most common and abundant polyatomic molecules in the observable universe, and are a major store of carbon. They seem to have formed shortly after the Big Bang, and are associated with new stars and exoplanets. They are a likely constituent of Earth's primordial sea. PAHs have been detected in nebulae, and in the interstellar medium, in comets, and in meteorites.
The PAH world hypothesis posits PAHs as precursors to the RNA world. A star, HH 46-IR, resembling the sun early in its life, is surrounded by a disk of material which contains molecules including cyanide compounds, hydrocarbons, and carbon monoxide. PAHs in the interstellar medium can be transformed through hydrogenation, oxygenation, and hydroxylation to more complex organic compounds used in living cells.
Nucleobases and nucleotides
The majority of organic compounds introduced on Earth by interstellar dust particles have helped to form complex molecules, thanks to their peculiar surface-catalytic activities. Studies of the 12C/13C isotopic ratios of organic compounds in the Murchison meteorite suggest that the RNA component uracil and related molecules, including xanthine, were formed extraterrestrially. NASA studies of meteorites suggest that all four DNA nucleobases, including adenine and guanine, and related organic molecules have been formed in outer space. The cosmic dust permeating the universe contains complex organics ("amorphous organic solids with a mixed aromatic–aliphatic structure") that could be created rapidly by stars. Glycolaldehyde, a sugar molecule and RNA precursor, has been detected in regions of space including around protostars and on meteorites.
Laboratory synthesis
As early as the 1860s, experiments demonstrated that biologically relevant molecules can be produced from interaction of simple carbon sources with abundant inorganic catalysts. The spontaneous formation of complex polymers from abiotically generated monomers under the conditions posited by the "soup" theory is not straightforward. Besides the necessary basic organic monomers, compounds that would have prohibited the formation of polymers were also formed in high concentration during the Miller–Urey and Joan Oró experiments. Biology uses essentially 20 amino acids for its coded protein enzymes, representing a very small subset of the structurally possible products. Since life tends to use whatever is available, an explanation is needed for why the set used is so small. Formamide is attractive as a medium that potentially provided a source of amino acid derivatives from simple aldehyde and nitrile feedstocks.
Sugars
Alexander Butlerov showed in 1861 that the formose reaction creates sugars, including tetroses, pentoses, and hexoses, when formaldehyde is heated under basic conditions with divalent metal ions like calcium. Ronald Breslow proposed in 1959 that the reaction is autocatalytic.
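Breslow's proposed cycle can be sketched in simplified form (the real formose network is far more tangled than this outline):

2 HCHO → glycolaldehyde (slow, uncatalyzed initiation)
glycolaldehyde + HCHO → glyceraldehyde (aldol addition)
glyceraldehyde ⇌ dihydroxyacetone (isomerization)
dihydroxyacetone + HCHO → ketotetrose (aldol addition)
ketotetrose ⇌ aldotetrose → 2 glycolaldehyde (retro-aldol)

Each turn of the cycle consumes formaldehyde and doubles the glycolaldehyde that feeds it, so sugar formation accelerates sharply once traces of glycolaldehyde appear.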
Nucleobases
Nucleobases, such as guanine and adenine, can be synthesized from simple carbon and nitrogen sources, such as hydrogen cyanide (HCN) and ammonia. Formamide produces all four ribonucleotides when warmed with terrestrial minerals. Formamide is ubiquitous in the Universe, produced by the reaction of water and HCN. It can be concentrated by the evaporation of water. HCN is poisonous only to aerobic organisms (eukaryotes and aerobic bacteria), which did not yet exist. It can play roles in other chemical processes such as the synthesis of the amino acid glycine.
DNA and RNA components including uracil, cytosine and thymine can be synthesized under outer space conditions, using starting chemicals such as pyrimidine found in meteorites. Pyrimidine may have been formed in red giant stars or in interstellar dust and gas clouds. All four RNA-bases may be synthesized from formamide in high-energy density events like extraterrestrial impacts.
Other pathways for synthesizing bases from inorganic materials have been reported. Freezing temperatures are advantageous for the synthesis of purines, due to the concentrating effect for key precursors such as hydrogen cyanide. However, while adenine and guanine require freezing conditions for synthesis, cytosine and uracil may require boiling temperatures. Seven amino acids and eleven types of nucleobases formed in ice when ammonia and cyanide were left in a freezer for 25 years. S-triazines (alternative nucleobases), pyrimidines including cytosine and uracil, and adenine can be synthesized by subjecting a urea solution to freeze-thaw cycles under a reductive atmosphere, with spark discharges as an energy source. The explanation given for the unusual speed of these reactions at such a low temperature is eutectic freezing, which crowds impurities in microscopic pockets of liquid within the ice, causing the molecules to collide more often.
Peptides
Prebiotic peptide synthesis is proposed to have occurred through a number of possible routes. Some center on high temperature/concentration conditions in which condensation becomes energetically favorable, while others focus on the availability of plausible prebiotic condensing agents.
Experimental evidence for the formation of peptides in uniquely concentrated environments is bolstered by work suggesting that wet-dry cycles and the presence of specific salts can greatly increase spontaneous condensation of glycine into poly-glycine chains. Other work suggests that while mineral surfaces, such as those of pyrite, calcite, and rutile catalyze peptide condensation, they also catalyze their hydrolysis. The authors suggest that additional chemical activation or coupling would be necessary to produce peptides at sufficient concentrations. Thus, mineral surface catalysis, while important, is not sufficient alone for peptide synthesis.
Many prebiotically plausible condensing/activating agents have been identified, including the following: cyanamide, dicyanamide, dicyandiamide, diaminomaleonitrile, urea, trimetaphosphate, NaCl, CuCl2, (Ni,Fe)S, CO, carbonyl sulfide (COS), carbon disulfide (CS2), SO2, and diammonium phosphate (DAP).
An experiment reported in 2024 used a sapphire substrate with a web of thin cracks under a heat flow, similar to the environment of deep-ocean vents, as a mechanism to separate and concentrate prebiotically relevant building blocks from a dilute mixture, enriching their concentrations by up to three orders of magnitude. The authors propose this as a plausible model for the origin of complex biopolymers. This presents another physical process that allows concentrated peptide precursors to combine under the right conditions. A similar role in increasing amino acid concentration has been suggested for clays as well.
While all of these scenarios involve the condensation of amino acids, the prebiotic synthesis of peptides from simpler molecules such as CO, NH3 and C, skipping the step of amino acid formation, is very efficient.
Producing suitable vesicles
The largest unanswered question in evolution is how simple protocells first arose and differed in reproductive contribution to the following generation, thus initiating the evolution of life. The lipid world theory postulates that the first self-replicating object was lipid-like. Phospholipids form lipid bilayers in water while under agitation—the same structure as in cell membranes. These molecules were not present on early Earth, but other amphiphilic long-chain molecules also form membranes. These bodies may expand by insertion of additional lipids, and may spontaneously split into two offspring of similar size and composition. Lipid bodies may have provided sheltering envelopes for information storage, allowing the evolution and preservation of polymers like RNA that store information. Only one or two of the types of amphiphile that might have led to the development of vesicles have been studied. There is an enormous number of possible arrangements of lipid bilayer membranes, and those with the best reproductive characteristics would have converged toward a hypercycle reaction, a positive feedback composed of two mutual catalysts represented by a membrane site and a specific compound trapped in the vesicle. Such site/compound pairs are transmissible to the daughter vesicles, leading to the emergence of distinct lineages of vesicles, which would have allowed natural selection.
A protocell is a self-organized, self-ordered, spherical collection of lipids proposed as a stepping-stone to the origin of life. A functional protocell has (as of 2014) not yet been achieved in a laboratory setting. Self-assembled vesicles are essential components of primitive cells. The theory of classical irreversible thermodynamics treats self-assembly under a generalized chemical potential within the framework of dissipative systems. The second law of thermodynamics requires that overall entropy increases, yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate ordered life processes from chaotic non-living matter.
Irene Chen and Jack W. Szostak suggest that elementary protocells can give rise to cellular behaviors including primitive forms of differential reproduction, competition, and energy storage. Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even the phospholipids of today. Such micro-encapsulation would allow for metabolism within the membrane and the exchange of small molecules, while retaining large biomolecules inside. Such a membrane is needed for a cell to create its own electrochemical gradient to store energy by pumping ions across the membrane. Fatty acid vesicles in conditions relevant to alkaline hydrothermal vents can be stabilized by isoprenoids which are synthesized by the formose reaction; the advantages and disadvantages of isoprenoids incorporated within the lipid bilayer in different microenvironments might have led to the divergence of the membranes of archaea and bacteria.
Laboratory experiments have shown that vesicles can undergo an evolutionary process under pressure cycling conditions. Simulating the systemic environment in tectonic fault zones within the Earth's crust, pressure cycling leads to the periodic formation of vesicles. Under the same conditions, random peptide chains form and are continuously selected for their ability to integrate into the vesicle membrane. A further selection of the vesicles for their stability potentially leads to the development of functional peptide structures, associated with an increased survival rate of the vesicles.
Producing biology
Energy and entropy
Life requires a loss of entropy, or disorder, as molecules organize themselves into living matter. At the same time, the emergence of life is associated with the formation of structures beyond a certain threshold of complexity. The emergence of life with increasing order and complexity does not contradict the second law of thermodynamics, which states that overall entropy never decreases, since a living organism creates order in some places (e.g. its living body) at the expense of an increase of entropy elsewhere (e.g. heat and waste production).
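In thermodynamic bookkeeping, this is the textbook statement (not specific to any one origin-of-life model) that

\Delta S_{\text{total}} \;=\; \Delta S_{\text{organism}} + \Delta S_{\text{surroundings}} \;\geq\; 0, \qquad \Delta S_{\text{surroundings}} \approx \frac{Q_{\text{released}}}{T},

so a local decrease \Delta S_{\text{organism}} < 0 is permitted whenever the heat Q_{\text{released}} (together with high-entropy waste) exported to surroundings at temperature T is large enough to keep the total from decreasing.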
Multiple sources of energy were available for chemical reactions on the early Earth. Heat from geothermal processes is a standard energy source for chemistry. Other examples include sunlight, lightning, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. This has been confirmed by experiments and simulations.
Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important for carbon fixation. Carbon fixation by reaction of CO2 with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.
Chemiosmosis
In 1961, Peter Mitchell proposed chemiosmosis as a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in the mitochondria of eukaryotes, making it a likely candidate for early life. Mitochondria produce adenosine triphosphate (ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which the ATP synthase enzyme is embedded. The energy required to release strongly bound ATP has its origin in protons that move across the membrane. In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater, or, if life began on land, perhaps by meteoritic quinones that supported the development of chemiosmotic energy across lipid membranes.
The RNA world
The RNA world hypothesis describes an early Earth with self-replicating and catalytic RNA but no DNA or proteins. Many researchers concur that an RNA world must have preceded the DNA-based life that now dominates. However, RNA-based life may not have been the first to exist. Another model echoes Darwin's "warm little pond" with cycles of wetting and drying.
RNA is central to the translation process. Small RNAs can catalyze all the chemical group transfers and information transfers required for life. RNA both expresses and maintains genetic information in modern organisms, and the chemical components of RNA are easily synthesized under conditions approximating those of the early Earth, which were very different from those that prevail today. The structure of the ribosome has been called the "smoking gun", with a central core of RNA and no amino acid side chains within 18 Å of the active site that catalyzes peptide bond formation.
The concept of the RNA world was proposed in 1962 by Alexander Rich, and the term was coined by Walter Gilbert in 1986. There were initial difficulties in the explanation of the abiotic synthesis of the nucleotides cytosine and uracil. Subsequent research has shown possible routes of synthesis; for example, formamide produces all four ribonucleotides and other biological molecules when warmed in the presence of various terrestrial minerals.
RNA replicase can function as both code and catalyst for further RNA replication, i.e. it can be autocatalytic. Jack Szostak has shown that certain catalytic RNAs can join smaller RNA sequences together, creating the potential for self-replication. The RNA replication systems, which include two ribozymes that catalyze each other's synthesis, showed a doubling time of the product of about one hour, and were subject to natural selection under the experimental conditions. If such conditions were present on early Earth, then natural selection would favor the proliferation of such autocatalytic sets, to which further functionalities could be added. Self-assembly of RNA may occur spontaneously in hydrothermal vents. A preliminary form of tRNA could have assembled into such a replicator molecule.
Possible precursors to protein synthesis include the synthesis of short peptide cofactors or the self-catalysing duplication of RNA. It is likely that the ancestral ribosome was composed entirely of RNA, although some roles have since been taken over by proteins. Major remaining questions on this topic include identifying the selective force for the evolution of the ribosome and determining how the genetic code arose.
Eugene Koonin has argued that "no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system."
From RNA to directed protein synthesis
In line with the RNA world hypothesis, much of modern biology's templated protein biosynthesis is done by RNA molecules, namely tRNAs and the ribosome (consisting of both protein and rRNA components). The most central reaction of peptide bond synthesis is understood to be carried out by base catalysis by the 23S rRNA domain V. Experimental evidence has demonstrated successful di- and tripeptide synthesis with a system consisting of only aminoacyl phosphate adaptors and RNA guides, which could be a possible stepping stone between an RNA world and modern protein synthesis. Aminoacylation ribozymes that can charge tRNAs with their cognate amino acids have also been selected in in vitro experimentation. The authors also extensively mapped fitness landscapes within their selection, finding that chance emergence of active sequences was more important than sequence optimization.
Early functional peptides
The first proteins would have had to arise without a fully fledged system of protein biosynthesis. As discussed above, numerous mechanisms for the prebiotic synthesis of polypeptides exist. However, these random-sequence peptides would likely not have had biological function. Thus, significant study has gone into exploring how early functional proteins could have arisen from random sequences. First, some evidence on hydrolysis rates shows that abiotically plausible peptides likely contained significant "nearest-neighbor" biases. This could have had some effect on early protein sequence diversity. In other work by Anthony Keefe and Jack Szostak, mRNA display selection on a library of 6×10^12 80-mers was used to search for sequences with ATP binding activity. They concluded that approximately 1 in 10^11 random sequences had ATP binding function. While this is a single example of functional frequency in the random sequence space, the methodology can serve as a powerful simulation tool for understanding early protein evolution.
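A quick back-of-the-envelope calculation makes those two figures concrete. This sketch simply restates the numbers quoted above; the variable names are ours:

```python
# Rough check of the Keefe & Szostak figures quoted above.
library_size = 6e12          # random-sequence 80-mers in the mRNA display library
functional_fraction = 1e-11  # reported frequency of ATP-binding sequences

expected_binders = library_size * functional_fraction
print(f"Expected ATP binders in the library: {expected_binders:.0f}")
# ~60: even a function this rare should appear dozens of times
# in a library of several trillion random sequences.
```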
Phylogeny and LUCA
Starting with the work of Carl Woese from 1977, genomics studies have placed the last universal common ancestor (LUCA) of all modern life-forms between Bacteria and a clade formed by Archaea and Eukaryota in the phylogenetic tree of life. It lived over 4 Gya. A minority of studies have placed the LUCA in Bacteria, proposing that Archaea and Eukaryota are evolutionarily derived from within Eubacteria; Thomas Cavalier-Smith suggested in 2006 that the phenotypically diverse bacterial phylum Chloroflexota contained the LUCA.
In 2016, a set of 355 genes likely present in the LUCA was identified. A total of 6.1 million prokaryotic genes from Bacteria and Archaea were sequenced, identifying 355 protein clusters from among 286,514 protein clusters that were probably common to the LUCA. The results suggest that the LUCA was an anaerobic, thermophilic, nitrogen- and carbon-fixing organism with a Wood–Ljungdahl (reductive acetyl-CoA) pathway. Its cofactors suggest dependence upon an environment rich in hydrogen, carbon dioxide, iron, and transition metals. Its genetic material was probably DNA, requiring the 4-nucleotide genetic code, messenger RNA, transfer RNA, and ribosomes to translate the code into proteins such as enzymes. LUCA likely inhabited an anaerobic hydrothermal vent setting in a geochemically active environment. It was evidently already a complex organism, and must have had precursors; it was not the first living thing. The physiology of LUCA remains in dispute.
Leslie Orgel argued that early translation machinery for the genetic code would be susceptible to error catastrophe. Geoffrey Hoffmann however showed that such machinery can be stable in function against "Orgel's paradox". Metabolic reactions that have also been inferred in LUCA are the incomplete reverse Krebs cycle, gluconeogenesis, the pentose phosphate pathway, glycolysis, reductive amination, and transamination.
Suitable geological environments
A variety of geologic and environmental settings have been proposed for an origin of life. These theories are often in competition with one another as there are many differing views of prebiotic compound availability, geophysical setting, and early life characteristics. The first organism on Earth likely looked different from LUCA. Between the first appearance of life and where all modern phylogenies began branching, an unknown amount of time passed, with unknown gene transfers, extinctions, and evolutionary adaptation to various environmental niches. One major shift is believed to be from the RNA world to an RNA-DNA-protein world. Modern phylogenies provide more pertinent genetic evidence about LUCA than about its precursors.
The most popular hypotheses for settings for the origin of life are deep sea hydrothermal vents and surface bodies of water. Surface waters can be classified into hot springs, moderate temperature lakes and ponds, and cold settings.
Deep sea hydrothermal vents
Hot fluids
Early micro-fossils may have come from a hot world of gases such as methane, ammonia, carbon dioxide, and hydrogen sulfide, toxic to much current life. Analysis of the tree of life places thermophilic and hyperthermophilic bacteria and archaea closest to the root, suggesting that life may have evolved in a hot environment. The deep sea or alkaline hydrothermal vent theory posits that life began at submarine hydrothermal vents. William Martin and Michael Russell have suggested "that life evolved in structured iron monosulphide precipitates in a seepage site hydrothermal mound at a redox, pH, and temperature gradient between sulphide-rich hydrothermal fluid and iron(II)-containing waters of the Hadean ocean floor. The naturally arising, three-dimensional compartmentation observed within fossilized seepage-site metal sulphide precipitates indicates that these inorganic compartments were the precursors of cell walls and membranes found in free-living prokaryotes. The known capability of FeS and NiS to catalyze the synthesis of the acetyl-methylsulphide from carbon monoxide and methylsulphide, constituents of hydrothermal fluid, indicates that pre-biotic syntheses occurred at the inner surfaces of these metal-sulphide-walled compartments".
Such vents form where hydrogen-rich fluids emerge from below the sea floor as a result of the serpentinization of ultramafic olivine with seawater, meeting carbon dioxide-rich ocean water at a pH interface. The vents form a sustained chemical energy source derived from redox reactions, in which electron donors (molecular hydrogen) react with electron acceptors (carbon dioxide); see iron–sulfur world theory. These are exothermic reactions.
Chemiosmotic gradient
Russell demonstrated that alkaline vents created an abiogenic proton motive force chemiosmotic gradient, ideal for abiogenesis. The vents' microscopic compartments, with walls composed of iron-sulfur minerals such as mackinawite, "provide a natural means of concentrating organic molecules" and endowed these mineral cells with the catalytic properties envisaged by Günter Wächtershäuser. The movement of ions across the membrane depends on a combination of two factors:
Diffusion force caused by concentration gradient—all particles including ions tend to diffuse from higher concentration to lower.
Electrostatic force caused by electrical potential gradient—cations like protons H+ tend to diffuse down the electrical potential, anions in the opposite direction.
These two gradients taken together can be expressed as an electrochemical gradient, providing energy for abiogenic synthesis. The proton motive force can be described as the measure of the potential energy stored as a combination of proton and voltage gradients across a membrane (differences in proton concentration and electrical potential).
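Quantitatively, these two contributions are conventionally combined in the standard chemiosmotic expression for the proton motive force (sign conventions vary between texts):

```latex
% Proton motive force: electrical term plus chemical (pH) term
\Delta p = \Delta\psi - \frac{2.303\,R\,T}{F}\,\Delta\mathrm{pH}
% \Delta\psi: transmembrane electrical potential difference
% R: gas constant, T: absolute temperature, F: Faraday constant
% At 25 °C the factor 2.303RT/F is roughly 59 mV per pH unit.
```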
The surfaces of mineral particles inside deep-ocean hydrothermal vents have catalytic properties similar to those of enzymes and can create simple organic molecules, such as methanol (CH3OH) and formic, acetic, and pyruvic acids out of the dissolved CO2 in the water, if driven by an applied voltage or by reaction with H2 or H2S.
The research reported by Martin in 2016 supports the thesis that life arose at hydrothermal vents, that spontaneous chemistry in the Earth's crust, driven by rock–water interactions at thermodynamic disequilibrium, underpinned life's origin, and that the founding lineages of the archaea and bacteria were H2-dependent autotrophs that used CO2 as their terminal acceptor in energy metabolism. Martin suggests, based upon this evidence, that the LUCA "may have depended heavily on the geothermal energy of the vent to survive". Pores at deep sea hydrothermal vents are suggested to have been occupied by membrane-bound compartments which promoted biochemical reactions. Metabolic intermediates of the Krebs cycle, gluconeogenesis, amino acid biosynthetic pathways, glycolysis, and the pentose phosphate pathway, as well as sugars such as ribose and lipid precursors, can form non-enzymatically under conditions relevant to deep-sea alkaline hydrothermal vents.
If the deep marine hydrothermal setting was the site for the origin of life, then abiogenesis could have happened as early as 4.0-4.2 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from impacts and the then high levels of ultraviolet radiation from the sun. The available energy in hydrothermal vents is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live. Arguments against a hydrothermal origin of life state that hyperthermophily was a result of convergent evolution in bacteria and archaea, and that a mesophilic environment would have been more likely. This hypothesis, suggested in 1999 by Galtier, was proposed one year before the discovery of the Lost City Hydrothermal Field, where white-smoker hydrothermal vents average ~45-90 °C. Moderate temperatures and alkaline seawater at Lost City are now the favoured hydrothermal vent setting in contrast to acidic, high temperature (~350 °C) black-smokers.
Arguments against a vent setting
Production of prebiotic organic compounds at hydrothermal vents is estimated to be 1×10^8 kg yr^−1. While a large amount of key prebiotic compounds, such as methane, are found at vents, they occur in far lower concentrations than estimated for a Miller-Urey experiment environment. In the case of methane, the production rate at vents is around 2-4 orders of magnitude lower than the amounts predicted for a Miller-Urey surface atmosphere.
Other arguments against an oceanic vent setting for the origin of life include the inability to concentrate prebiotic materials because of strong dilution by seawater. This open system cycles compounds through the minerals that make up the vents, leaving little residence time for them to accumulate. All modern cells rely on phosphates and potassium for nucleotide backbone and protein formation respectively, making it likely that the first life forms also shared these requirements. These elements were not available in high quantities in the Archaean oceans, as both primarily come from the weathering of continental rocks on land, far from vent settings. Submarine hydrothermal vents are also not conducive to the condensation reactions needed for polymerisation to form macromolecules.
An older argument was that key polymers were encapsulated in vesicles after condensation, which supposedly would not happen in saltwater because of the high concentrations of ions. However, while it is true that salinity inhibits vesicle formation from low-diversity mixtures of fatty acids, vesicle formation from a broader, more realistic mix of fatty-acid and 1-alkanol species is more resilient.
Surface bodies of water
Surface bodies of water provide environments able to dry out and be rewetted. Continued wet-dry cycles allow the concentration of prebiotic compounds and the condensation reactions needed to polymerise macromolecules. Moreover, lakes and ponds on land allow for detrital input from the weathering of continental rocks, which contain apatite, the most common source of the phosphates needed for nucleotide backbones. The amount of exposed continental crust in the Hadean is unknown, but models of early ocean depths and rates of ocean island and continental crust growth make it plausible that there was exposed land. Another line of evidence for a surface start to life is the requirement of UV light for organism function. UV is necessary for the formation of the U+C nucleotide base pair by partial hydrolysis and nucleobase loss. At the same time, UV can be harmful and sterilising to life, especially for simple early lifeforms with little ability to repair radiation damage. Radiation levels from the young Sun were likely greater, and, with no ozone layer, harmful shortwave UV rays would have reached the surface of Earth. For life to begin, a shielded environment with influx from UV-exposed sources is necessary to both benefit from and be protected against UV. Shielding under ice, liquid water, mineral surfaces (e.g. clay) or regolith is possible in a range of surface water settings. While deep sea vents may receive input from the raining down of surface-exposed materials, the likelihood of concentration there is lessened by the ocean's open system.
Hot springs
The deepest-branching lineages in phylogenies are thermophilic or hyperthermophilic, making it possible that the last universal common ancestor (LUCA) and preceding lifeforms were similarly thermophilic. Hot springs are formed by the heating of groundwater by geothermal activity. This intersection allows for influxes of material from deep-penetrating waters and from surface runoff that transports eroded continental sediments. Interconnected groundwater systems create a mechanism for the distribution of life over a wider area.
Mulkidjanian and co-authors argue that marine environments did not provide the ionic balance and composition universally found in cells, or the ions required by essential proteins and ribozymes, especially with respect to a high K+/Na+ ratio and Mn2+, Zn2+ and phosphate concentrations. They argue that the only environments on Earth that mimic the needed conditions are hot springs similar to those at Kamchatka. Mineral deposits in these environments under an anoxic atmosphere would have had suitable pH (while current pools in an oxygenated atmosphere would not), would have contained precipitates of photocatalytic sulfide minerals that absorb harmful ultraviolet radiation, and would have undergone wet-dry cycles concentrating substrate solutions to levels amenable to the spontaneous formation of biopolymers, created both by chemical reactions in the hydrothermal environment and by exposure to UV light during transport from vents to adjacent pools. The hypothesized pre-biotic environments are similar to hydrothermal vents, with additional components that help explain peculiarities of the LUCA.
A phylogenomic and geochemical analysis of proteins plausibly traced to the LUCA shows that the ionic composition of its intracellular fluid is identical to that of hot springs. The LUCA likely was dependent upon synthesized organic matter for its growth. Experiments show that RNA-like polymers can be synthesized in wet-dry cycling and UV light exposure. These polymers were encapsulated in vesicles after condensation. Potential sources of organics at hot springs might have been transport by interplanetary dust particles, extraterrestrial projectiles, or atmospheric or geochemical synthesis. Hot springs could have been abundant in volcanic landmasses during the Hadean.
Temperate surface bodies of water
The hypothesis of a mesophilic start in surface bodies of water evolved from Darwin's concept of a 'warm little pond' and the Oparin-Haldane hypothesis. Freshwater bodies under temperate climates can accumulate prebiotic materials while providing suitable environmental conditions conducive to simple life forms. The climate during the Archaean is still a highly debated topic, as there is uncertainty about what continents, oceans, and the atmosphere looked like then. Atmospheric reconstructions of the Archaean from geochemical proxies and models indicate that sufficient greenhouse gases were present to maintain surface temperatures between 0-40 °C. Under this assumption, there is a greater abundance of moderate-temperature niches in which life could have begun.
Strong lines of evidence for mesophily from biomolecular studies include Galtier's G+C nucleotide thermometer. G+C pairs are more abundant in thermophiles due to the added stability of an additional hydrogen bond not present between A+T nucleotides. rRNA sequencing of a diverse range of modern lifeforms shows that LUCA's reconstructed G+C content was likely representative of moderate temperatures.
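The quantity behind this "thermometer" is simply the G+C fraction of a sequence. A minimal sketch follows; the fragment is an invented example, not a real rRNA sequence:

```python
def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA or RNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

fragment = "AUGCGGCCAUAGCGCGUA"  # hypothetical rRNA fragment
print(f"G+C content: {gc_content(fragment):.2f}")
# Higher G+C content correlates with growth at higher temperatures.
```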
Although thermophily and hyperthermophily are widespread among modern lineages, it is possible that their prevalence today is a product of convergent evolution and horizontal gene transfer rather than an inherited trait from LUCA. The reverse gyrase topoisomerase is found exclusively in thermophiles and hyperthermophiles, as it allows for the positive supercoiling that stabilizes DNA at high temperatures. The reverse gyrase enzyme requires ATP to function, and both are complex biomolecules. If the origin of life is hypothesised to involve a simple organism that had not yet evolved a membrane, let alone ATP, the existence of reverse gyrase at that stage would be improbable. Moreover, phylogenetic studies show that reverse gyrase had an archaeal origin and was transferred to bacteria by horizontal gene transfer, implying that reverse gyrase was not present in the LUCA.
Icy surface bodies of water
Cold-start origin of life theories stem from the idea that there may have been regions on the early Earth cold enough for large ice cover to form. Stellar evolution models predict that the Sun's luminosity was ~25% weaker than it is today. Feulner states that although this significant decrease in solar energy would have formed an icy planet, there is strong evidence that liquid water was present, possibly maintained by a greenhouse effect. This would create an early Earth with both liquid oceans and icy poles.
Melts from ice sheets or glaciers create freshwater pools, another niche capable of experiencing wet-dry cycles. While pools on the surface would be exposed to intense UV radiation, bodies of water within and under ice are sufficiently shielded while remaining connected to UV-exposed areas through cracks in the ice. Impact melting of ice would allow freshwater to be paired with meteoritic input, a popular vessel for prebiotic components. Near-seawater levels of sodium chloride are found to destabilize fatty acid membrane self-assembly, making freshwater settings appealing for early membranous life.
Icy environments would trade the faster reaction rates that occur in warm environments for increased stability and accumulation of larger polymers. Experiments simulating Europa-like conditions of ~−20 °C have synthesised amino acids and adenine, showing that Miller-Urey type syntheses can still occur at cold temperatures. In an RNA world, the ribozyme would have had even more functions than in a later DNA-RNA-protein world. For RNA to function, it must be able to fold, a process hindered by temperatures above 30 °C. While RNA folding in psychrophilic organisms is slower, the process is more successful as hydrolysis is also slower. Shorter nucleotide polymers would be less affected by higher temperatures.
Inside the continental crust
An alternative geological environment has been proposed by the geologist Ulrich Schreiber and the physical chemist Christian Mayer: the continental crust. Tectonic fault zones could present a stable and well-protected environment for long-term prebiotic evolution. Inside these systems of cracks and cavities, water and carbon dioxide constitute the bulk solvents. Their phase state depends on the local temperature and pressure conditions and can vary between liquid, gaseous and supercritical. When two separate phases form (e.g., liquid water and supercritical carbon dioxide at depths of little more than 1 km), the system provides optimal conditions for phase-transfer reactions. Concurrently, the tectonic fault zones are supplied with a multitude of inorganic reactants (e.g., carbon monoxide, hydrogen, ammonia, hydrogen cyanide, nitrogen, and even phosphate from dissolved apatite) and simple organic molecules formed by hydrothermal chemistry (e.g. amino acids, long-chain amines, fatty acids, long-chain aldehydes). Finally, the abundant mineral surfaces provide a rich choice of catalytic activity.
An especially interesting section of the tectonic fault zones lies at a depth of approximately 1000 m. For the carbon dioxide component of the bulk solvent, this depth provides temperature and pressure conditions near the phase transition point between the supercritical and the gaseous state. This leads to a natural accumulation zone for lipophilic organic molecules, which dissolve well in supercritical CO2 but not in its gaseous state, causing their local precipitation. Periodic pressure variations, such as those caused by geyser activity or tidal influences, result in periodic phase transitions, keeping the local reaction environment in a constant non-equilibrium state. In the presence of amphiphilic compounds (such as the long-chain amines and fatty acids mentioned above), successive generations of vesicles form and are constantly and efficiently selected for their stability. The resulting structures could provide hydrothermal vents as well as hot springs with raw material for further development.
Homochirality
Homochirality is the geometric uniformity of materials composed of chiral (non-mirror-symmetric) units. Living organisms use molecules that have the same chirality (handedness): with almost no exceptions, amino acids are left-handed while nucleotides and sugars are right-handed. Chiral molecules can be synthesized, but in the absence of a chiral source or a chiral catalyst, they form a 50/50 (racemic) mixture of both forms. Known mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction; asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; statistical fluctuations during racemic synthesis; and spontaneous symmetry breaking.
Once established, chirality would be selected for. A small bias (enantiomeric excess) in the population can be amplified into a large one by asymmetric autocatalysis, such as in the Soai reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalyzing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.
Homochirality may have started in outer space, as on the Murchison meteorite the amino acid L-alanine (left-handed) is more than twice as frequent as its D (right-handed) form, and L-glutamic acid is more than three times as abundant as its D counterpart. Amino acids from meteorites show a left-handed bias, whereas sugars show a predominantly right-handed bias: this is the same preference found in living organisms, suggesting an abiogenic origin of these compounds.
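Such biases are conventionally quantified as an enantiomeric excess (ee). Taking the alanine ratio above as exactly 2:1 for illustration:

```latex
% Enantiomeric excess of a mixture of L and D enantiomers:
ee = \frac{[\mathrm{L}] - [\mathrm{D}]}{[\mathrm{L}] + [\mathrm{D}]} \times 100\%
% For L-alanine twice as abundant as D-alanine ([L]:[D] = 2:1):
ee = \frac{2 - 1}{2 + 1} \times 100\% \approx 33\%
```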
In a 2010 experiment by Robert Root-Bernstein, "two D-RNA-oligonucleotides having inverse base sequences (D-CGUA and D-AUGC) and their corresponding L-RNA-oligonucleotides (L-CGUA and L-AUGC) were synthesized and their affinity determined for Gly and eleven pairs of L- and D-amino acids". The results suggest that homochirality, including codon directionality, might have "emerged as a function of the origin of the genetic code".
See also
Autopoiesis
Manganese metallic nodules
Notes
References
Sources
International Symposium on the Origin of Life on the Earth (held at Moscow, 19–24 August 1957)
Proceedings of the SPIE held at San Jose, California, 22–24 January 2001
Proceedings of the SPIE held at San Diego, California, 31 July–2 August 2005
External links
Making headway with the mysteries of life's origins – Adam Mann (PNAS; 14 April 2021)
Exploring Life's Origins a virtual exhibit at the Museum of Science (Boston)
How life began on Earth – Marcia Malory (Earth Facts; 2015)
The Origins of Life – Richard Dawkins et al. (BBC Radio; 2004)
Life in the Universe – Essay by Stephen Hawking (1996)
Astrobiology
Evolutionarily significant biological phenomena
Evolutionary biology
Global events
Natural events
Prebiotic chemistry
Ecosystem diversity

Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment.
Ecosystem diversity addresses the combined characteristics of biotic properties, which are living organisms (biodiversity), and abiotic properties, such as nonliving things like water or soil (geodiversity). It is the variation in the ecosystems found in a region, or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity.
Impact
Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via photosynthesis among the plants living in a habitat. Diversity in aquatic environments helps in the purification of water by plant varieties for use by humans. Diversity increases the range of plant varieties serving as good sources of medicines and herbs for human use. A lack of diversity in the ecosystem produces the opposite result.
Examples
Some examples of ecosystems that are rich in diversity are:
Deserts
Forests
Large marine ecosystems
Marine ecosystems
Old-growth forests
Rainforests
Tundra
Coral reefs
Ecosystem diversity as a result of evolutionary pressure
Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, rainforests, coral reefs and deciduous forests all form as a result of evolutionary pressures. Even seemingly small evolutionary interactions can have large impacts on the diversity of ecosystems throughout the world. One of the best-studied cases is the honeybee's interaction with angiosperms on every continent in the world except Antarctica.
In 2010, Robert Brodschneider and Karl Crailsheim conducted a study on health and nutrition in honeybee colonies. The study focused on overall colony health, adult nutrition, and larva nutrition as a function of the effects of pesticides, monocultures and genetically modified crops, to see if these anthropogenically created problems have an effect on pollination levels. The results indicate that human activity does play a role in the destruction of the fitness of the bee colony. The extinction or near extinction of these pollinators would leave many plants that feed humans on a wide scale needing alternative pollination methods. Crop-pollinating insects are worth $14.6 billion annually to the US economy, and hand pollination is estimated to cost $5,715-$7,135 more per hectare than insect pollination. Not only would there be a cost increase but also a decrease in colony fitness, leading to a decrease in genetic diversity, which studies have shown has a direct link to the long-term survival of honeybee colonies.
According to one study, there are over 50 plants that are dependent on bee pollination, many of these being key staples for feeding the world. Another study states that a lack of plant diversity will lead to a decline in bee population fitness, and that low bee colony fitness in turn affects the fitness of plant ecosystem diversity. By allowing for bee pollination and working to reduce anthropogenically harmful footprints, bee pollination can increase the genetic diversity of flora and create a unique, highly diverse ecosystem that provides habitats and niches for many other organisms to thrive in. Because bees exert pollination pressure on six of the seven continents, their impact on ecosystem diversity is undeniable. The pollen collected by bees is harvested and used as an energy source for wintertime; this act of collecting pollen from local plants also has the more important effect of facilitating the movement of genes between organisms.
The new evolutionary pressures that are largely anthropogenically catalyzed can potentially cause widespread collapse of ecosystems. In the north Atlantic Sea, a study was conducted that followed the effects of human interaction on surrounding ocean habitats. The researchers found no habitat or trophic level that was not in some way negatively affected by human interaction, and that much of the diversity of life was being stunted as a result.
See also
Bioregion
Disparity (ecology)
Ecology
Evolutionary biology
Genetic diversity
Nature
Natural environment
Species diversity
Sustainable development
References
Biodiversity
Systems ecology
Agroecology

Agroecology is an academic discipline that studies ecological processes applied to agricultural production systems. Bringing ecological principles to bear can suggest new management approaches in agroecosystems. The term can refer to a science, a movement, or an agricultural practice. Agroecologists study a variety of agroecosystems. The field of agroecology is not associated with any one particular method of farming, whether organic, regenerative, integrated, or industrial, intensive or extensive, although some use the name specifically for alternative agriculture.
Definition
Agroecology is defined by the OECD as "the study of the relation of agricultural crops and environment." Dalgaard et al. refer to agroecology as the study of the interactions between plants, animals, humans and the environment within agricultural systems. Francis et al. also use the definition in the same way, but thought it should be restricted to growing food.
Agroecology is a holistic approach that seeks to reconcile agriculture and local communities with natural processes for the common benefit of nature and livelihoods.
Agroecology is inherently multidisciplinary, including sciences such as agronomy, ecology, environmental science, sociology, economics, history and others. Agroecology uses different sciences to understand elements of ecosystems such as soil properties and plant-insect interactions, as well as using social sciences to understand the effects of farming practices on rural communities, economic constraints to developing new production methods, or cultural factors determining farming practices. The system properties of agroecosystems studied may include: productivity, stability, sustainability and equitability. Agroecology is not limited to any one scale; it can range from an individual gene to an entire population, or from a single field in a given farm to global systems.
Wojtkowski differentiates the ecology of natural ecosystems from agroecology inasmuch as in natural ecosystems there is no role for economics, whereas in agroecology, focusing as it does on organisms within planned and managed environments, it is human activities, and hence economics, that are the primary governing forces that ultimately control the field. Wojtkowski discusses the application of agroecology in agriculture, forestry and agroforestry in his 2002 book.
Varieties
Buttel identifies four varieties of agroecology in a 2003 conference paper. The main varieties he calls ecosystem agroecology, which he claims derives from the ecosystem ecology of Howard T. Odum and focuses less on rural sociology, and agronomic agroecology, which he identifies as oriented towards developing knowledge and practices to make agriculture more sustainable. The third, long-standing variety Buttel calls ecological political economy, which he defines as critiquing the politics and economy of agriculture and weighted toward radical politics. The smallest and newest variety Buttel coins agro-population ecology, which he says is very similar to the first, but is derived from the science of ecology primarily based on the more modern theories of population ecology, such as the population dynamics of constituent species, their relationships to climate and biogeochemistry, and the role of genetics.
Dalgaard et al. identify different points of view: what they call early "integrative" agroecology, such as in the investigations of Henry Gleason or Frederic Clements. For the second variety, they cite Hecht (1995) as coining "hard" agroecology, which they identify as more reactive to environmental politics but rooted in measurable units and technology. They themselves name a "soft" agroecology, which they define as trying to measure agroecology in terms of "soft capital" such as culture or experience.
The term agroecology may be used to refer to a science, a movement, or a practice. Using the name as a movement became more common in the 1990s, especially in the Americas. Miguel Altieri, whom Buttel groups with the "political" agroecologists, has published prolifically in this sense. He has applied agroecology to sustainable agriculture, alternative agriculture and traditional knowledge.
History
Overview
The history of agroecology depends on whether you are referring to it as a body of thought or a method of practice, as many indigenous cultures around the world historically used and currently use practices we would now consider utilizing knowledge of agroecology. Examples include Maori, Nahuatl, and many other indigenous peoples.
The Mexica people that inhabited Tenochtitlan pre-colonization of the Americas used a process called chinampas that in many ways mirrors the use of composting in sustainable agriculture today. The use of agroecological practices such as nutrient cycling and intercropping occurs across hundreds of years and many different cultures. Indigenous peoples also currently make up a large proportion of people using agroecological practices, and those involved in the movement to move more farming into an agroecological paradigm.
Pre-WWII academic thought
According to Gliessman and Francis et al., agronomy and ecology were first linked with the study of crop ecology by Klages in 1928. This work is a study of where crops can best be grown.
Wezel et al. say the first mention of the term agroecology was in 1928, with the publication of the term by Basil Bensin. Dalgaard et al. claim the German zoologist Friederichs was the first to use the name in 1930 in his book on the zoology of agriculture and forestry, followed by American crop physiologist Hansen in 1939, both using the word for the application of ecology within agriculture.
Post-WWII academic thought
Tischler's 1965 book Agrarökologie may be the first to be titled 'agroecology'. He analyzed the different components (plants, animals, soils and climate) and their interactions within an agroecosystem as well as the impact of human agricultural management on these components.
Gliessman describes how post-WWII ecologists gave more focus to experiments in the natural environment while agronomists dedicated their attention to cultivated systems in agriculture; in the 1970s, as agronomists saw the value of ecology and ecologists began to use agricultural systems as study plots, studies in agroecology grew more rapidly. More books and articles using the concept of agroecosystems and the word agroecology started to appear in the 1970s. According to Dalgaard et al., it probably was the concept of "process ecology" as studied by Arthur Tansley in the 1930s which inspired Harper's 1974 concept of agroecosystems, which they consider the foundation of modern agroecology. Dalgaard et al. claim Frederic Clements's investigations on ecology using social sciences, community ecology and a "landscape perspective" are agroecology, as are Henry Gleason's investigations of the population ecology of plants using different scientific disciplines. Ethnobotanist Efraim Hernandez X.'s work on traditional knowledge in Mexico in the 1970s led to new education programs in agroecology.
Works such as Silent Spring and The Limits to Growth made the public aware of the environmental costs of agricultural production, which prompted more research into sustainability starting in the 1980s. The view that the socio-economic context is fundamental was used in the 1982 article Agroecologia del Tropico Americano by Montaldo, who argues that this context cannot be separated from agriculture when designing agricultural practices. In 1985 Miguel Altieri studied how the consolidation of farms and cropping systems impacts pest populations, and Gliessman how socio-economic, technological, and ecological components gave rise to producers' choices of food production systems.
In 1995, Edens et al. in Sustainable Agriculture and Integrated Farming Systems considered the economics of systems, ecological impacts, and ethics and values in agriculture.
Social movements
Several social movements have adopted agroecology as part of their larger organizing strategy. Groups like La Via Campesina have used agroecology as a method for achieving food sovereignty. Agroecology has also been utilized by farmers to resist global agricultural development patterns associated with the green revolution.
By region
Latin America
Africa
Garí wrote two papers for the FAO in the early 2000s about using an agroecological approach, which he called "agrobiodiversity", to empower farmers to cope with the impacts of AIDS on rural areas in Africa.
In 2011, the first encounter of agroecology trainers took place in Zimbabwe and issued the Shashe Declaration.
Europe
The European Commission supports the use of sustainable practices, such as precision agriculture, organic farming, agroecology, agroforestry and stricter animal welfare standards through the Green Deal and the Farm to Fork Strategy.
Debate
Within academic research areas that focus on topics related to agriculture or ecology, such as agronomy, veterinarian science, environmental science, and others, there is much debate regarding what model of agriculture or agroecology should be supported through policy. Agricultural departments of different countries support agroecology to varying degrees, with the UN perhaps its biggest proponent.
See also
References
Further reading
Buttel, F.H. and M.E. Gertler 1982. Agricultural structure, agricultural policy and environmental quality. Agriculture and Environment 7: 101–119.
Carrol, C. R., J.H. Vandermeer and P.M. Rosset. 1990. Agroecology. McGraw Hill Publishing Company, New York.
Paoletti, M.G., B.R. Stinner, and G.G. Lorenzoni, ed. Agricultural Ecology and Environment. New York: Elsevier Science Publisher B.V., 1989.
Robertson, Philip, and Scott M Swinton. "Reconciling agricultural productivity and environmental integrity: a grand challenge for agriculture." Frontiers in Ecology and the Environment 3.1 (2005): 38–46.
Monbiot, George. 2022. "Regenesis: Feeding the World without Devouring the Planet."
Advances in Agroecology Book Series
Soil Organic Matter in Sustainable Agriculture (Advances in Agroecology) by Fred Magdoff and Ray R. Weil (Hardcover - May 27, 2004)
Agroforestry in Sustainable Agricultural Systems (Advances in Agroecology) by Louise E. Buck, James P. Lassoie, and Erick C.M. Fernandes (Hardcover - Oct 1, 1998)
Agroecosystem Sustainability: Developing Practical Strategies (Advances in Agroecology) by Stephen R. Gliessman (Hardcover - Sep 25, 2000)
Interactions Between Agroecosystems and Rural Communities (Advances in Agroecology) by Cornelia Flora (Hardcover - Feb 5, 2001)
Landscape Ecology in Agroecosystems Management (Advances in Agroecology) by Lech Ryszkowski (Hardcover - Dec 27, 2001)
Integrated Assessment of Health and Sustainability of Agroecosystems (Advances in Agroecology) by Thomas Gitau, Margaret W. Gitau, David Waltner-Toews, and Clive A. Edwards June 2008 | Hardback: 978-1-4200-7277-8 (CRC Press)
Multi-Scale Integrated Analysis of Agroecosystems (Advances in Agroecology) by Mario Giampietro 2003 | Hardback: 978-0-8493-1067-6 (CRC Press)
Soil Tillage in Agroecosystems (Advances in Agroecology) edited by Adel El Titi 2002 | Hardback: 978-0-8493-1228-1 (CRC Press)
Tropical Agroecosystems (Advances in Agroecology) edited by John H. Vandermeer 2002 | Hardback: 978-0-8493-1581-7 (CRC Press)
Structure and Function in Agroecosystem Design and Management (Advances in Agroecology) edited by Masae Shiyomi, Hiroshi Koizumi 2001 | Hardback: 978-0-8493-0904-5 (CRC Press)
Biodiversity in Agroecosystems (Advances in Agroecology) edited by Wanda W. Collins, Calvin O. Qualset 1998 | Hardback: 978-1-56670-290-4 (CRC Press)
Sustainable Agroecosystem Management: Integrating Ecology, Economics and Society. (Advances in Agroecology) edited by Patrick J. Bohlen and Gar House 2009 | Hardback: 978-1-4200-5214-5 (CRC Press)
External links
Topic
Agroecology
Agroecology by Project Regeneration
International Agroecology Action Network
Spain
The 10 elements of Agroecology
Organisations
Agroecology Europe - A European association for Agroecology
Agroecology Map
One Million Voices of Agroecology
Courses
University of Wisconsin–Madison
Montpellier, France
University of Illinois at Urbana-Champaign
European Master Agroecology
Norwegian University of Life Sciences
UC Santa Cruz Center for Agroecology & Sustainable Food Systems
Sustainable agriculture
Agronomy
Agriculture
Agricultural soil science
Environmental social science
Organic farming
Habitat management equipment and methods
Sustainable food system
Environmental conservation
Food chain

A food chain is a linear network of links in a food web, often starting with an autotroph (such as grass or algae), also called a producer, and typically ending at an apex predator (such as grizzly bears or killer whales), detritivore (such as earthworms and woodlice), or decomposer (such as fungi or bacteria). It is not the same as a food web. A food chain depicts relations between species based on what they consume for energy in trophic levels, and they are most commonly quantified in length: the number of links between a trophic consumer and the base of the chain.
Food chain studies play an important role in many biological studies.
Food chain stability is very important for the survival of most species. The removal of even a single element from a food chain can result in the extinction of a species or an immense decrease in its chances of survival. Many food chains and food webs contain a keystone species, a species that has a large impact on the surrounding environment and that can directly affect the food chain. If a keystone species is removed, it can set the entire food chain off balance.
The efficiency of a food chain depends on the energy first consumed by the primary producers. This energy then moves through the trophic levels.
History
Food chains were first discussed by al-Jahiz, a 9th-century Arab philosopher. The modern concepts of food chains and food webs were introduced by Charles Elton.
Food chain vs. food web
A food chain differs from a food web in that it follows a direct, linear pathway of consumption and energy transfer. Natural interconnections between food chains make up a food web, which is non-linear and depicts interconnecting pathways of consumption and energy transfer.
Trophic levels
Food chain models typically predict that communities are controlled by predators at the top and plants (autotrophs or producers) at the bottom.
Thus, the foundation of the food chain typically consists of primary producers. Primary producers, or autotrophs, utilize energy derived from either sunlight or inorganic chemical compounds to create complex organic compounds, such as starch, for energy. Because the sun's light is necessary for photosynthesis, most life could not exist if the sun disappeared. Even so, it has recently been discovered that there are some forms of life, chemotrophs, that appear to gain all their metabolic energy from chemosynthesis driven by hydrothermal vents, thus showing that some life may not require solar energy to thrive. Chemosynthetic bacteria and archaea use hydrogen sulfide and methane from hydrothermal vents and cold seeps as an energy source (just as plants use sunlight) to produce carbohydrates; they form the base of the food chain in regions with little to no sunlight. Regardless of where the energy is obtained, a species that produces its own energy lies at the base of the food chain model, and is a critically important part of an ecosystem.
Higher trophic levels cannot produce their own energy and so must consume producers or other life that itself consumes producers. The higher trophic levels contain consumers (secondary consumers, tertiary consumers, etc.). Consumers are organisms that eat other organisms. All organisms in a food chain, except the first organism, are consumers. Secondary consumers eat and obtain energy from primary consumers, tertiary consumers eat and obtain energy from secondary consumers, and so on.
At the highest trophic level is typically an apex predator; a consumer with no natural predators in the food chain model.
When any trophic level dies, detritivores and decomposers consume their organic material for energy and expel nutrients into the environment in their waste. Decomposers and detritivores break down the organic compounds into simple nutrients that are returned to the soil. These are the simple nutrients that plants require to create organic compounds. It is estimated that there are more than 100,000 different decomposers in existence.
Models of trophic levels also often model energy transfer between trophic levels. Primary consumers get energy from the producer and pass it to the secondary and tertiary consumers.
Studies
Food chains are vital in ecotoxicology studies, which trace the pathways and biomagnification of environmental contaminants. It is also necessary to consider interactions amongst different trophic levels to predict community dynamics; food chains are often the base level for theory development of trophic levels and community/ecosystem investigations.
Length
The length of a food chain is a continuous variable providing a measure of the passage of energy and an index of ecological structure that increases through the linkages from the lowest to the highest trophic (feeding) levels.
Food chains are often used in ecological modeling (such as a three-species food chain). They are simplified abstractions of real food webs, but complex in their dynamics and mathematical implications.
In its simplest form, the length of a chain is the number of links between a trophic consumer and the base of the web. The mean chain length of an entire web is the arithmetic average of the lengths of all chains in the food web. The food chain is an energy source diagram. The food chain begins with a producer, which is eaten by a primary consumer. The primary consumer may be eaten by a secondary consumer, which in turn may be consumed by a tertiary consumer. The tertiary consumers may sometimes become prey to the top predators known as the quaternary consumers. For example, a food chain might start with a green plant as the producer, which is eaten by a snail, the primary consumer. The snail might then be the prey of a secondary consumer such as a frog, which itself may be eaten by a tertiary consumer such as a snake, which in turn may be consumed by an eagle. This simple view of a food chain with fixed trophic levels within a species (species A is eaten by species B, B is eaten by C, and so on) is often contrasted with the real situation, in which the juveniles of a species belong to a lower trophic level than the adults, a situation more often seen in aquatic and amphibious environments, e.g., in insects and fishes. This complexity was termed metaphoetesis by G. E. Hutchinson in 1959.
Ecologists have formulated and tested hypotheses regarding the nature of ecological patterns associated with food chain length, such as length increasing with ecosystem volume, limited by the reduction of energy at each successive level, or reflecting habitat type.
Food chain length is important because the amount of energy transferred decreases as trophic level increases; generally only ten percent of the total energy at one trophic level is passed to the next, as the remainder is used in metabolic processes. There are usually no more than five trophic levels in a food chain. Humans can receive more energy by going back a level in the chain and consuming the food before it, for example getting more energy per pound from consuming a salad than from an animal that ate lettuce.
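The arithmetic behind the ten-percent figure can be sketched directly; the starting energy below is an arbitrary illustrative number:

```python
# Energy available at successive trophic levels under the
# ten-percent rule of thumb described above.
energy = 10_000.0           # arbitrary units fixed by the producers
transfer_efficiency = 0.10  # fraction passed on to the next level

for level in ["producer", "primary consumer", "secondary consumer",
              "tertiary consumer", "quaternary consumer"]:
    print(f"{level:>20}: {energy:8.1f}")
    energy *= transfer_efficiency
# After four transfers only 0.01% of the original energy remains,
# one reason chains rarely exceed about five trophic levels.
```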
Keystone species
A keystone species is a singular species within an ecosystem that other species within the same ecosystem, or the entire ecosystem itself, rely upon. Keystone species are so vital for an ecosystem that without their presence, the ecosystem could transform or stop existing entirely.
One way keystone species impact an ecosystem is through their presence in an ecosystem's food web and, by extension, a food chain within said ecosystem. Sea otters, a keystone species in Pacific coastal regions, prey on sea urchins. Without the presence of sea otters, sea urchins practice destructive grazing on kelp populations which contributes to declines in coastal ecosystems within the northern pacific regions. The presence of sea otters controls sea urchin populations and helps maintain kelp forests, which are vital for other species within the ecosystem.
See also
Heterotroph
Lithotroph
Ecological pyramid
Predator-prey interaction
References
Resource

Resource refers to all the materials available in our environment that are technologically accessible, economically feasible and culturally sustainable, and that help us to satisfy our needs and wants. Resources can broadly be classified according to their availability as renewable or non-renewable. An item may become a resource with technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a resource is anything obtained from the environment to satisfy human needs and wants.
The concept of resources has been developed across many established areas of work, in economics, biology and ecology, computer science, management, and human resources for example - linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management.
The concept of resources can also be tied to the direction of leadership over resources; this may include human resources issues, for which leaders are responsible in managing, supporting, or directing those matters and the resulting necessary actions. Examples include professional groups, innovative leaders, and technical experts in archiving, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administration.
Definition of size asymmetry
Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource).
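One common way to place these cases on a single axis, following formulations in the size-asymmetry literature (e.g. Schwinning and Weiner), is to let each individual's share of the total resource scale with a power of its biomass; the symbols here are illustrative:

```latex
% Share r_i of total resource R for individual i with biomass B_i:
r_i = R \, \frac{B_i^{\theta}}{\sum_j B_j^{\theta}}
% \theta = 0: complete symmetry (equal shares regardless of size)
% \theta = 1: perfect size symmetry (equal resource per unit biomass)
% \theta \to \infty: complete size asymmetry (the largest takes all)
```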
Economic versus biological
There are three fundamental differences between economic versus ecological views: 1) the economic resource definition is human-centered (anthropocentric) and the biological or ecological resource definition is nature-centered (biocentric or ecocentric); 2) the economic view includes desire along with necessity, whereas the biological view is about basic biological needs; and 3) economic systems are based on markets of currency exchanged for goods and services, whereas biological systems are based on natural processes of growth, maintenance, and reproduction.
Computer resources
A computer resource is any physical or virtual component of limited availability within a computer or information management system. Computer resources include means for input, processing, output, communication, and storage.
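As a small illustration (a sketch using only Python's standard library; the quantities printed will of course differ from machine to machine), a program can query some of these limited resources directly:

```python
import os
import shutil

# Query a few limited resources on the machine running this script.
print("Logical CPUs:", os.cpu_count())               # processing resource
usage = shutil.disk_usage(os.getcwd())               # storage resource
print(f"Disk: {usage.free / 2**30:.1f} GiB free of {usage.total / 2**30:.1f} GiB")
```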
Natural
Natural resources are derived from the environment. Many natural resources are essential for human survival, while others are used to satisfy human desire. Conservation is the management of natural resources with the goal of sustainability. Natural resources may be further classified in different ways.
Resources can be categorized based on origin:
Abiotic resources comprise non-living things (e.g., land, water, air, and minerals such as gold, iron, copper, silver).
Biotic resources are obtained from the biosphere. Forests and their products, animals, birds and their products, fish and other marine organisms are important examples. Minerals such as coal and petroleum are sometimes included in this category because they were formed from fossilized organic matter, over long periods.
Natural resources are also categorized based on the stage of development:
Potential resources are known to exist and may be used in the future. For example, petroleum may exist in many parts of India and Kuwait that have sedimentary rocks, but until the time it is actually drilled out and put into use, it remains a potential resource.
Actual resources are those that have been surveyed, their quantity and quality determined, and are being used in present times. For example, petroleum and natural gas are actively being obtained from the Mumbai High Fields. The development of an actual resource, such as wood processing, depends on the technology available and the cost involved. That part of the actual resource that can be developed profitably with the available technology is known as a reserve resource, while the part that cannot be developed profitably due to a lack of technology is known as a stock resource.
Natural resources can be categorized based on renewability:
Non-renewable resources are formed over very long geological periods. Minerals and fossil fuels are included in this category. Since their rate of formation is extremely slow, they cannot be replenished once they are depleted. Some non-renewable resources, such as metals, can be recycled and reused, whereas others, such as petroleum and natural gas, cannot; both are nevertheless considered non-renewable, because their stocks cannot be regenerated on human timescales.
Renewable resources, such as forests and fisheries, can be replenished or reproduced relatively quickly. The highest rate at which a resource can be used sustainably is the sustainable yield. Some resources, such as sunlight, air, and wind, are called perpetual resources because they are available continuously, though at a limited rate. Human consumption does not affect their quantity. Many renewable resources can be depleted by human use, but may also be replenished, thus maintaining a flow. Some of these, such as crops, take a short time for renewal; others, such as water, take a comparatively longer time, while others, such as forests, need even longer periods.
Depending upon the speed and quantity of consumption, overconsumption can lead to depletion or the complete and permanent destruction of a resource. Important examples are agricultural areas, fish and other animals, forests, healthy water and soil, and cultivated and natural landscapes. Such conditionally renewable resources are sometimes classified as a third kind of resource, or as a subtype of renewable resources. Conditionally renewable resources are presently subject to excess human consumption, and the only sustainable long-term use of such resources is within the so-called zero ecological footprint, wherein humans use less than the Earth's ecological capacity to regenerate.
Natural resources are also categorized based on distribution:
Ubiquitous resources are found everywhere (for example, air, light, and water).
Localized resources are found only in certain parts of the world (for example metal ores and geothermal power).
Actual vs. potential natural resources are distinguished as follows:
Actual resources are those resources whose location and quantity are known and we have the technology to exploit and use them.
Potential resources are those of which we have insufficient knowledge or do not have the technology to exploit them at present.
Based on ownership, resources can be classified as individual, community, national, and international.
Labour or human resources
In economics, labor or human resources refers to the human work in the production of goods and rendering of services. Human resources can be defined in terms of skills, energy, talent, abilities, or knowledge.
In a project management context, human resources are those employees responsible for undertaking the activities defined in the project plan.
Capital or infrastructure
In economics, capital goods or capital are "those durable produced goods that are in turn used as productive inputs for further production" of goods and services. A typical example is the machinery used in a factory. At the macroeconomic level, "the nation's capital stock includes buildings, equipment, software, and inventories during a given year." Capital goods are among the most important economic resources.
Tangible versus intangible
Whereas tangible resources such as equipment have an actual physical existence, intangible resources such as corporate images, brands, patents, and other intellectual property exist in abstraction.
Use and sustainable development
Typically, resources cannot be consumed in their original form; rather, through resource development they must be processed into more usable commodities. The demand for resources is increasing as economies develop. There are marked differences in resource distribution and associated economic inequality between regions or countries, with developed countries using more natural resources than developing countries. Sustainable development is a pattern of resource use that aims to meet human needs while preserving the environment. Sustainable development means that we should exploit our resources carefully to meet our present requirements without compromising the ability of future generations to meet their own needs. The practice of the three R's – reduce, reuse, and recycle – must be followed to save and extend the availability of resources.
Various problems are related to the usage of resources:
Environmental degradation
Over-consumption
Resource curse
Resource depletion
Tragedy of the commons
Various benefits can result from the wise usage of resources:
Economic growth
Ethical consumerism
Prosperity
Quality of life
Sustainability
Wealth
See also
Natural resource management
Resource-based view
Waste management
References
Further reading
Elizabeth Kolbert, "Needful Things: The raw materials for the world we've built come at a cost" (largely based on Ed Conway, Material World: The Six Raw Materials That Shape Modern Civilization, Knopf, 2023; Vince Beiser, The World in a Grain; and Chip Colwell, So Much Stuff: How Humans Discovered Tools, Invented Meaning, and Made More of Everything, Chicago), The New Yorker, 30 October 2023, pp. 20–23. Kolbert mainly discusses the importance to modern civilization, and the finite sources of, six raw materials: high-purity quartz (needed to produce silicon chips), sand, iron, copper, petroleum (which Conway lumps together with another fossil fuel, natural gas), and lithium. Kolbert summarizes archeologist Colwell's review of the evolution of technology, which has ended up giving the Global North a superabundance of "stuff," at an unsustainable cost to the world's environment and reserves of raw materials.
External links
Resource economics
Ecology | 0.789002 | 0.996928 | 0.786578 |
Genetics | Genetics is the study of genes, genetic variation, and heredity in organisms. It is an important branch in biology because heredity is vital to organisms' evolution. Gregor Mendel, a Moravian Augustinian friar working in the 19th century in Brno, was the first to study genetics scientifically. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring over time. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.
Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded to study the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance), and within the context of a population. Genetics has given rise to a number of subfields, including molecular genetics, epigenetics, and population genetics. Organisms studied within the broad field span the domains of life (archaea, bacteria, and eukarya).
Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intracellular or extracellular environment of a living cell or organism may increase or decrease gene transcription. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate (lacking sufficient rainfall). While the average height the two corn stalks could grow to is genetically determined, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.
Etymology
The word genetics stems from the ancient Greek genetikos, meaning "genitive"/"generative", which in turn derives from genesis, meaning "origin".
History
The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century.
Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetic" in a hereditarian context, and is considered the first geneticist. He described several rules of biological inheritance in his work The genetic laws of nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as that which Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries). Festetics argued that changes observed in the generations of farm animals, plants, and humans are the result of scientific laws. Festetics empirically deduced that organisms inherit their characteristics, not acquire them. He recognized recessive traits and inherent variation by postulating that traits of past generations could reappear later, and that organisms could produce progeny with different attributes. These observations represent an important prelude to Mendel's theory of particulate inheritance insofar as they feature a transition of heredity from its status as myth to that of a scientific discipline, by providing a fundamental theoretical basis for genetics in the twentieth century.
Other theories of inheritance preceded Mendel's work. A popular theory during the 19th century, and implied by Charles Darwin's 1859 On the Origin of Species, was blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by use in their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong—the experiences of individuals do not affect the genes they pass to their children. Other theories included Darwin's pangenesis (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.
Mendelian genetics
Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brno, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.
The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905. The adjective genetic, derived from the Greek word genesis—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860. Bateson both acted as a mentor and was aided significantly by the work of other scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906.
After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1900, Nettie Stevens began studying the mealworm. Over the next 11 years, she discovered that females only had the X chromosome and males had both X and Y chromosomes. She was able to conclude that sex is a chromosomal factor and is determined by the male. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.
Molecular genetics
Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation: dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation. The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single celled alga Acetabularia. The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.
James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA has a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what look like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.
Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.
With the newfound molecular understanding of inheritance came an explosion of research. A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.
Features of inheritance
Discrete inheritance and Mendel's laws
At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to offspring. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants, showing for example that flowers on a single plant were either purple or white—but never an intermediate between the two colors. The discrete versions of the same gene controlling the inherited appearance (phenotypes) are called alleles.
In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent. Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous. The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.
When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation. However, the probability of an offspring displaying a given trait depends on whether the parental alleles are dominant or recessive and whether the parents are homozygous or heterozygous. For example, Mendel found that crossing two heterozygous organisms gives 3:1 odds of offspring displaying the dominant trait. Geneticists study and calculate such probabilities using theoretical probabilities, empirical probabilities, the product rule, the sum rule, and other tools.
Notation and diagrams
Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.
In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
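A Punnett square is simple enough to reproduce programmatically. The short Python sketch below enumerates the allele combinations of a monohybrid cross; the genotype notation ('Aa' and so on) follows the single-letter convention described above, and the function name is merely illustrative:

```python
from itertools import product

def punnett(parent1, parent2):
    """Enumerate offspring genotypes of a monohybrid cross.

    Each parent is a two-allele string, e.g. 'Aa'; alleles are sorted so
    'Aa' and 'aA' count as the same genotype.
    """
    counts = {}
    for a, b in product(parent1, parent2):
        genotype = "".join(sorted(a + b))  # 'Aa', not 'aA'
        counts[genotype] = counts.get(genotype, 0) + 1
    return counts

# F1 x F1 cross of heterozygotes: expect 1 AA : 2 Aa : 1 aa,
# i.e. 3:1 dominant-to-recessive phenotypes.
print(punnett("Aa", "Aa"))   # {'AA': 1, 'Aa': 2, 'aa': 1}
```

Crossing two heterozygotes ('Aa' × 'Aa') recovers the familiar 1 AA : 2 Aa : 1 aa genotype ratio, and hence the 3:1 phenotype ratio for a dominant trait.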
When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree.
Multiple gene interactions
Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "law of independent assortment," means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. Different genes often interact to influence the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white—regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.
Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative—in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.
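Broad-sense heritability can be written as H² = V_G / (V_G + V_E), the fraction of total phenotypic variance attributable to genetic variance. The sketch below uses made-up numbers purely to illustrate the point above: the same genetic variation appears less heritable against a more variable environment. It assumes, for simplicity, that genetic and environmental effects are additive and independent:

```python
import statistics

def broad_sense_heritability(genetic_values, environmental_effects):
    """H^2 = V_G / (V_G + V_E), assuming genes and environment act additively
    and independently (a simplification of real quantitative genetics)."""
    v_g = statistics.pvariance(genetic_values)
    v_e = statistics.pvariance(environmental_effects)
    return v_g / (v_g + v_e)

# Illustrative numbers only: the same genetic variation looks less heritable
# against a more variable environment.
genes = [170, 175, 180, 185, 190]          # genetic height potentials (cm)
uniform_env = [0, 1, -1, 0, 0]             # low environmental variance
variable_env = [-8, 5, -3, 9, -6]          # high environmental variance
print(round(broad_sense_heritability(genes, uniform_env), 2))    # ~0.99
print(round(broad_sense_heritability(genes, variable_env), 2))   # ~0.54
```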
Molecular basis for inheritance
DNA and chromosomes
The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of deoxyribose (a sugar), a phosphate group, and a base (an amine group). There are four types of bases: adenine (A), cytosine (C), guanine (G), and thymine (T). The phosphates form phosphodiester bonds with the sugars to make long phosphate-sugar backbones. Bases pair specifically (A with T, C with G) between the two backbones, forming what look like rungs on a ladder. A base, a phosphate, and a sugar together make a nucleotide, and nucleotides connect to form long chains of DNA. Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain. These chains coil into a double-helix structure and wrap around proteins called histones, which provide structural support. DNA wrapped around histones is called a chromosome. Viruses sometimes use the similar molecule RNA instead of DNA as their genetic material.
DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.
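Complementary base pairing means either strand suffices to reconstruct the other, which a few lines of Python can make concrete (the sequence here is an arbitrary example):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def partner_strand(strand):
    """Reconstruct the complementary strand (read 5'->3') from one strand,
    as DNA replication effectively does."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

template = "ATGCCGTA"
print(partner_strand(template))   # TACGGCAT
```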
Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins. The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.
DNA is most often found in the nucleus of cells, but Ruth Sager helped in the discovery of nonchromosomal genes found outside of the nucleus. In plants, these are often found in the chloroplasts and in other organisms, in the mitochondria. These nonchromosomal genes can still be passed on by either partner in sexual reproduction and they control a variety of hereditary characteristics that replicate and remain active throughout generations.
While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.
Many species have so-called sex chromosomes that determine the sex of each organism. In humans and many other animals, the Y chromosome contains the gene that triggers the development of specifically male characteristics. In evolution, this chromosome has lost most of its content and most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. Mary Frances Lyon discovered X-chromosome inactivation, by which one of the two X chromosomes in females is silenced so that X-linked genes are not expressed at double the dose. Lyon's discovery led to the discovery of X-linked diseases.
Reproduction
When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.
Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.
Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated. Natural bacterial transformation occurs in many bacterial species, and can be regarded as a sexual process for transferring DNA from one cell to another cell (usually of the same species). Transformation requires the action of numerous bacterial gene products, and its primary adaptive function appears to be repair of DNA damages in the recipient cell.
Recombination and genetic linkage
The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do, via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells. Meiotic recombination, particularly in microbial eukaryotes, appears to serve the adaptive function of repair of DNA damages.
The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.
The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.
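In practice, map distances are estimated from the fraction of recombinant offspring observed in a cross; one percent recombination corresponds to one map unit (centimorgan). The sketch below shows the arithmetic with hypothetical offspring counts; the approximation holds only over short distances, where double crossovers are rare:

```python
def map_distance_cM(parental_offspring, recombinant_offspring):
    """Estimate genetic map distance in centimorgans (cM): the recombination
    frequency between two loci, expressed as a percentage. Valid only for
    short distances, where double crossovers are rare."""
    total = parental_offspring + recombinant_offspring
    return 100.0 * recombinant_offspring / total

# Hypothetical test-cross counts: 170 parental-type and 30 recombinant
# offspring imply the two genes lie about 15 cM apart.
print(map_distance_cM(170, 30))   # 15.0
```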
Gene expression
Genetic code
Genes express their functional effect through the production of proteins, which are molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each composed of a sequence of amino acids. The DNA sequence of a gene is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.
This messenger RNA molecule then serves to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.
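Translation can be illustrated as a straightforward lookup over successive codons. The sketch below hard-codes only a handful of entries from the standard genetic code, just enough for the example sequence, which is itself arbitrary:

```python
# A small subset of the standard genetic code, enough for this example.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GAA": "Glu", "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(coding_dna):
    """Read the coding strand three bases at a time and emit amino acids,
    stopping at the first stop codon."""
    protein = []
    for i in range(0, len(coding_dna) - 2, 3):
        amino_acid = CODON_TABLE[coding_dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTGAATGGTAA"))   # Met-Phe-Glu-Trp
```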
The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions. Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.
A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.
Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
Some DNA sequences are transcribed into RNA but are not translated into protein products—such RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (such as microRNA).
Nature and nurture
Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. The phrase "nature and nurture" refers to this complementary relationship. The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder—such as the legs, ears, tail, and face—so the cat has dark hair at its extremities.
Environment plays a major role in effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive intellectual disability and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.
A common method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births. Identical siblings are genetically the same since they come from the same zygote. Meanwhile, fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors. One famous example involved the study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.
Gene regulation
The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene. Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.
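The logic of this negative feedback loop can be caricatured in a few lines of code. The toy simulation below is not a biochemical model; the synthesis rate, consumption fraction, and repression threshold are arbitrary numbers chosen only to show the tryptophan level settling near the point where the repressor switches transcription off:

```python
def simulate_trp(steps=12, synthesis=10.0, usage_fraction=0.3, threshold=20.0):
    """Toy negative-feedback loop: synthesis genes are expressed only while
    tryptophan is below a threshold; the cell consumes a fraction each step.
    Numbers are arbitrary, chosen so the level settles near the threshold."""
    trp = 0.0
    for step in range(steps):
        repressed = trp >= threshold          # trp-bound repressor blocks transcription
        if not repressed:
            trp += synthesis                  # operon expressed: make tryptophan
        trp *= (1 - usage_fraction)           # cell consumes some tryptophan
        print(f"step {step:2d}  repressed={repressed!s:5}  trp={trp:5.1f}")

simulate_trp()
```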
Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.
Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.
Genetic change
Mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence. A particularly important source of DNA damages appears to be reactive oxygen species produced by cellular aerobic respiration, and these can lead to mutations.
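Those error rates translate directly into expected mutation counts per genome copy. The arithmetic below is deliberately naive (it applies the quoted per-base rates to an approximate human genome size and ignores post-replication repair), but it conveys the scale:

```python
# Expected number of uncorrected errors when copying a genome once,
# given the per-base error rates quoted above (illustrative arithmetic only).
genome_size = 3.2e9                 # approximate human genome, base pairs
for error_rate in (1e-7, 1e-8):    # 1 error per 10 million to 100 million bases
    print(f"rate {error_rate:g}: ~{genome_size * error_rate:,.0f} errors per copy")
```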
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions, deletions of entire regions—or the accidental exchange of whole parts of sequences between different chromosomes, chromosomal translocation.
Natural selection and evolution
Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness. Mutations that do have an effect are usually detrimental, but occasionally some can be beneficial. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations are harmful with the remainder being either neutral or weakly beneficial.
Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration.
Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.
By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).
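A minimal version of such a comparison is a distance estimate from an alignment. The sketch below implements the Jukes-Cantor correction, one of the simplest substitution models, on a pair of made-up aligned sequences; real analyses use much longer alignments and richer models:

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Estimate substitutions per site between two aligned sequences using the
    Jukes-Cantor model, which corrects the raw difference count for multiple
    hits at the same site. Assumes equal-length, gap-free alignments."""
    differences = sum(a != b for a, b in zip(seq1, seq2))
    p = differences / len(seq1)
    return -0.75 * math.log(1 - (4.0 / 3.0) * p)

a = "ATGCATGCATGCATGCATGC"
b = "ATGCATGCATCCATGAATGC"   # 2 differing sites out of 20
print(round(jukes_cantor_distance(a, b), 3))   # ~0.107 substitutions per site
```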
Research and technology
Model organisms
Although geneticists originally studied inheritance in a wide variety of organisms, the range of species studied has narrowed. One reason is that when significant research already exists for a given organism, new researchers are more likely to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), the zebrafish (Danio rerio), and the common house mouse (Mus musculus).
Medicine
Medical genetics seeks to understand how genetic variation relates to human health and disease. When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene. Once a candidate gene is found, further research is often done on the corresponding (or homologous) genes of model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.
Individuals differ in their inherited tendency to develop cancer, and cancer is a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (three to seven). A cancer cell can divide without growth factor and ignores inhibitory signals. Also, it is immortal and can grow indefinitely, even after it makes contact with neighboring cells. It may escape from the epithelium and ultimately from the primary tumor. Then, the escaped cell can cross the endothelium of a blood vessel and get transported by the bloodstream to colonize a new organ, forming deadly metastasis.

Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the Ras proteins, or in other oncogenes.
Research methods
DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA. DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
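Predicting the fragments from a digest is a simple string computation. The sketch below finds every occurrence of a recognition site in a linear sequence and reports fragment lengths; the sequence is invented, and EcoRI's site (GAATTC, cut after the first base) is used only as a familiar example:

```python
def digest_fragment_lengths(dna, recognition_site, cut_offset):
    """Return fragment lengths after cutting a linear DNA at every occurrence
    of a recognition site; cut_offset is the cut position within the site.
    (For EcoRI, the site is GAATTC and the cut falls after the first base.)"""
    cut_positions, start = [], 0
    while (hit := dna.find(recognition_site, start)) != -1:
        cut_positions.append(hit + cut_offset)
        start = hit + 1
    edges = [0] + cut_positions + [len(dna)]
    return [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]

dna = "TTGAATTCCGGAATTCAA"
print(digest_fragment_lengths(dna, "GAATTC", 1))   # [3, 8, 7]
```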
The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). "Cloning" can also refer to the various means of creating cloned ("clonal") organisms.
DNA can also be amplified using a procedure called the polymerase chain reaction (PCR). By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
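The exponential character of PCR is easy to quantify: each cycle at best doubles the template. A couple of lines of Python (with an idealized 100% per-cycle efficiency as the assumption) show why some 30 cycles suffice to amplify a handful of molecules into billions of copies:

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after PCR, assuming each cycle multiplies the template by
    (1 + efficiency); efficiency = 1.0 is perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

# Even 10 starting molecules exceed ten billion copies after 30 ideal cycles.
print(f"{pcr_copies(10, 30):,.0f}")   # 10,737,418,240
```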
DNA sequencing and genomics
DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments. Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.
As sequencing has become less expensive, researchers have sequenced the genomes of many organisms using a process called genome assembly, which uses computational tools to stitch together sequences from many different fragments. These technologies were used to sequence the human genome in the Human Genome Project completed in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.
Next-generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently. The large amount of sequence data available has created the subfield of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A common problem to these fields of research is how to manage and share data that deals with human subject and personally identifiable information.
Society and culture
On 19 March 2015, a group of leading biologists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger, to edit the human genome in a way that can be inherited. In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.
See also
Bacterial genome size
Cryoconservation of animal genetic resources
Eugenics
Embryology
Genetic disorder
Genetic diversity
Genetic engineering
Genetic enhancement
Glossary of genetics (M−Z)
Index of genetics articles
Medical genetics
Molecular tools for gene study
Neuroepigenetics
Outline of genetics
Timeline of the history of genetics
Plant genetic resources
References
Further reading
External links
Genetics | 0.787306 | 0.998999 | 0.786518 |
Structural biology | Structural biology, as defined by the Journal of Structural Biology, deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization.
Early structural biologists throughout the 19th and early 20th centuries could study structures only to the limit of the naked eye's visual acuity, aided by magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy. Through the discovery of X-rays and their application to protein crystals, structural biology was revolutionized, as scientists could now obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high-resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three dimensions at angstrom resolution.
With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in 3-dimensions, and how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. Computational techniques such as molecular dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function.
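At its core, a molecular dynamics simulation repeatedly integrates Newton's equations of motion for every atom. The sketch below shows the widely used velocity-Verlet integrator on the simplest possible "system", a single bond modeled as a one-dimensional harmonic spring; the spring constant, mass, and time step are arbitrary reduced units rather than a real force field:

```python
# A minimal molecular-dynamics step: velocity-Verlet integration of one bond
# modeled as a 1-D harmonic spring. Parameters are arbitrary reduced units.
def harmonic_force(x, k=1.0, x0=1.0):
    return -k * (x - x0)          # F = -k (x - x0)

def velocity_verlet(x, v, dt=0.01, mass=1.0, steps=5):
    f = harmonic_force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt    # update position
        f_new = harmonic_force(x)
        v += 0.5 * (f + f_new) / mass * dt          # update velocity
        f = f_new
        print(f"x={x:.4f}  v={v:+.4f}")
    return x, v

velocity_verlet(x=1.2, v=0.0)   # bond stretched 0.2 past equilibrium, released
```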
History
In 1912, Max von Laue directed X-rays at crystallized copper sulfate, generating a diffraction pattern. These experiments led to the development of X-ray crystallography and its usage in exploring biological structures. In 1951, Rosalind Franklin and Maurice Wilkins used X-ray diffraction patterns to capture the first image of deoxyribonucleic acid (DNA). Francis Crick and James Watson modeled the double-helical structure of DNA using this same technique in 1953 and received the Nobel Prize in Physiology or Medicine along with Wilkins in 1962.
Pepsin crystals were the first proteins to be crystallized for use in X-ray diffraction, by Theodore Svedberg, who received the 1926 Nobel Prize in Chemistry. The first tertiary protein structure, that of myoglobin, was published in 1958 by John Kendrew. During this time, modeling of protein structures was done using balsa wood or wire models. With the invention of modeling software such as CCP4 in the late 1970s, modeling is now done with computer assistance. Recent developments in the field have included the generation of X-ray free-electron lasers, allowing analysis of the dynamics and motion of biological molecules, and the use of structural biology in assisting synthetic biology.
In the late 1930s and early 1940s, the combination of work done by Isidor Rabi, Felix Bloch, and Edward Mills Purcell led to the development of nuclear magnetic resonance (NMR). Currently, solid-state NMR is widely used in the field of structural biology to determine the structure and dynamic nature of proteins (protein NMR).
In 1990, Richard Henderson produced the first three-dimensional, high resolution image of bacteriorhodopsin using cryogenic electron microscopy (cryo-EM). Since then, cryo-EM has emerged as an increasingly popular technique to determine three-dimensional, high resolution structures of biological images.
More recently, computational methods have been developed to model and study biological structures. For example, molecular dynamics (MD) is commonly used to analyze the dynamic movements of biological molecules. In 1975, the first simulation of a biological folding process using MD was published in Nature. Recently, protein structure prediction was significantly improved by a new machine learning method called AlphaFold. Some claim that computational approaches are starting to lead the field of structural biology research.
Techniques
Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include:
Mass spectrometry
Macromolecular crystallography
Neutron diffraction
Proteolysis
Nuclear magnetic resonance spectroscopy of proteins (NMR)
Electron paramagnetic resonance (EPR)
Cryogenic electron microscopy (cryoEM)
Electron crystallography and microcrystal electron diffraction
Multiangle light scattering
Small angle scattering
Ultrafast laser spectroscopy
Anisotropic terahertz microspectroscopy
Two-dimensional infrared spectroscopy
Dual-polarization interferometry and circular dichroism
Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding.
A third approach that structural biologists take to understanding structure is bioinformatics to look for patterns among the diverse sequences that give rise to particular shapes. Researchers often can deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction.
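A standard form of such hydrophobicity analysis is a sliding-window hydropathy plot in the style of Kyte and Doolittle, where a sustained window average above roughly +1.6 over about 19 residues suggests a membrane-spanning helix. The sketch below uses a short window and only a subset of residue hydropathy values, with an invented peptide, purely to show the mechanics:

```python
# Sliding-window hydropathy in the style of Kyte-Doolittle analysis. Only a
# few residues' hydropathy values are included here, enough for the example.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "A": 1.8, "G": -0.4,
      "S": -0.8, "K": -3.9, "R": -4.5, "D": -3.5, "E": -3.5}

def hydropathy_profile(sequence, window=5):
    half = window // 2
    scores = []
    for i in range(half, len(sequence) - half):
        segment = sequence[i - half:i + half + 1]
        scores.append(sum(KD[res] for res in segment) / window)
    return scores

seq = "KDEASGILVVLIAGSKDRE"   # hydrophobic core flanked by charged residues
print([round(s, 1) for s in hydropathy_profile(seq)])
```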
Applications
Structural biologists have made significant contributions towards understanding the molecular components and mechanisms underlying human diseases. For example, cryo-EM and ssNMR have been used to study the aggregation of amyloid fibrils, which are associated with Alzheimer's disease, Parkinson's disease, and type II diabetes. In addition to amyloid proteins, scientists have used cryo-EM to produce high resolution models of tau filaments in the brain of Alzheimer's patients which may help develop better treatments in the future. Structural biology tools can also be used to explain interactions between pathogens and hosts. For example, structural biology tools have enabled virologists to understand how the HIV envelope allows the virus to evade human immune responses.
Structural biology is also an important component of drug discovery. Scientists can identify targets using genomics, study those targets using structural biology, and develop drugs that are suited for those targets. Specifically, ligand-NMR, mass spectrometry, and X-ray crystallography are commonly used techniques in the drug discovery process. For example, researchers have used structural biology to better understand Met, a protein encoded by a protooncogene that is an important drug target in cancer. Similar research has been conducted for HIV targets to treat people with AIDS. Researchers are also developing new antimicrobials for mycobacterial infections using structure-driven drug discovery.
See also
Primary structure
Secondary structure
Tertiary structure
Quaternary structure
Structural domain
Structural motif
Protein subunit
Molecular model
Cooperativity
Chaperonin
Structural genomics
Stereochemistry
Resolution (electron density)
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
Protein structure prediction
References
External links
Nature Structural & Molecular Biology journal website
Journal of Structural Biology
Structural Biology - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Structural Biology in Europe
Learning Crystallography
Molecular biology
Protein structure
Biophysics
Ecotype
In evolutionary ecology, an ecotype, sometimes called ecospecies, describes a genetically distinct geographic variety, population, or race within a species, which is genotypically adapted to specific environmental conditions.
Typically, though ecotypes exhibit phenotypic differences (such as in morphology or physiology) stemming from environmental heterogeneity, they are capable of interbreeding with other geographically adjacent ecotypes without loss of fertility or vigor.
Definition
An ecotype is a variant in which the phenotypic differences are too few or too subtle to warrant being classified as a subspecies. These different variants can occur in the same geographic region where distinct habitats such as meadow, forest, swamp, and sand dunes provide ecological niches. Where similar ecological conditions occur in widely separated places, it is possible for a similar ecotype to occur in the separated locations. An ecotype is different from a subspecies, which may exist across a number of different habitats. In animals, ecotypes owe their differing characteristics to the effects of a very local environment. Therefore, ecotypes have no taxonomic rank.
Terminology
Ecotypes are closely related to morphs. In the context of evolutionary biology, genetic polymorphism is the occurrence in the equilibrium of two or more distinctly different phenotypes within a population of a species, in other words, the occurrence of more than one form or morph. The frequency of these discontinuous forms (even that of the rarest) is too high to be explained by mutation. In order to be classified as such, morphs must occupy the same habitat at the same time and belong to a panmictic population (whose members can all potentially interbreed). Polymorphism is actively and steadily maintained in populations of species by natural selection (most famously sexual dimorphism in humans) in contrast to transient polymorphisms where conditions in a habitat change in such a way that a "form" is being replaced completely by another.
In fact, Begon, Townsend, and Harper assert that
The notions "form" and "ecotype" may appear to correspond to a static phenomenon; however, this is not always the case. Evolution occurs continuously both in time and space, so that two ecotypes or forms may qualify as distinct species in only a few generations. Begon, Townsend, and Harper use an illuminating analogy on this:
Thus ecotypes and morphs can be thought of as precursory steps of potential speciation.
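The equilibrium frequencies invoked in the definition of polymorphism above are often illustrated with the Hardy-Weinberg model. The sketch below is a standard population-genetics illustration rather than anything from the texts cited here, using a made-up allele frequency; it computes the genotype frequencies expected in a panmictic population with two alleles.

```python
def hardy_weinberg(p):
    """Expected genotype frequencies for a two-allele locus in a
    panmictic (randomly mating) population at equilibrium."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

# Illustrative allele frequency: 30% of alleles in the population are 'A'.
for genotype, f in hardy_weinberg(0.3).items():
    print(f"{genotype}: {f:.2%}")   # AA: 9.00%, Aa: 42.00%, aa: 49.00%
```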
Range and distribution
Experiments indicate that sometimes ecotypes manifest only when separated by great spatial distances (of the order of 1,000 km). This is due to hybridization, whereby different but adjacent varieties of the same species (or generally of the same taxonomic rank) interbreed, thus overcoming local selection. However, other studies reveal that the opposite may happen, i.e., ecotypes emerging at very small scales (of the order of 10 m), within populations, and despite hybridization.
In ecotypes, it is common for continuous, gradual geographic variation to impose analogous phenotypic and genetic variation. This situation is called a cline. A well-known example of a cline is the skin color gradation in indigenous human populations worldwide, which is related to latitude and amounts of sunlight. But often the distribution of ecotypes is bimodal or multimodal. This means that ecotypes may display two or more distinct and discontinuous phenotypes even within the same population. Such a phenomenon may lead to speciation and can occur if conditions in a local environment change dramatically through space or time.
Examples
Tundra reindeer and woodland reindeer are two ecotypes of reindeer. The former migrate (travelling 5,000 km) annually between the two environments in large numbers, whereas the latter (which are much fewer) remain in the forest for the summer. In North America, the species Rangifer tarandus (locally known as caribou) was subdivided into five subspecies by Banfield in 1961. Caribou are classified by ecotype depending on several behavioural factors – predominant habitat use (northern, tundra, mountain, forest, boreal forest, forest-dwelling), spacing (dispersed or aggregated) and migration (sedentary or migratory). For example, the subspecies Rangifer tarandus caribou is further distinguished by a number of ecotypes, including boreal woodland caribou, mountain woodland caribou, and migratory woodland caribou (such as the migratory George River Caribou Herd in the Ungava region of Quebec).
Arabis fecunda, a herb endemic to some calcareous soils of Montana, United States, can be divided into two ecotypes. The "low elevation" group lives near the ground in an arid, warm environment and has thus developed a significantly greater tolerance against drought than the "high elevation" group. The two ecotypes are separated by a horizontal distance of about .
It is commonly accepted that the Tucuxi dolphin has two ecotypes – the riverine ecotype found in some South American rivers and the pelagic ecotype found in the South Atlantic Ocean. In 2022, the common bottlenose dolphin (Tursiops truncatus), which had been considered to have two ecotypes in the western North Atlantic, was separated into two species by Costa et al. based on morphometric and genetic data, with the near-shore ecotype becoming Tursiops erebennus Cope, 1865, described in the nineteenth century from a specimen collected in the Delaware River.
The warbler finch and the Cocos Island finch are viewed as separate ecotypes.
The aromatic plant Artemisia campestris, also known as the field sagewort, grows in a wide range of habitats from North America to the Atlantic coast and also in Eurasia. It has different forms according to the environment where it grows. One variety, which grows on shifting dunes at Falsterbo on the coast of Sweden, has broad leaves and white hairs and exhibits upright growth. Another variety, which grows on calcareous rocks on Öland, displays horizontally expanded branches with no upright growth. These two extreme types are considered different varieties. Other examples include Artemisia campestris var. borealis, which occupies the west side of the Cascades crest in the Olympic Mountains in Washington, while Artemisia campestris var. wormskioldii grows on the east side. The northern wormwood, var. borealis, has spike-like inflorescences with leaves concentrated at the plant base and divided into long narrow lobes. Wormskiold's northern wormwood, Artemisia campestris var. wormskioldii, is generally shorter and hairy, with large leaves surrounding the flowers.
The Scots pine (Pinus sylvestris) has 20 different ecotypes in an area from Scotland to Siberia, all capable of interbreeding.
Ecotype distinctions can be subtle and do not always require large distances; it has been observed that two populations of the same Helix snail species separated by only a few hundred kilometers prefer not to cross-mate, i.e., they reject one another as mates. This event probably occurs during the process of courtship, which may last for hours.
See also
Adaptation
Biological classification
Cline (biology)
Ecotope
Epigenetics
Evolution
Polymorphism (biology)
Ring species
Speciation
Species problem
Terroir
Explanatory notes
References
Landscape ecology
Botany
Zoology
Ecology
Biorobotics
Biorobotics is an interdisciplinary science that combines the fields of biomedical engineering, cybernetics, and robotics to develop new technologies that integrate biology with mechanical systems to develop more efficient communication, alter genetic information, and create machines that imitate biological systems.
Cybernetics
Cybernetics focuses on the communication and control systems of living organisms and machines, and can be applied to and combined with multiple fields of study such as biology, mathematics, computer science, and engineering.
This discipline falls under the branch of biorobotics because it combines the study of biological bodies and mechanical systems. Studying these two systems allows for advanced analysis of the functions and processes of each system, as well as of the interactions between them.
History
Cybernetic theory is a concept that has existed for centuries, dating back to the era of Plato, who applied the term to the "governance of people". The term cybernétique was used in the early nineteenth century by the physicist André-Marie Ampère. The term cybernetics was popularized in the late 1940s to refer to a discipline that touched on, but was separate from, established disciplines such as electrical engineering, mathematics, and biology.
Science
Cybernetics is often misunderstood because of the breadth of disciplines it covers. In the mid-20th century, it was established as an interdisciplinary field of study combining biology, network theory, and engineering. Today, it covers all scientific fields with system-related processes. The goal of cybernetics is to analyze the systems and processes of any system or systems in an attempt to make them more efficient and effective.
Applications
Cybernetics is used as an umbrella term so applications extend to all systems related scientific fields such as biology, mathematics, computer science, engineering, management, psychology, sociology, art, and more. Cybernetics is used amongst several fields to discover principles of systems, adaptation of organisms, information analysis and much more.
Genetic engineering
Genetic engineering is a field that uses advances in technology to modify biological organisms. Through different methods, scientists are able to alter the genetic material of microorganisms, plants, and animals to provide them with desirable traits, for example making plants grow bigger, better, and faster. Genetic engineering is included in biorobotics because it uses new technologies to alter biology and change an organism's DNA for the organism's and society's benefit.
History
Although humans have modified genetic material of animals and plants through artificial selection for millennia (such as the genetic mutations that developed teosinte into corn and wolves into dogs), genetic engineering refers to the deliberate alteration or insertion of specific genes to an organism's DNA. The first successful case of genetic engineering occurred in 1973 when Herbert Boyer and Stanley Cohen were able to transfer a gene with antibiotic resistance to a bacterium.
Science
There are three main techniques used in genetic engineering: the plasmid method, the vector method, and the biolistic method.
Plasmid method
This technique is used mainly for microorganisms such as bacteria. Through this method, DNA molecules called plasmids are extracted from bacteria and placed in a lab, where restriction enzymes break them down. As the enzymes cut the molecules, some develop a rough, staircase-like edge that is considered "sticky" and capable of reconnecting. These "sticky" molecules are inserted into another bacterium, where they connect to its DNA rings, carrying the altered genetic material.
Vector method
The vector method is considered a more precise technique than the plasmid method as it involves the transfer of a specific gene instead of a whole sequence. In the vector method, a specific gene from a DNA strand is isolated through restriction enzymes in a laboratory and is inserted into a vector. Once the vector accepts the genetic code, it is inserted into the host cell where the DNA will be transferred.
Biolistic method
The biolistic method is typically used to alter the genetic material of plants. This method embeds the desired DNA with a metallic particle, such as gold or tungsten, in a high-speed gun. The particle is then bombarded into the plant. Due to the high velocities and the vacuum generated during bombardment, the particle is able to penetrate the cell wall and insert the new DNA into the cell.
Applications
Genetic engineering has many uses in the fields of medicine, research, and agriculture. In the medical field, genetically modified bacteria are used to produce drugs such as insulin, human growth hormone, and vaccines. In research, scientists genetically modify organisms to observe physical and behavioral changes in order to understand the function of specific genes. In agriculture, genetic engineering is extremely important, as it is used by farmers to grow crops, such as Bt corn, that are resistant to herbicides and insects.
Bionics
Bionics is a medical engineering field and a branch of biorobotics consisting of electrical and mechanical systems that imitate biological systems, such as prosthetics and hearing aids. The term is a portmanteau combining biology and electronics.
History
The history of bionics goes as far back in time as ancient Egypt. A prosthetic toe made out of wood and leather was found on the foot of a mummy, estimated to date from around the fifteenth century B.C. Bionics can also be witnessed in ancient Greece and Rome, where prosthetic legs and arms were made for amputee soldiers. In the 16th century, a French military surgeon by the name of Ambroise Paré became a pioneer in the field of bionics, known for making various types of upper and lower prosthetics. One of his most famous prosthetics, Le Petit Lorrain, was a mechanical hand operated by catches and springs. Around the turn of the 19th century, Alessandro Volta further advanced bionics: his experiments set the foundation for the creation of hearing aids, as he found that electrical stimulation could restore hearing when an electrical implant was inserted into the saccular nerve of a patient's ear. In 1945, the National Academy of Sciences created the Artificial Limb Program, which focused on improving prosthetics since there were a large number of World War II amputee soldiers. Since this creation, prosthetic materials, computer design methods, and surgical procedures have improved, creating modern-day bionics.
Science
Prosthetics
The important components that make up modern-day prosthetics are the pylon, the socket, and the suspension system. The pylon is the internal frame of the prosthetic that is made up of metal rods or carbon-fiber composites. The socket is the part of the prosthetic that connects the prosthetic to the person's missing limb. The socket consists of a soft liner that makes the fit comfortable, but also snug enough to stay on the limb. The suspension system is important in keeping the prosthetic on the limb. The suspension system is usually a harness system made up of straps, belts or sleeves that are used to keep the limb attached.
The operation of a prosthetic could be designed in various ways. The prosthetic could be body-powered, externally-powered, or myoelectrically powered. Body-powered prosthetics consist of cables attached to a strap or harness, which is placed on the person's functional shoulder, allowing the person to manipulate and control the prosthetic as he or she deems fit. Externally-powered prosthetics consist of motors to power the prosthetic and buttons and switches to control the prosthetic. Myoelectrically powered prosthetics are new, advanced forms of prosthetics where electrodes are placed on the muscles above the limb. The electrodes will detect the muscle contractions and send electrical signals to the prosthetic to move the prosthetic. The downside to this type of prosthetic is that if the sensors are not placed correctly on the limb then the electrical impulses will fail to move the prosthetic. TrueLimb is a specific brand of prosthetics that uses myoelectrical sensors which enable a person to have control of their bionic limb.
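The myoelectric control loop described above can be caricatured in a few lines. The sketch below is a toy illustration with invented thresholds and signal values, not any manufacturer's firmware: a controller typically rectifies and smooths the raw EMG signal into an activation envelope, then compares it with a threshold to decide whether to drive the prosthetic.

```python
def emg_envelope(samples, alpha=0.5):
    """Rectify the raw EMG signal and smooth it with an exponential
    moving average to obtain an activation envelope."""
    env, out = 0.0, []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)   # rectify, then smooth
        out.append(env)
    return out

THRESHOLD = 0.5  # invented activation level, for illustration only

# Invented raw EMG trace: rest, a burst of muscle contraction, rest.
raw = [0.05, -0.10, 0.02, 1.2, -1.5, 1.8, -1.6, 1.4, 0.10, -0.05]
for t, env in enumerate(emg_envelope(raw)):
    command = "close hand" if env > THRESHOLD else "hold"
    print(f"t={t}: envelope={env:.2f} -> {command}")
```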
Hearing aids
Four major components make up the hearing aid: the microphone, the amplifier, the receiver, and the battery. The microphone takes in outside sound, turns that sound to electrical signals, and sends those signals to the amplifier. The amplifier increases the sound and sends that sound to the receiver. The receiver changes the electrical signal back into sound and sends the sound into the ear. Hair cells in the ear will sense the vibrations from the sound, convert the vibrations into nerve signals, and send it to the brain so the sounds can become coherent to the person. The battery simply powers the hearing aid.
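The amplifier stage in this chain can be reduced to a one-line idea, shown in the deliberately simplified sketch below (invented gain and sample values, no real audio input/output): multiply each sample by a gain and limit it to the output range before passing it on to the receiver.

```python
def amplify(samples, gain=8.0, limit=1.0):
    """Apply a fixed gain and clip to the output range: a crude stand-in
    for the amplifier stage of a hearing aid."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

mic_input = [0.01, -0.02, 0.05, -0.04, 0.2]   # invented microphone samples
print(amplify(mic_input))  # the louder copy that is sent on to the receiver
```

Real hearing aids apply frequency-dependent, compressive gain fitted to the wearer's hearing loss rather than a single fixed gain.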
Applications
Cochlear Implant
Cochlear implants are a type of hearing aid for those who are deaf. Cochlear implants send electrical signals straight to the auditory nerve, the nerve responsible for sound signals, instead of just sending the signals to the ear canal like normal hearing aids.
Bone-Anchored Hearing Aids
These hearing aids are also used for people with severe hearing loss. They attach to bone to create sound vibrations in the skull and send those vibrations to the cochlea.
Artificial sensing skin
This artificial sensing skin detects any pressure put on it and is meant for people who have lost any sense of feeling on parts of their bodies, such as diabetics with peripheral neuropathy.
Bionic eye
The bionic eye is a bioelectronic implant that restores vision for people with blindness.
Although it is not yet perfect, the bionic eye has helped five individuals classified as legally blind to make out letters again.
As the retina has millions of photoreceptors, and the human eye has extraordinary capabilities in lensing and dynamic range, it is very hard to replicate with technology. Neural integration is another major challenge. Despite these hurdles, intense research and prototyping are ongoing, with many major accomplishments in recent times.
Orthopedic bionics
Orthopedic bionics consist of advanced bionic limbs that use a person's neuromuscular system to control the bionic limb. Advances in the understanding of brain function have led to the development and implementation of brain-machine interfaces (BMIs). BMIs relay neural signals from motor regions of the brain to the muscles of a specific limb to initiate movement. BMIs contribute greatly to restoring independent movement for a person with a bionic limb or an exoskeleton.
Endoscopic robotics
These robots can remove polyps during a colonoscopy.
See also
Android (robot)
Bio-inspired robotics
Molecular machine#Biological
Biological devices
Biomechatronics
Biomimetics
Cultured neural networks
Cyborg
Cylon (reimagining)
Nanobot
Nanomedicine
Plantoid
Remote control animal
Replicant
Roborat
Technorganic
References
External links
The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
The BioRobotics Lab, Robotics Institute, Carnegie Mellon University
Bioroïdes - A timeline of the popularization of the idea (in French)
Harvard BioRobotics Laboratory, Harvard University
Locomotion in Mechanical and Biological Systems (LIMBS) Laboratory, Johns Hopkins University
BioRobotics Lab in Korea
Laboratory of Biomedical Robotics and Biomicrosystems, Italy
Tiny backpacks for cells (MIT News)
Biologically Inspired Robotics Lab, Case Western Reserve University
Bio-Robotics and Human Modeling Laboratory - Georgia Institute of Technology
Biorobotics Laboratory at École Polytechnique Fédérale de Lausanne (Switzerland)
BioRobotics Laboratory, Free University of Berlin (Germany)
Biorobotics research group, Institute of Movement Science, CNRS/Aix-Marseille University (France)
Center for Biorobotics, Tallinn University of Technology (Estonia)
Biopunk
Biotechnology
Cyberpunk
Cybernetics
Fictional technology
Postcyberpunk
Health care robotics
Science fiction themes
Robotics
Physical geography
Physical geography (also known as physiography) is one of the three main branches of geography. Physical geography is the branch of natural science which deals with the processes and patterns in the natural environment such as the atmosphere, hydrosphere, biosphere, and geosphere. This focus is in contrast with the branch of human geography, which focuses on the built environment, and technical geography, which focuses on using, studying, and creating tools to obtain, analyze, interpret, and understand spatial information. The three branches have significant overlap, however.
Sub-branches
Physical geography can be divided into several branches or related fields, as follows:
Geomorphology is concerned with understanding the surface of the Earth and the processes by which it is shaped, both at the present as well as in the past. Geomorphology as a field has several sub-fields that deal with the specific landforms of various environments, e.g. desert geomorphology and fluvial geomorphology; however, these sub-fields are united by the core processes which cause them, mainly tectonic or climatic processes. Geomorphology seeks to understand landform history and dynamics, and predict future changes through a combination of field observation, physical experiment, and numerical modeling (Geomorphometry). Early studies in geomorphology are the foundation for pedology, one of two main branches of soil science.
Hydrology is predominantly concerned with the amounts and quality of water moving and accumulating on the land surface and in the soils and rocks near the surface, and is typified by the hydrological cycle. Thus the field encompasses water in rivers, lakes, aquifers and, to an extent, glaciers, examining the processes and dynamics involved in these bodies of water. Hydrology has historically had an important connection with engineering and has thus developed a largely quantitative method in its research; however, it does have an earth science side that embraces the systems approach. Similar to most fields of physical geography, it has sub-fields that examine the specific bodies of water or their interaction with other spheres, e.g. limnology and ecohydrology.
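A standard quantitative relation in this field, not stated in the source but widely used, is the catchment water-balance equation, which links precipitation (P), streamflow (Q), evapotranspiration (ET), and the change in water storage (ΔS) over a period:

```latex
P = Q + ET + \Delta S
```

Rearranged for ΔS, it lets hydrologists infer changes in the water stored in soils and aquifers from measured fluxes.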
Glaciology is the study of glaciers and ice sheets, or more commonly the cryosphere or ice and phenomena that involve ice. Glaciology groups the latter (ice sheets) as continental glaciers and the former (glaciers) as alpine glaciers. Although research in the areas is similar to research undertaken into both the dynamics of ice sheets and glaciers, the former tends to be concerned with the interaction of ice sheets with the present climate and the latter with the impact of glaciers on the landscape. Glaciology also has a vast array of sub-fields examining the factors and processes involved in ice sheets and glaciers e.g. snow hydrology and glacial geology.
Biogeography is the science which deals with geographic patterns of species distribution and the processes that result in these patterns. Biogeography emerged as a field of study as a result of the work of Alfred Russel Wallace, although the field prior to the late twentieth century had largely been viewed as historic in its outlook and descriptive in its approach. The main stimulus for the field since its founding has been that of evolution, plate tectonics and the theory of island biogeography. The field can largely be divided into five sub-fields: island biogeography, paleobiogeography, phylogeography, zoogeography and phytogeography.
Climatology is the study of the climate, scientifically defined as weather conditions averaged over a long period of time. Climatology examines both the nature of micro (local) and macro (global) climates and the natural and anthropogenic influences on them. The field is also sub-divided largely into the climates of various regions and the study of specific phenomena or time periods e.g. tropical cyclone rainfall climatology and paleoclimatology.
Soil geography deals with the distribution of soils across the terrain. This discipline, between geography and soil science, is fundamental to both physical geography and pedology. Pedology is the study of soils in their natural environment. It deals with pedogenesis, soil morphology, soil classification. Soil geography studies the spatial distribution of soils as it relates to topography, climate (water, air, temperature), soil life (micro-organisms, plants, animals) and mineral materials within soils (biogeochemical cycles).
Palaeogeography is a cross-disciplinary study that examines the preserved material in the stratigraphic record to determine the distribution of the continents through geologic time. Almost all the evidence for the positions of the continents comes from geology in the form of fossils or paleomagnetism. The use of these data has resulted in evidence for continental drift, plate tectonics, and supercontinents. This, in turn, has supported palaeogeographic theories such as the Wilson cycle.
Coastal geography is the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology, and oceanography) and the human geography of the coast. It involves an understanding of coastal weathering processes, particularly wave action, sediment movement and weathering, and also the ways in which humans interact with the coast. Coastal geography, although predominantly geomorphological in its research, is not just concerned with coastal landforms, but also the causes and influences of sea level change.
Oceanography is the branch of physical geography that studies the Earth's oceans and seas. It covers a wide range of topics, including marine organisms and ecosystem dynamics (biological oceanography); ocean currents, waves, and geophysical fluid dynamics (physical oceanography); plate tectonics and the geology of the sea floor (geological oceanography); and fluxes of various chemical substances and physical properties within the ocean and across its boundaries (chemical oceanography). These diverse topics reflect multiple disciplines that oceanographers blend to further knowledge of the world ocean and understanding of processes within it.
Quaternary science is an interdisciplinary field of study focusing on the Quaternary period, which encompasses the last 2.6 million years. The field studies the last ice age and the recent interglacial, the Holocene, and uses proxy evidence to reconstruct the past environments of this period in order to infer the climatic and environmental changes that have occurred.
Landscape ecology is a sub-discipline of ecology and geography that addresses how spatial variation in the landscape affects ecological processes such as the distribution and flow of energy, materials, and individuals in the environment (which, in turn, may influence the distribution of landscape "elements" themselves, such as hedgerows). The field was largely founded by the German geographer Carl Troll. Landscape ecology typically deals with problems in an applied and holistic context. The main difference between biogeography and landscape ecology is that the latter is concerned with how flows of energy and material are changed and their impacts on the landscape, whereas the former is concerned with the spatial patterns of species and chemical cycles.
Geomatics is the field of gathering, storing, processing, and delivering geographic information, or spatially referenced information. Geomatics includes geodesy (scientific discipline that deals with the measurement and representation of the earth, its gravitational field, and other geodynamic phenomena, such as crustal motion, oceanic tides, and polar motion), cartography, geographical information science (GIS) and remote sensing (the short or large-scale acquisition of information of an object or phenomenon, by the use of either recording or real-time sensing devices that are not in physical or intimate contact with the object).
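Geodesy's concern with measuring the Earth can be illustrated with the standard haversine formula for great-circle distance. The sketch below assumes a spherical Earth with a mean radius of 6,371 km; the coordinates in the usage example are merely illustrative.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points on a spherical Earth."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Illustrative: Paris (48.86 N, 2.35 E) to Moscow (55.76 N, 37.62 E)
print(f"{haversine_km(48.86, 2.35, 55.76, 37.62):.0f} km")  # about 2,500 km
```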
Environmental geography is a branch of geography that analyzes the spatial aspects of interactions between humans and the natural world. The branch bridges the divide between human and physical geography and thus requires an understanding of the dynamics of geology, meteorology, hydrology, biogeography, and geomorphology, as well as the ways in which human societies conceptualize the environment. Although the branch was previously more visible in research than at present, with theories such as environmental determinism linking society with the environment, it has largely become the domain of the study of environmental management and anthropogenic influences.
Journals and literature
Geography and earth science journals communicate and document the results of research carried out in universities and various other research institutions. Most journals cover a specific field and publish research within that field; however, unlike human geographers, physical geographers tend to publish in interdisciplinary journals rather than predominantly geography journals. The research is normally expressed in the form of a scientific paper. Additionally, textbooks, books, and magazines communicate research to laypeople, although these tend to focus on environmental issues or cultural dilemmas. Examples of journals that publish articles from physical geographers are:
Historical evolution of the discipline
From the birth of geography as a science during the Greek classical period until the late nineteenth century with the birth of anthropogeography (human geography), geography was almost exclusively a natural science: the study of location and the descriptive gazetteer of all places of the known world. Several works among the best known during this long period can be cited as examples, from Strabo (Geography), Eratosthenes (Geographika) and Dionysius Periegetes (Periegesis Oiceumene) in the Ancient Age, to the Summa de Geografía of Martín Fernández de Enciso from the early sixteenth century, which described the New World for the first time, and Alexander von Humboldt's Kosmos in the nineteenth century, in which geography is regarded as a physical and natural science.
During the eighteenth and nineteenth centuries, a controversy exported from geology, between supporters of James Hutton (uniformitarianism thesis) and Georges Cuvier (catastrophism) strongly influenced the field of geography, because geography at this time was a natural science.
Two historical events during the nineteenth century had a great effect on the further development of physical geography. The first was the European colonial expansion in Asia, Africa, Australia and even America in search of raw materials required by industries during the Industrial Revolution. This fostered the creation of geography departments in the universities of the colonial powers and the birth and development of national geographical societies, thus giving rise to the process identified by Horacio Capel as the institutionalization of geography.
The exploration of Siberia is an example. In the mid-eighteenth century, many geographers were sent to perform geographical surveys in the area of Arctic Siberia. Among these was Mikhail Lomonosov, considered the patriarch of Russian geography. In the mid-1750s Lomonosov began working in the Department of Geography of the Academy of Sciences to conduct research in Siberia. He showed the organic origin of soil and developed a comprehensive law on the movement of ice, thereby founding a new branch of geography: glaciology. In 1755, Moscow University was founded on his initiative, and there he promoted the study of geography and the training of geographers. In 1758 he was appointed director of the Department of Geography at the Academy of Sciences, a post from which he would develop a working methodology for geographical surveys, guiding the most important long expeditions and geographical studies in Russia.
The contributions of the Russian school became more frequent through Lomonosov's disciples, and the nineteenth century produced great geographers such as Vasily Dokuchaev, who performed works of great importance, including the "principle of comprehensive analysis of the territory" and "Russian Chernozem". In the latter, he introduced the geographical concept of soil, as distinct from a simple geological stratum, and thus founded a new geographic area of study: pedology. Climatology also received a strong boost from the Russian school through Wladimir Köppen, whose main contribution, climate classification, is still valid today. Köppen also contributed to paleogeography through his work "The climates of the geological past", for which he is considered the father of paleoclimatology. Other Russian geographers who made great contributions to the discipline in this period were NM Sibirtsev, Pyotr Semyonov, K.D. Glinka, and Neustrayev, among others.
The second important influence was Darwin's theory of evolution in mid-century (which decisively influenced the work of Friedrich Ratzel, who had academic training as a zoologist and was a follower of Darwin's ideas), which provided an important impetus to the development of biogeography.
Another major event in the late nineteenth and early twentieth centuries took place in the United States. William Morris Davis not only made important contributions to the establishment of the discipline in his country but revolutionized the field by developing the cycle of erosion theory, which he proposed as a paradigm for geography in general, although it actually served as a paradigm for physical geography. His theory explained that mountains and other landforms are shaped by factors that are manifested cyclically. He explained that the cycle begins with the lifting of the relief by geological processes (faults, volcanism, tectonic upheaval, etc.). Factors such as rivers and runoff begin to create V-shaped valleys between the mountains (the stage called "youth"). During this first stage, the terrain is steeper and more irregular. Over time, the currents carve wider valleys ("maturity") and then begin to wind, leaving only towering hills ("senescence"). Finally, everything levels out to a flat plain at the lowest elevation possible (called the "base level"). Davis called this plain a "peneplain", meaning "almost a plain". Then river rejuvenation occurs: there is another mountain uplift and the cycle continues.
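Davis's narrative can be caricatured with a toy numerical model. The sketch below is purely illustrative, with invented parameters, and is not part of Davis's own work: relief decays by a fixed fraction per step (denudation) and is periodically restored by uplift (rejuvenation), echoing the youth-maturity-senescence sequence.

```python
def davis_cycle(relief=1000.0, decay=0.02, uplift_every=200, uplift=800.0, steps=600):
    """Toy model: relief (m) erodes by a fixed fraction per step,
    with periodic tectonic uplift restarting the cycle."""
    history = []
    for t in range(steps):
        if t > 0 and t % uplift_every == 0:
            relief += uplift             # rejuvenation: a new uplift event
        relief *= (1.0 - decay)          # denudation lowers the relief
        history.append(relief)
    return history

h = davis_cycle()
for t in range(0, 600, 100):
    print(f"step {t}: relief = {h[t]:.0f} m")  # decay toward the peneplain, then uplift
```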
Although Davis's theory is not entirely accurate, it was absolutely revolutionary and unique in its time and helped to modernize geography and create the subfield of geomorphology. Its implications prompted a myriad of research in various branches of physical geography. In the case of paleogeography, the theory provided a model for understanding the evolution of the landscape. For hydrology, glaciology, and climatology, it provided a boost by encouraging the study of how geographic factors shape the landscape and affect the cycle. The bulk of the work of William Morris Davis led to the development of a new branch of physical geography: geomorphology, whose content until then did not differ from the rest of geography. Shortly afterwards, this branch would undergo major development. Some of his disciples made significant contributions to various branches of physical geography, such as Curtis Marbut with his invaluable legacy for pedology, Mark Jefferson, and Isaiah Bowman, among others.
Notable physical geographers
Eratosthenes (276–194 BC) who invented the discipline of geography. He made the first known reliable estimation of the Earth's size. He is considered the father of mathematical geography and geodesy.
Ptolemy (c. 90 – c. 168), who compiled Greek and Roman knowledge to produce the book Geographia.
Abū Rayhān Bīrūnī (973–1048 AD), considered the father of geodesy.
Ibn Sina (Avicenna, 980–1037), who formulated the law of superposition and concept of uniformitarianism in Kitāb al-Šifāʾ (also called The Book of Healing).
Muhammad al-Idrisi (Dreses, 1100–1165), who drew the Tabula Rogeriana, the most accurate world map in pre-modern times.
Piri Reis (1465 – c. 1554), whose Piri Reis map is the oldest surviving world map to include the Americas and possibly Antarctica.
Gerardus Mercator (1512–1594), an innovative cartographer and originator of the Mercator projection.
Bernhardus Varenius (1622–1650), who wrote the important work General Geography (1650), the first overview of geography and a foundation of modern geography.
Mikhail Lomonosov (1711–1765), father of Russian geography and founded the study of glaciology.
Alexander von Humboldt (1769–1859), considered the father of modern geography. Published Cosmos and founded the study of biogeography.
Arnold Henry Guyot (1807–1884), who noted the structure of glaciers and advanced the understanding of glacial motion, especially in fast ice flow.
Louis Agassiz (1807–1873), the author of a glacial theory which disputed the notion of a steady-cooling Earth.
Alfred Russel Wallace (1823–1913), founder of modern biogeography and the Wallace line.
Vasily Dokuchaev (1840–1903), patriarch of Russian geography and founder of pedology.
Wladimir Peter Köppen (1846–1940), developer of most important climate classification and founder of Paleoclimatology.
William Morris Davis (1850–1934), father of American geography, founder of Geomorphology and developer of the geographical cycle theory.
John Francon Williams FRGS (1854–1911), who wrote the seminal work Geography of the Oceans, published in 1881.
Walther Penck (1888–1923), proponent of the cycle of erosion and the simultaneous occurrence of uplift and denudation.
Sir Ernest Shackleton (1874–1922), Antarctic explorer during the Heroic Age of Antarctic Exploration.
Robert E. Horton (1875–1945), founder of modern hydrology and concepts such as infiltration capacity and overland flow.
J Harlen Bretz (1882–1981), pioneer of research into the shaping of landscapes by catastrophic floods, most notably the Bretz (Missoula) floods.
Luis García Sáinz (1894–1965), pioneer of physical geography in Spain.
Willi Dansgaard (1922–2011), palaeoclimatologist and quaternary scientist, instrumental in the use of oxygen-isotope dating and co-identifier of Dansgaard-Oeschger events.
Hans Oeschger (1927–1998), palaeoclimatologist and pioneer in ice core research, co-identifier of Dansgaard-Oeschger events.
Richard Chorley (1927–2002), a key contributor to the quantitative revolution and the use of systems theory in geography.
Sir Nicholas Shackleton (1937–2006), who demonstrated that oscillations in climate over the past few million years could be correlated with variations in the orbital and positional relationship between the Earth and the Sun.
See also
Areography
Atmosphere of Earth
Concepts and Techniques in Modern Geography
Earth system science
Environmental science
Environmental studies
Geographic information science
Geographic information system
Geophysics
Geostatistics
Global Positioning System
Planetary science
Physiographic regions of the world
Selenography
Technical geography
References
Further reading
Pidwirny, Michael. (2014). Glossary of Terms for Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Pidwirny, Michael. (2014). Understanding Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Reynolds, Stephen J. et al. (2015). Exploring Physical Geography. [A Visual Textbook, Featuring more than 2500 Photographs & Illustrations]. McGraw-Hill Education, New York.
External links
Physiography by T.H. Huxley, 1878, full text, physical geography of the Thames River Basin
Fundamentals of Physical Geography, 2nd Edition, by M. Pidwirny, 2006, full text
Physical Geography for Students and Teachers, UK National Grid For Learning
Earth sciences
Biological system
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on the nature of the system. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Exocrine system: various functions including lubrication and protection by exocrine glands such as sweat glands, mucous glands, lacrimal glands and mammary glands
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from foreign bodies.
Nervous system: collecting, transferring and processing information with brain, spinal cord, peripheral nervous system and sense organs.
Sensory systems: visual system, auditory system, olfactory system, gustatory system, somatosensory system, vestibular system.
Muscular system: allows for manipulation of the environment, provides locomotion, maintains posture, and produces heat. Includes skeletal muscles, smooth muscles and cardiac muscle.
Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles and prostate.
History
The notion of system (or apparatus) relies upon the concept of vital or organic function: a system is a set of organs with a definite function. This idea was already present in Antiquity (Galen, Aristotle), but the application of the term "system" is more recent. For example, the nervous system was named by Monro (1783), but Rufus of Ephesus (c. 90–120) was the first to clearly view the brain, spinal cord, and craniospinal nerves as an anatomical unit, although he wrote little about its function and gave no name to this unit.
The enumeration of the principal functions – and consequently of the systems – has remained almost the same since Antiquity, but their classification has varied greatly, as a comparison of Aristotle, Bichat, and Cuvier shows.
The notion of physiological division of labor, introduced in the 1820s by the French physiologist Henri Milne-Edwards, made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (called by him appareils).
Cellular organelle systems
The exact components of a cell are determined by whether the cell is a eukaryote or prokaryote.
Nucleus (eukaryotic only): storage of genetic material; control center of the cell.
Cytosol: component of the cytoplasm consisting of jelly-like fluid in which organelles are suspended
Cell membrane (plasma membrane): selectively permeable barrier that separates the cell's interior from its surroundings and regulates the movement of substances in and out of the cell
Endoplasmic reticulum: outer part of the nuclear envelope forming a continuous channel used for transportation; consists of the rough endoplasmic reticulum and the smooth endoplasmic reticulum
Rough endoplasmic reticulum (RER): considered "rough" due to the ribosomes attached to the channeling; made up of cisternae that allow for protein production
Smooth endoplasmic reticulum (SER): storage and synthesis of lipids and steroid hormones as well as detoxification
Ribosome: site of biological protein synthesis essential for internal activity and cannot be reproduced in other organs
Mitochondrion (mitochondria): powerhouse of the cell; site of cellular respiration producing ATP (adenosine triphosphate)
Lysosome: center of breakdown for unwanted/unneeded material within the cell
Peroxisome: breaks down toxic materials, such as H2O2 (hydrogen peroxide), using the digestive enzymes it contains
Golgi apparatus (eukaryotic only): folded network involved in modification, transport, and secretion
Chloroplast: site of photosynthesis; storage of chlorophyll
See also
Biological network
Artificial life
Biological systems engineering
Evolutionary systems
Organ system
Systems biology
Systems ecology
Systems theory
External links
Systems Biology: An Overview by Mario Jardon: A review from the Science Creative Quarterly, 2005.
Synthesis and Analysis of a Biological System, by Hiroyuki Kurata, 1999.
It from bit and fit from bit. On the origin and impact of information in the average evolution. Includes how life forms and biological systems originate and from there evolve to become more and more complex, including evolution of genes and memes, into the complex memetics from organisations and multinational corporations and a "global brain", (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
Schmidt-Rhaesa, A. 2007. The Evolution of Organ Systems. Oxford University Press, Oxford, .
References
Biological systems
Biomedical sciences
Biomedical sciences are a set of sciences applying portions of natural science or formal science, or both, to develop knowledge, interventions, or technology that are of use in healthcare or public health. Such disciplines as medical microbiology, clinical virology, clinical epidemiology, genetic epidemiology, and biomedical engineering are medical sciences. In explaining physiological mechanisms operating in pathological processes, however, pathophysiology can be regarded as basic science.
Biomedical Sciences, as defined by the UK Quality Assurance Agency for Higher Education Benchmark Statement in 2015, includes those science disciplines whose primary focus is the biology of human health and disease and ranges from the generic study of biomedical sciences and human biology to more specialised subject areas such as pharmacology, human physiology and human nutrition. It is underpinned by relevant basic sciences including anatomy and physiology, cell biology, biochemistry, microbiology, genetics and molecular biology, pharmacology, immunology, mathematics and statistics, and bioinformatics. As such the biomedical sciences have a much wider range of academic and research activities and economic significance than that defined by hospital laboratory sciences. Biomedical Sciences are the major focus of bioscience research and funding in the 21st century.
Roles within biomedical science
A sub-set of biomedical sciences is the science of clinical laboratory diagnosis. This is commonly referred to in the UK as 'biomedical science' or 'healthcare science'. There are at least 45 different specialisms within healthcare science, which are traditionally grouped into three main divisions:
specialisms involving life sciences
specialisms involving physiological science
specialisms involving medical physics or bioengineering
Life sciences specialties
Molecular toxicology
Molecular pathology
Blood transfusion science
Cervical cytology
Clinical biochemistry
Clinical embryology
Clinical immunology
Clinical pharmacology and therapeutics
Electron microscopy
External quality assurance
Haematology
Haemostasis and thrombosis
Histocompatibility and immunogenetics
Histopathology and cytopathology
Molecular genetics and cytogenetics
Molecular biology and cell biology
Microbiology including mycology
Bacteriology
Tropical diseases
Phlebotomy
Tissue banking/transplant
Virology
Physiological science specialisms
Physics and bioengineering specialisms
Biomedical science in the United Kingdom
The healthcare science workforce is an important part of the UK's National Health Service. While people working in healthcare science are only 5% of the staff of the NHS, 80% of all diagnoses can be attributed to their work.
The volume of specialist healthcare science work is a significant part of the work of the NHS. Every year, NHS healthcare scientists carry out:
nearly 1 billion pathology laboratory tests
more than 12 million physiological tests
support for 1.5 million fractions of radiotherapy
The four governments of the UK have recognised the importance of healthcare science to the NHS, introducing the Modernising Scientific Careers initiative to make certain that the education and training for healthcare scientists ensures there is the flexibility to meet patient needs while keeping up to date with scientific developments.
Graduates of an accredited biomedical science degree programme can also apply for the NHS' Scientist training programme, which gives successful applicants an opportunity to work in a clinical setting whilst also studying towards an MSc or Doctoral qualification.
Biomedical Science in the 20th century
At this point in history, the field of medicine was the most prevalent subfield of biomedical science, as several breakthroughs in how to treat diseases and support the immune system were made, along with the birth of body augmentation.
1910s
In 1912, the Institute of Biomedical Science was founded in the United Kingdom. The institute still exists today and, more than a century later, regularly publishes work on major breakthroughs in disease treatment and other advances in the field. The IBMS today represents approximately 20,000 members employed mainly in National Health Service and private laboratories.
1920s
In 1928, the British scientist Alexander Fleming discovered the first antibiotic, penicillin. This was a huge breakthrough in biomedical science because it allowed for the treatment of bacterial infections.
In 1926, the first artificial pacemaker was made by the Australian physician Dr. Mark C. Lidwell. This portable machine was plugged into a lighting point. One pole was applied to a skin pad soaked with strong salt solution, while the other consisted of a needle, insulated except at its point, which was plunged into the appropriate cardiac chamber, and the machine was then started. A switch was incorporated to change the polarity. The pacemaker rate ranged from about 80 to 120 pulses per minute, and the voltage was also variable, from 1.5 to 120 volts.
1930s
The 1930s was a huge era for biomedical research, as this was the era when antibiotics became more widespread and vaccines started to be developed. In 1935, the idea of a polio vaccine was introduced by Dr. Maurice Brodie. Brodie prepared a killed-virus poliomyelitis vaccine, which he then tested on chimpanzees, himself, and several children. Brodie's vaccine trials went poorly, since the poliovirus became active in many of the human test subjects. Many subjects suffered severe side effects, including paralysis and death.
1940s
During and after World War II, the field of biomedical science saw a new age of technology and treatment methods. For instance, in 1941 the first hormonal treatment for prostate cancer was implemented by the urologist and cancer researcher Charles B. Huggins. Huggins discovered that removing the testicles of a man with prostate cancer deprived the cancer of the hormones it depended on to grow, putting the subject into remission. This advancement led to the development of hormone-blocking drugs, which are less invasive and still used today. At the tail end of this decade, in 1949, the first bone marrow transplant was performed on a mouse by Dr. Leon O. Jacobson, who discovered that he could transplant bone marrow and spleen tissues into a mouse that had no bone marrow and a destroyed spleen. The procedure is still used in modern medicine today and is responsible for saving countless lives.
1950s
The 1950s brought innovation in technology across all fields, and most importantly many breakthroughs which led to modern medicine. On 6 March 1953, Dr. Jonas Salk announced the completion of the first successful killed-virus polio vaccine. The vaccine was tested on about 1.6 million Canadian, American, and Finnish children in 1954, and was announced as safe on 12 April 1955.
See also
Biomedical research institution Austral University Hospital
References
External links
Extraordinary You: Case studies of Healthcare scientists in the UK's National Health Service
National Institute of Environmental Health Sciences
The US National Library of Medicine
National Health Service
Health sciences
Health care occupations
Science occupations
Autotroph
An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds, which can be used by other organisms. Autotrophs produce complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light or inorganic chemical reactions. Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide.
The primary producers can convert the energy in light (phototrophs and photoautotrophs) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which are usually accumulated in the form of biomass and are used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms are called chemoautotrophs and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level and are the reason why Earth sustains life to this day.
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP+ to NADPH to form organic compounds.
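One representative chemolithotrophic reaction, a standard textbook example rather than one enumerated in the source, is the oxidation of hydrogen sulfide by sulfur bacteria, which releases the energy used to drive biosynthesis:

```latex
2\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow 2\,\mathrm{S} + 2\,\mathrm{H_2O} + \text{energy}
```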
History
The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. It stems from the ancient Greek word trophḗ (τροφή), meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated around Earth's Great Oxidation Event with an increase in the rate of oxygenic photosynthesis by cyanobacteria. Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis. The earliest photosynthetic bacteria used hydrogen sulphide. Due to the scarcity of hydrogen sulphide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria.
Variants
Some organisms rely on organic compounds as a source of carbon, but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs. An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph, while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph.
Evidence suggests that some fungi may also obtain energy from ionizing radiation: Such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant.
Examples
There are many different types of autotrophs in Earth's ecosystems. Lichens in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or, additionally, nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus. Plant-like primary producers such as trees and algae use sunlight as their energy source and release oxygen into the air for other organisms. Aquatic primary producers include photosynthetic bacteria and phytoplankton; among the many examples, two prominent types are coral and kelp, one of the many kinds of brown algae.
Photosynthesis
Gross primary production occurs by photosynthesis, the main route by which primary producers capture energy and make it available to the rest of the ecosystem. Plants, coral, bacteria, and algae do this. During photosynthesis, primary producers capture energy from the sun and convert it into chemical energy stored in sugars, releasing oxygen as a by-product. In addition to light, primary producers require nutrients, such as nitrogen, to build their biomass.
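For reference, the overall reaction of oxygenic photosynthesis is conventionally summarized as:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

Here six molecules of carbon dioxide are reduced to one molecule of glucose, with water serving as the electron donor and oxygen released as a by-product.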
Ecology
Without primary producers, organisms that are capable of capturing energy from non-living sources, the biological systems of Earth would be unable to sustain themselves. Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. It is thought that the first organisms on Earth were primary producers located on the ocean floor.
Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production. Other organisms, called heterotrophs, take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals, almost all fungi, as well as most bacteria and protozoa – depend on autotrophs, or primary producers, for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed.
Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun. Plants can only use a fraction (approximately 1%) of this energy for photosynthesis. The process of photosynthesis splits a water molecule (H2O), releasing oxygen (O2) into the atmosphere and yielding the hydrogen atoms that reduce carbon dioxide (CO2), fueling the metabolic process of primary production. Plants convert and store the energy of the photon into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates, including other sugars, starch, and cellulose; glucose is also used to make fats and proteins. When autotrophs are eaten by heterotrophs, i.e., consumers such as animals, the carbohydrates, fats, and proteins contained in them become energy sources for the heterotrophs. Proteins can be made using nitrates, sulfates, and phosphates in the soil.
Primary production in tropical streams and rivers
Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is reflected in net primary production, a fundamental ecological process measuring the amount of carbon synthesized within an ecosystem; this carbon ultimately becomes available to consumers. Measurements of net primary production show that rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems.
Origin of autotrophs
Researchers believe that the first cellular lifeforms were not heterotrophs, which would have had to rely on autotrophs or external organic substrates: organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep-sea alkaline hydrothermal vents. Catalytic Fe(Ni)S minerals in these environments have been shown to catalyze the formation of biomolecules like RNA. This view is supported by phylogenetic evidence, as the physiology and habitat of the last universal common ancestor (LUCA) was inferred to have also been a thermophilic anaerobe with a Wood-Ljungdahl pathway, whose biochemistry was replete with FeS clusters and radical reaction mechanisms, and which was dependent upon Fe, H2, and CO2. The high concentration of K+ present within the cytosol of most life forms suggests that early cellular life had Na+/H+ antiporters or possibly symporters. Autotrophs possibly evolved into heterotrophs when they were at low H2 partial pressures, where the first forms of heterotrophy were likely amino acid and clostridial-type purine fermentations; photosynthesis emerged in the presence of long-wavelength geothermal light emitted by hydrothermal vents. The first photochemically active pigments are inferred to be Zn-tetrapyrroles.
See also
Electrolithoautotroph
Electrotroph
Heterotrophic nutrition
Organotroph
Primary nutritional groups
References
External links
Trophic ecology
Microbial growth and nutrition
Biology terminology
Plant nutrition
Plant morphology
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology have begun to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies, transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle, which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive structures. The vegetative structures of vascular plants includes the study of the shoot system, composed of stems and leaves, as well as the root system. The reproductive structures are more varied, and are usually specific to a particular group of plants, such as flowers and seeds, fern sori, and moss capsules. The detailed study of reproductive structures in plants led to the discovery of the alternation of generations found in all plants and most algae. This area of plant morphology overlaps with the study of biodiversity and plant systematics.
Thirdly, plant morphology studies plant structure at a range of scales. At the smallest scales are ultrastructure, the general structural features of cells visible only with the aid of an electron microscope, and cytology, the study of cells using optical microscopy. At this scale, plant morphology overlaps with plant anatomy as a field of study. At the largest scale is the study of plant growth habit, the overall architecture of a plant. The pattern of branching in a tree will vary from species to species, as will the appearance of a plant as a tree, herb, or grass.
Fourthly, plant morphology examines the pattern of development, the process by which structures originate and mature as a plant grows. While animals produce all the body parts they will ever have from early in their life, plants constantly produce new tissues and structures throughout their life. A living plant always has embryonic tissues. The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. A morphologist studies this process, the causes, and its result. This area of plant morphology overlaps with plant physiology and ecology.
A comparative science
A plant morphologist makes comparisons between structures in many different plants of the same or different species. Making such comparisons between similar structures in different plants tackles the question of why the structures are similar. It is quite likely that similar underlying causes of genetics, physiology, or response to the environment have led to this similarity in appearance. The result of scientific investigation into these causes can lead to one of two insights into the underlying biology:
Homology - the structure is similar between the two species because of shared ancestry and common genetics.
Convergence - the structure is similar between the two species because of independent adaptation to common environmental pressures.
Understanding which characteristics and structures belong to each type is an important part of understanding plant evolution. The evolutionary biologist relies on the plant morphologist to interpret structures, and in turn provides phylogenies of plant relationships that may lead to new morphological insights.
Homology
When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well.
Convergence
When structures in different species are believed to exist and develop as a result of common adaptive responses to environmental pressure, those structures are termed convergent. For example, the fronds of Bryopsis plumosa and stems of Asparagus setaceus both have the same feathery branching appearance, even though one is an alga and one is a flowering plant. The similarity in overall structure occurs independently as a result of convergence. The growth form of many cacti and species of Euphorbia is very similar, even though they belong to widely distant families. The similarity results from common solutions to the problem of surviving in a hot, dry environment.
Vegetative and reproductive characteristics
Plant morphology treats both the vegetative structures of plants, as well as the reproductive structures.
The vegetative (somatic) structures of vascular plants include two major organ systems: (1) a shoot system, composed of stems and leaves, and (2) a root system. These two systems are common to nearly all vascular plants, and provide a unifying theme for the study of plant morphology.
By contrast, the reproductive structures are varied, and are usually specific to a particular group of plants. Structures such as flowers and fruits are only found in the angiosperms; sori are only found in ferns; and seed cones are only found in conifers and other gymnosperms. Reproductive characters are therefore regarded as more useful for the classification of plants than vegetative characters.
Use in identification
Plant biologists use morphological characters of plants which can be compared, measured, counted and described to assess the differences or similarities in plant taxa and use these characters for plant identification, classification and descriptions.
When characters are used in descriptions or for identification they are called diagnostic or key characters, which can be either qualitative or quantitative.
Quantitative characters are morphological features that can be counted or measured; for example, a plant species may have flower petals 10–12 mm wide.
Qualitative characters are morphological features such as leaf shape, flower color or pubescence.
Both kinds of characters can be very useful for the identification of plants.
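To illustrate how such key characters drive identification, here is a minimal sketch of a dichotomous key encoded in Python; the taxa and character states are invented for the example and do not come from any real flora.

```python
# A minimal dichotomous key: each node asks about one diagnostic
# character and branches on its state until a taxon name is reached.
key = {
    "question": "Are the leaves needle-like?",  # qualitative character
    "yes": "Pinus sp. (pine)",
    "no": {
        "question": "Are the flower petals wider than 10 mm?",  # quantitative character
        "yes": "Taxon A (hypothetical)",
        "no": "Taxon B (hypothetical)",
    },
}

def identify(node, answers):
    """Walk the key, consuming one 'yes'/'no' answer per question."""
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):  # reached a leaf: an identification
            return node
    return None  # ran out of answers before reaching a taxon

print(identify(key, ["no", "yes"]))  # -> 'Taxon A (hypothetical)'
```

Real keys chain many more couplets, but the structure is the same: each step discriminates on a single diagnostic character.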
Alternation of generations
The detailed study of reproductive structures in plants led to the discovery of the alternation of generations, found in all plants and most algae, by the German botanist Wilhelm Hofmeister. This discovery is one of the most important made in all of plant morphology, since it provides a common basis for understanding the life cycle of all plants.
Pigmentation in plants
The primary function of pigments in plants is photosynthesis, which uses the green pigment chlorophyll along with several red and yellow accessory pigments, the carotenoids, that help to capture as much light energy as possible. Pigments are also an important factor in attracting insects to flowers to encourage pollination.
Plant pigments include a variety of different kinds of molecule, including porphyrins, carotenoids, anthocyanins and betalains. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment will appear to the eye.
Morphology in development
Plant development is the process by which structures originate and mature as a plant grows. It is a subject studied in plant anatomy and plant physiology as well as plant morphology.
The process of development in plants is fundamentally different from that seen in vertebrate animals. When an animal embryo begins to develop, it will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. By contrast, plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues.
The properties of organisation seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." In other words, knowing everything about the molecules in a plant is not enough to predict the characteristics of its cells; and knowing all the properties of the cells will not predict all the properties of the plant's structure.
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organise so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, which have not yet undergone cellular differentiation to form a specialised tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in the widening of a root or shoot from divisions of cells in a cambium.
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Morphological variation
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Evolution of plant morphology
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and their evolution. During plant landing, many novel transcription factor families emerged and are preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to more complex morphogenesis of land plants.
Positional effects
Although plants produce numerous copies of the same organ during their lives, not all copies of a particular organ will be identical. There is variation among the parts of a mature plant resulting from the relative position where the organ is produced. For example, along a new branch the leaves may vary in a consistent pattern along the branch. The form of leaves produced near the base of the branch will differ from leaves produced at the tip of the plant, and this difference is consistent from branch to branch on a given plant and in a given species. This difference persists after the leaves at both ends of the branch have matured, and is not the result of some leaves being younger than others.
Environmental effects
The way in which new structures mature as they are produced may be affected by the point in the plant's life when they begin to develop, as well as by the environment to which the structures are exposed. This can be seen in aquatic plants.
Temperature
Temperature has a multiplicity of effects on plants depending on a variety of factors, including the size and condition of the plant and the temperature and duration of exposure. The smaller and more succulent the plant, the greater the susceptibility to damage or death from temperatures that are too high or too low. Temperature affects the rate of biochemical and physiological processes, rates generally (within limits) increasing with temperature. However, the Van’t Hoff relationship for monomolecular reactions (which states that the velocity of a reaction is doubled or trebled by a temperature increase of 10 °C) does not strictly hold for biological processes, especially at low and high temperatures.
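The temperature dependence referred to here is commonly expressed through the temperature coefficient $Q_{10}$, which gives the factor by which a rate changes over a 10 °C interval:

$$Q_{10} = \left(\frac{R_2}{R_1}\right)^{10/(T_2 - T_1)}$$

where $R_1$ and $R_2$ are the rates of the process at temperatures $T_1$ and $T_2$. The Van't Hoff rule quoted above corresponds to $Q_{10}$ values of roughly 2 to 3; as the text notes, biological processes often deviate from this at temperature extremes.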
When water freezes in plants, the consequences for the plant depend very much on whether the freezing occurs intracellularly (within cells) or outside cells in intercellular (extracellular) spaces. Intracellular freezing usually kills the cell regardless of the hardiness of the plant and its tissues. Intracellular freezing seldom occurs in nature, but moderate rates of decrease in temperature, e.g., 1 °C to 6 °C/hour, cause intercellular ice to form, and this "extraorgan ice" may or may not be lethal, depending on the hardiness of the tissue.
At freezing temperatures, water in the intercellular spaces of plant tissues freezes first, though the water may remain unfrozen until temperatures fall below −7 °C. After the initial formation of ice intercellularly, the cells shrink as water is lost to the segregated ice. The cells undergo freeze-drying, the dehydration being the basic cause of freezing injury.
The rate of cooling has been shown to influence the frost resistance of tissues, but the actual rate of freezing will depend not only on the cooling rate, but also on the degree of supercooling and the properties of the tissue. Sakai (1979a) demonstrated ice segregation in shoot primordia of Alaskan white and black spruces when cooled slowly to −30 °C to −40 °C. These freeze-dehydrated buds survived immersion in liquid nitrogen when slowly rewarmed. Floral primordia responded similarly. Extraorgan freezing in the primordia accounts for the ability of the hardiest of the boreal conifers to survive winters in regions where air temperatures often fall to −50 °C or lower. The hardiness of the winter buds of such conifers is enhanced by the smallness of the buds, by the evolution of faster translocation of water, and by an ability to tolerate intensive freeze dehydration. In boreal species of Picea and Pinus, the frost resistance of 1-year-old seedlings is on a par with that of mature plants, given similar states of dormancy.
Juvenility
The organs and tissues produced by a young plant, such as a seedling, are often different from those that are produced by the same plant when it is older. This phenomenon is known as juvenility or heteroblasty. For example, young trees will produce longer, leaner branches that grow upwards more than the branches they will produce as a fully grown tree. In addition, leaves produced during early growth tend to be larger, thinner, and more irregular than leaves on the adult plant. Specimens of juvenile plants may look so completely different from adult plants of the same species that egg-laying insects do not recognise the plant as food for their young.
Differences in rootability and flowering can be seen within the same mature tree. Juvenile cuttings taken from the base of a tree will form roots much more readily than cuttings originating from the mid to upper crown. Flowering close to the base of a tree is absent or less profuse than flowering in the higher branches, especially when a young tree first reaches flowering age.
The transition from early to late growth forms is referred to as 'vegetative phase change', but there is some disagreement about terminology.
Modern innovations
Rolf Sattler has revised fundamental concepts of comparative morphology such as the concept of homology. He emphasised that homology should also include partial homology and quantitative homology. This leads to a continuum morphology that demonstrates a continuum between the morphological categories of root, shoot, stem (caulome), leaf (phyllome), and hair (trichome). How intermediates between the categories are best described has been discussed by Bruce K. Kirchoff et al. A recent study conducted by the Salk Institute extracted coordinates corresponding to each plant's base and leaves in 3D space. When plants on the graph were placed according to their actual nutrient travel distances and total branch lengths, the plants fell almost perfectly on the Pareto curve. "This means the way plants grow their architectures also optimises a very common network design tradeoff. Based on the environment and the species, the plant is selecting different ways to make tradeoffs for those particular environmental conditions."
Honoring Agnes Arber, author of the partial-shoot theory of the leaf, Rutishauser and Isler called the continuum approach Fuzzy Arberian Morphology (FAM). “Fuzzy” refers to fuzzy logic, “Arberian” to Agnes Arber. Rutishauser and Isler emphasised that this approach is not only supported by many morphological data but also by evidence from molecular genetics. More recent evidence from molecular genetics provides further support for continuum morphology. James (2009) concluded that "it is now widely accepted that... radiality [characteristic of most stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!" Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties.”
Process morphology describes and analyses the dynamic continuum of plant form. According to this approach, structures do not have process(es), they are process(es). Thus, the structure/process dichotomy is overcome by "an enlargement of our concept of 'structure' so as to include and recognise that in the living organism it is not merely a question of spatial structure with an 'activity' as something over or against it, but that the concrete organism is a spatio-temporal structure and that this spatio-temporal structure is the activity itself".
For Jeune, Barabé and Lacroix, classical morphology (that is, mainstream morphology, based on a qualitative homology concept implying mutually exclusive categories) and continuum morphology are sub-classes of the more encompassing process morphology (dynamic morphology).
Classical morphology, continuum morphology, and process morphology are highly relevant to plant evolution, especially the field of plant evolutionary biology (plant evo-devo) that tries to integrate plant morphology and plant molecular genetics. In a detailed case study on unusual morphologies, Rutishauser (2016) illustrated and discussed various topics of plant evo-devo such as the fuzziness (continuity) of morphological concepts, the lack of a one-to-one correspondence between structural categories and gene expression, the notion of morphospace, the adaptive value of bauplan features versus patio ludens, physiological adaptations, hopeful monsters and saltational evolution, the significance and limits of developmental robustness, etc. Rutishauser (2020) discussed the past and future of plant evo-devo. Our conception of the gynoecium and the search for a fossil ancestor of Angiosperms changes fundamentally from the perspective of evo-devo.
Whether we like it or not, morphological research is influenced by philosophical assumptions such as either/or logic, fuzzy logic, structure/process dualism or its transcendence. And empirical findings may influence the philosophical assumptions. Thus there are interactions between philosophy and empirical findings. These interactions are the subject of what has been referred to as philosophy of plant morphology.
One important and unique event in plant morphology of the 21st century was the publication of Kaplan's Principles of Plant Morphology by Donald R. Kaplan, edited by Chelsea D. Specht (2020). It is a well illustrated volume of 1305 pages in a very large format that presents a wealth of morphological data. Unfortunately, all of these data are interpreted only in terms of classical morphology and the qualitative homology concept, disregarding modern conceptual innovations. Including continuum and process morphology as well as molecular genetics would provide an enlarged scope.
See also
Glossary of plant morphology
Plant anatomy
Plant identification
Plant physiology
Plant evolutionary developmental biology
Taxonomy
References
External links
Botanical Visual Glossary
Plant morphology: fundamental issues
Branches of botany
Conservation biology
Conservation biology is the study of the conservation of nature and of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is an interdisciplinary subject drawing on natural and social sciences, and the practice of natural resource management.
The conservation ethic is based on the findings of conservation biology.
Origins
The term conservation biology and its conception as a new field originated with the convening of "The First International Conference on Research in Conservation Biology" held at the University of California, San Diego in La Jolla, California, in 1978, led by American biologists Bruce A. Wilcox and Michael E. Soulé with a group of leading university and zoo researchers and conservationists including Kurt Benirschke, Sir Otto Frankel, Thomas Lovejoy, and Jared Diamond. The meeting was prompted by concern over tropical deforestation, disappearing species, and eroding genetic diversity within species. The conference and proceedings that resulted sought to initiate the bridging of a gap between theory in ecology and evolutionary genetics on the one hand and conservation policy and practice on the other.
Conservation biology and the concept of biological diversity (biodiversity) emerged together, helping crystallize the modern era of conservation science and policy. The inherent multidisciplinary basis for conservation biology has led to new subdisciplines including conservation social science, conservation behavior and conservation physiology. It stimulated further development of conservation genetics which Otto Frankel had originated first but is now often considered a subdiscipline as well.
Description
The rapid decline of established biological systems around the world means that conservation biology is often referred to as a "Discipline with a deadline". Conservation biology is tied closely to ecology in researching the population ecology (dispersal, migration, demographics, effective population size, inbreeding depression, and minimum population viability) of rare or endangered species. Conservation biology is concerned with phenomena that affect the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, which will increase poverty and starvation, and will reset the course of evolution on this planet. Researchers acknowledge that projections are difficult, given the unknown potential impacts of many variables, including species introduction to new biogeographical settings and a non-analog climate.
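One of the population-genetic quantities listed above, effective population size, can be illustrated with the standard textbook correction for an unequal sex ratio (given here for orientation rather than as the only estimator):

$$N_e = \frac{4 N_m N_f}{N_m + N_f}$$

where $N_m$ and $N_f$ are the numbers of breeding males and females. A population of 10 breeding males and 90 breeding females, for example, has $N_e = 3600/100 = 36$, far below its census size of 100, which is one reason small or skewed populations lose genetic diversity faster than headcounts suggest.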
Conservation biologists research and educate on the trends and process of biodiversity loss, species extinctions, and the negative effect these are having on our capabilities to sustain the well-being of human society. Conservation biologists work in the field and office, in government, universities, non-profit organizations and industry. The topics of their research are diverse, because this is an interdisciplinary network with professional alliances in the biological as well as social sciences. Those dedicated to the cause and profession advocate for a global response to the current biodiversity crisis based on morals, ethics, and scientific reason. Organizations and citizens are responding to the biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that engage concerns at local through global scales. There is increasing recognition that conservation is not just about what is achieved but how it is done. A "conservation acrostic" has been created to emphasize that point where C = co-produced, O = open, N = nimble, S = solutions-oriented, E = empowering, R = relational, V = values-based, A = actionable, T = transdisciplinary, I = inclusive, O = optimistic, and N = nurturing.
History
Natural resource conservation
Conscious efforts to conserve and protect global biodiversity are a recent phenomenon. Natural resource conservation, however, has a history that extends prior to the age of conservation. Resource ethics grew out of necessity through direct relations with nature. Regulation or communal restraint became necessary to prevent selfish motives from taking more than could be locally sustained, therefore compromising the long-term supply for the rest of the community. This social dilemma with respect to natural resource management is often called the "Tragedy of the Commons".
From this principle, conservation biologists can trace communal resource based ethics throughout cultures as a solution to communal resource conflict. For example, the Alaskan Tlingit peoples and the Haida of the Pacific Northwest had resource boundaries, rules, and restrictions among clans with respect to the fishing of sockeye salmon. These rules were guided by clan elders who knew lifelong details of each river and stream they managed. There are numerous examples in history where cultures have followed rules, rituals, and organized practice with respect to communal natural resource management.
The Mauryan emperor Ashoka around 250 BC issued edicts restricting the slaughter of animals and certain kinds of birds, and also opened veterinary clinics.
Conservation ethics are also found in early religious and philosophical writings. There are examples in the Tao, Shinto, Hindu, Islamic and Buddhist traditions. In Greek philosophy, Plato lamented the degradation of pasture land: "What is left now is, so to say, the skeleton of a body wasted by disease; the rich, soft soil has been carried off and only the bare framework of the district left." In the Bible, through Moses, God commanded to let the land rest from cultivation every seventh year. Before the 18th century, however, much of European culture considered it a pagan view to admire nature. Wilderness was denigrated while agricultural development was praised. However, as early as AD 680 a wildlife sanctuary was founded on the Farne Islands by St Cuthbert in response to his religious beliefs.
Early naturalists
Natural history was a major preoccupation in the 18th century, with grand expeditions and the opening of popular public displays in Europe and North America. By 1900 there were 150 natural history museums in Germany, 250 in Great Britain, 250 in the United States, and 300 in France. Preservationist or conservationist sentiments are a development of the late 18th to early 20th centuries.
Before Charles Darwin set sail on HMS Beagle, most people in the world, including Darwin, believed in special creation and that all species were unchanged. Georges-Louis Leclerc was one of the first naturalists to question this belief. In his 44-volume natural history, he proposed that species evolve due to environmental influences. Erasmus Darwin, another naturalist who suggested that species evolved, noted that some species have vestigial structures: anatomical features with no apparent current function that would have been useful to the species' ancestors. The thinking of these 18th-century naturalists helped change the mindset of the early 19th-century naturalists.
By the early 19th century biogeography was ignited through the efforts of Alexander von Humboldt, Charles Lyell and Charles Darwin. The 19th-century fascination with natural history engendered a fervor to be the first to collect rare specimens, with the goal of doing so before they were driven extinct by other such collectors. Although the work of many 18th- and 19th-century naturalists was to inspire nature enthusiasts and conservation organizations, their writings, by modern standards, showed insensitivity towards conservation, as they would kill hundreds of specimens for their collections.
Conservation movement
The modern roots of conservation biology can be found in the late 18th-century Enlightenment period particularly in England and Scotland. Thinkers including Lord Monboddo described the importance of "preserving nature"; much of this early emphasis had its origins in Christian theology.
Scientific conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world. Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as the United States, where Yellowstone National Park was opened in 1872 as the world's first national park.
The term conservation came into widespread use in the late 19th century and referred to the management, mainly for economic reasons, of such natural resources as timber, fish, game, topsoil, pastureland, and minerals. In addition it referred to the preservation of forests (forestry), wildlife (wildlife refuge), parkland, wilderness, and watersheds. This period also saw the passage of the first conservation legislation and the establishment of the first nature conservation societies. The Sea Birds Preservation Act of 1869 was passed in Britain as the first nature protection law in the world after extensive lobbying from the Association for the Protection of Seabirds and the respected ornithologist Alfred Newton. Newton was also instrumental in the passage of the first Game laws from 1872, which protected animals during their breeding season so as to prevent the stock from being brought close to extinction.
One of the first conservation societies was the Royal Society for the Protection of Birds, founded in 1889 in Manchester as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing. Originally known as "the Plumage League", the group gained popularity and eventually amalgamated with the Fur and Feather League in Croydon, and formed the RSPB. The National Trust formed in 1895 with the manifesto to "...promote the permanent preservation, for the benefit of the nation, of lands, ... to preserve (so far practicable) their natural aspect." In May 1912, a month after the Titanic sank, banker and expert naturalist Charles Rothschild held a meeting at the Natural History Museum in London to discuss his idea for a new organisation to save the best places for wildlife in the British Isles. This meeting led to the formation of the Society for the Promotion of Nature Reserves, which later became the Wildlife Trusts.
In the United States, the Forest Reserve Act of 1891 gave the President power to set aside forest reserves from the land in the public domain. John Muir founded the Sierra Club in 1892, and the New York Zoological Society was set up in 1895. A series of national forests and preserves were established by Theodore Roosevelt from 1901 to 1909. The National Parks Act of 1916 included a 'use without impairment' clause, sought by John Muir, which eventually resulted in the removal of a proposal to build a dam in Dinosaur National Monument in 1959.
In the 20th century, Canadian civil servants, including Charles Gordon Hewitt and James Harkin, spearheaded the movement toward wildlife conservation.
In the 21st century, professional conservation officers have begun to collaborate with indigenous communities for protecting wildlife in Canada. Some conservation efforts have yet to take hold fully due to ecological neglect. For example, in the USA, 21st-century bowfishing of native fishes, which amounts to killing wild animals for recreation and disposing of them immediately afterwards, remains unregulated and unmanaged.
Global conservation efforts
In the mid-20th century, efforts arose to target individual species for conservation, notably efforts in big cat conservation in South America led by the New York Zoological Society. In the early 20th century the New York Zoological Society was instrumental in developing concepts of establishing preserves for particular species and conducting the necessary conservation studies to determine the suitability of locations that are most appropriate as conservation priorities; the work of Henry Fairfield Osborn Jr., Carl E. Akeley, Archie Carr and his son Archie Carr III is notable in this era. Akeley for example, having led expeditions to the Virunga Mountains and observed the mountain gorilla in the wild, became convinced that the species and the area were conservation priorities. He was instrumental in persuading Albert I of Belgium to act in defense of the mountain gorilla and establish Albert National Park (since renamed Virunga National Park) in what is now Democratic Republic of Congo.
By the 1970s, led primarily by work in the United States under the Endangered Species Act, along with the Species at Risk Act (SARA) of Canada and the Biodiversity Action Plans developed in Australia, Sweden, and the United Kingdom, hundreds of species-specific protection plans ensued. Notably the United Nations acted to conserve sites of outstanding cultural or natural importance to the common heritage of mankind. The programme was adopted by the General Conference of UNESCO in 1972. As of 2006, a total of 830 sites are listed: 644 cultural, 162 natural, and 24 mixed. The first country to pursue aggressive biological conservation through national legislation was the United States, which passed back-to-back legislation in the Endangered Species Act (1966) and National Environmental Policy Act (1970), which together injected major funding and protection measures into large-scale habitat protection and threatened species research. Other conservation developments, however, have taken hold throughout the world. India, for example, passed the Wildlife Protection Act of 1972.
In 1980, a significant development was the emergence of the urban conservation movement. A local organization was established in Birmingham, UK, a development followed in rapid succession in cities across the UK, then overseas. Although perceived as a grassroots movement, its early development was driven by academic research into urban wildlife. Initially perceived as radical, the movement's view of conservation being inextricably linked with other human activity has now become mainstream in conservation thought. Considerable research effort is now directed at urban conservation biology. The Society for Conservation Biology originated in 1985.
By 1992, most of the countries of the world had become committed to the principles of conservation of biological diversity with the Convention on Biological Diversity; subsequently many countries began programmes of Biodiversity Action Plans to identify and conserve threatened species within their borders, as well as protect associated habitats. The late 1990s saw increasing professionalism in the sector, with the maturing of organisations such as the Institute of Ecology and Environmental Management and the Society for the Environment.
Since 2000, the concept of landscape scale conservation has risen to prominence, with less emphasis being given to single-species or even single-habitat focused actions. Instead an ecosystem approach is advocated by most mainstream conservationists, although concerns have been expressed by those working to protect some high-profile species.
Ecology has clarified the workings of the biosphere; i.e., the complex interrelationships among humans, other species, and the physical environment. The burgeoning human population and associated agriculture, industry, and the ensuing pollution, have demonstrated how easily ecological relationships can be disrupted.
Concepts and foundations
Measuring extinction rates
Extinction rates are measured in a variety of ways. Conservation biologists measure and apply statistical measures of fossil records, rates of habitat loss, and a multitude of other variables, such as loss of biodiversity as a function of the rate of habitat loss and site occupancy, to obtain such estimates. The theory of island biogeography is possibly the most significant contribution toward the scientific understanding of both the process and how to measure the rate of species extinction. The current background extinction rate is estimated to be one species every few years. Actual extinction rates are estimated to be orders of magnitude higher. It is worth noting, however, that no existing models account for the complexity of unpredictable factors such as species movement, a non-analog climate, changing species interactions, evolutionary rates on finer time scales, and many other stochastic variables.
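As an illustration of how estimates from habitat loss are constructed, the minimal sketch below applies the species–area relationship from island biogeography, S = cA^z, to project the fraction of species expected to persist when habitat shrinks. The exponent z = 0.25 is a commonly cited literature value, and the 90% habitat-loss figure is a hypothetical placeholder, not data from any particular study.

```python
def remaining_species_fraction(area_remaining_fraction: float, z: float = 0.25) -> float:
    """Species-area relationship S = c * A**z implies that when habitat
    shrinks to a fraction f of its original area, the fraction of species
    expected to persist is f**z (the constant c cancels out)."""
    return area_remaining_fraction ** z

# Hypothetical example: 90% of a habitat is lost (10% remains).
f_remaining = remaining_species_fraction(0.10)
print(f"Species persisting: {f_remaining:.1%}")                   # ~56.2%
print(f"Species committed to extinction: {1 - f_remaining:.1%}")  # ~43.8%
```

Such projections inherit all the caveats noted above: they ignore species movement, changing interactions, and other stochastic factors, so they are best read as rough bounds rather than forecasts.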
The measure of ongoing species loss is made more complex by the fact that most of the Earth's species have not been described or evaluated. Estimates vary greatly, from how many species actually exist (estimated range: 3,600,000–111,700,000) to how many have received a species binomial (estimated range: 1.5–8 million). Less than 1% of all species that have been described have been studied beyond simply noting their existence. From these figures, the IUCN reports that 23% of vertebrates, 5% of invertebrates and 70% of plants that have been evaluated are designated as endangered or threatened. Better knowledge is being constructed by The Plant List for actual numbers of species.
Systematic conservation planning
Systematic conservation planning is an effective way to seek and identify efficient and effective types of reserve design to capture or sustain the highest priority biodiversity values and to work with communities in support of local ecosystems. Margules and Pressey identify six interlinked stages in the systematic planning approach:
Compile data on the biodiversity of the planning region
Identify conservation goals for the planning region
Review existing conservation areas
Select additional conservation areas
Implement conservation actions
Maintain the required values of conservation areas
Conservation biologists regularly prepare detailed conservation plans for grant proposals or to effectively coordinate their plan of action and to identify best management practices. Systematic strategies generally employ the services of Geographic Information Systems to assist in the decision-making process; a minimal sketch of one common site-selection heuristic follows. The SLOSS debate is often considered in planning.
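Stage 4 of the planning sequence above, selecting additional conservation areas, is often tackled algorithmically. The sketch below is a hypothetical illustration of the classic greedy complementarity heuristic: at each step it adds the candidate site covering the most not-yet-represented species. Site and species names are invented for the example; real planning tools use far richer cost, threat, and connectivity data.

```python
def greedy_reserve_selection(sites: dict[str, set[str]]) -> list[str]:
    """Greedy complementarity heuristic: repeatedly pick the site that
    adds the most species not yet represented in the reserve network."""
    unrepresented = set().union(*sites.values())
    chosen: list[str] = []
    while unrepresented:
        best = max(sites, key=lambda s: len(sites[s] & unrepresented))
        if not sites[best] & unrepresented:
            break  # remaining sites add nothing new
        chosen.append(best)
        unrepresented -= sites[best]
    return chosen

# Hypothetical site-by-species occurrence data.
sites = {
    "wetland_A": {"heron", "frog", "dragonfly"},
    "forest_B": {"owl", "frog"},
    "grassland_C": {"lark", "vole"},
    "forest_D": {"owl", "lark", "heron"},
}
print(greedy_reserve_selection(sites))
# -> ['wetland_A', 'grassland_C', 'forest_B']: all six species in three sites
```

The heuristic captures the core idea of complementarity, choosing sites for what they add rather than for their richness alone, which is why forest_D, though species-rich, is never selected here.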
Conservation physiology: a mechanistic approach to conservation
Conservation physiology was defined by Steven J. Cooke and colleagues as: "An integrative scientific discipline applying physiological concepts, tools, and knowledge to characterizing biological diversity and its ecological implications; understanding and predicting how organisms, populations, and ecosystems respond to environmental change and stressors; and solving conservation problems across the broad range of taxa (i.e. including microbes, plants, and animals). Physiology is considered in the broadest possible terms to include functional and mechanistic responses at all scales, and conservation includes the development and refinement of strategies to rebuild populations, restore ecosystems, inform conservation policy, generate decision-support tools, and manage natural resources."

Conservation physiology is particularly relevant to practitioners in that it has the potential to generate cause-and-effect relationships and reveal the factors that contribute to population declines.
Conservation biology as a profession
The Society for Conservation Biology is a global community of conservation professionals dedicated to advancing the science and practice of conserving biodiversity. Conservation biology as a discipline reaches beyond biology, into subjects such as philosophy, law, economics, humanities, arts, anthropology, and education. Within biology, conservation genetics and evolution are immense fields unto themselves, but these disciplines are of prime importance to the practice and profession of conservation biology.
Conservationists may introduce bias when they support policies using qualitative descriptions such as "habitat degradation" or "healthy ecosystems". Conservation biologists advocate for reasoned and sensible management of natural resources and do so with a disclosed combination of science, reason, logic, and values in their conservation management plans. This sort of advocacy is similar to the medical profession advocating for healthy lifestyle options; both are beneficial to human well-being yet remain scientific in their approach.
There is a movement in conservation biology suggesting a new form of leadership is needed to mobilize conservation biology into a more effective discipline that is able to communicate the full scope of the problem to society at large. The movement proposes an adaptive leadership approach that parallels an adaptive management approach. The concept is based on a new philosophy or leadership theory steering away from historical notions of power, authority, and dominance. Adaptive conservation leadership is reflective and more equitable as it applies to any member of society who can mobilize others toward meaningful change using communication techniques that are inspiring, purposeful, and collegial. Adaptive conservation leadership and mentoring programs are being implemented by conservation biologists through organizations such as the Aldo Leopold Leadership Program.
Approaches
Conservation may be classified as either in-situ conservation, which is protecting an endangered species in its natural habitat, or ex-situ conservation, which occurs outside the natural habitat. In-situ conservation involves protecting or restoring the habitat. Ex-situ conservation, on the other hand, involves protection outside of an organism's natural habitat, such as on reservations or in gene banks, in circumstances where viable populations may not be present in the natural habitat.
Conserving habitats such as forests, water, and soil in their natural state is crucial for the species that depend on them. Creating an entirely new environment that merely resembles the original habitat is less effective than preserving the original habitat itself. A reforestation campaign in Nepal, for example, has helped increase the density and area of existing forests, which proved better than creating new environments after the originals were lost. Recent research indicates that old forests store more carbon than young ones, making their protection especially important. The campaign, launched by Himalayan Adventure Therapy, periodically visits old forests that are vulnerable to losses in density and area from unplanned urbanization, plants saplings of the same tree families in areas where the old forest has been lost, and also plants them in barren areas adjoining the forest, thereby maintaining the forest's density and extent.
Also, non-interference may be used, which is termed a preservationist method. Preservationists advocate for giving areas of nature and species a protected existence that halts interference from humans. In this regard, conservationists differ from preservationists in the social dimension, as conservation biology engages society and seeks equitable solutions for both society and ecosystems. Some preservationists emphasize the potential of biodiversity in a world without humans.
Ecological monitoring in conservation
Ecological monitoring is the systematic collection of data relevant to the ecology of a species or habitat at repeating intervals with defined methods. Long-term monitoring of environmental and ecological metrics is an important part of any successful conservation initiative. Unfortunately, long-term data for many species and habitats are often not available. A lack of historical data on species populations, habitats, and ecosystems means that any current or future conservation work will have to make assumptions to determine if the work is having any effect on population or ecosystem health. Ecological monitoring can provide early warning signals of deleterious effects (from human activities or natural changes in an environment) on an ecosystem and its species. In order for signs of negative trends in ecosystem or species health to be detected, monitoring methods must be carried out at appropriate time intervals, and the metric must be able to capture the trend of the population or habitat as a whole.
Long-term monitoring can include the continued measuring of many biological, ecological, and environmental metrics, including annual breeding success, population size estimates, water quality, and biodiversity (which can be measured in many ways, e.g. with the Shannon Index). When determining which metrics to monitor for a conservation project, it is important to understand how an ecosystem functions and what role different species and abiotic factors play within the system. It is important to have a precise reason for why ecological monitoring is implemented; within the context of conservation, this reasoning is often to track changes before, during, or after conservation measures are put in place to help a species or habitat recover from degradation and/or maintain integrity.
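Since the Shannon Index is mentioned above as one way to quantify biodiversity, a minimal sketch of its computation, H' = −Σ pᵢ ln pᵢ, is given here; the abundance counts are hypothetical survey data invented for the example.

```python
import math

def shannon_index(abundances: list[int]) -> float:
    """Shannon diversity index H' = -sum(p_i * ln(p_i)), where p_i is
    the proportion of individuals belonging to species i."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

# Hypothetical counts for four species at a monitoring site.
counts = [40, 30, 20, 10]
print(f"H' = {shannon_index(counts):.3f}")  # -> H' = 1.280
```

Tracked over repeated surveys at the same site, a declining H' can serve as exactly the kind of early warning signal of ecosystem change described above.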
Another benefit of ecological monitoring is the hard evidence it provides scientists to use for advising policy makers and funding bodies about conservation efforts. Not only is ecological monitoring data important for convincing politicians, funders, and the public why a conservation program is important to implement, but also to keep them convinced that a program should be continued to be supported.
There is plenty of debate on how conservation resources can be used most efficiently; even within ecological monitoring, there is debate over which metrics money, time, and personnel should be dedicated to for the best chance of making a positive impact. One general discussion topic is whether monitoring should happen where there is little human impact (to understand a system that has not been degraded by humans), where there is human impact (so the effects of humans can be investigated), or where there are data deserts and little is known about the habitats' and communities' response to human perturbations.
The concept of bioindicators (indicator species) can be applied to ecological monitoring as a way to investigate how pollution is affecting an ecosystem. Species like amphibians and birds are highly susceptible to pollutants in their environment due to behaviours and physiological features that cause them to absorb pollutants at a faster rate than other species. Amphibians spend parts of their time in the water and on land, making them susceptible to changes in both environments. They also have very permeable skin that allows them to breathe and take in water, which means they also absorb any air- or water-soluble pollutants. Birds often cover a wide range of habitat types annually, and also generally revisit the same nesting site each year. This makes it easier for researchers to track ecological effects at both an individual and a population level for the species.
Many conservation researchers believe that having a long-term ecological monitoring program should be a priority for conservation projects, protected areas, and regions where environmental harm mitigation is used.
Ethics and values
Conservation biologists are interdisciplinary researchers who practice ethics in the biological and social sciences. Chan states that conservationists must advocate for biodiversity and can do so in a scientifically ethical manner by not simultaneously advocating against other competing values.
A conservationist may be inspired by the resource conservation ethic, which seeks to identify what measures will deliver "the greatest good for the greatest number of people for the longest time." In contrast, some conservation biologists argue that nature has an intrinsic value that is independent of anthropocentric usefulness or utilitarianism. Aldo Leopold was a classical thinker and writer on such conservation ethics whose philosophy, ethics and writings are still valued and revisited by modern conservation biologists.
Conservation priorities
The International Union for Conservation of Nature (IUCN) has organized a global assortment of scientists and research stations across the planet to monitor the changing state of nature in an effort to tackle the extinction crisis. The IUCN provides annual updates on the status of species conservation through its Red List. The IUCN Red List serves as an international conservation tool by identifying those species most in need of conservation attention and by providing a global index on the status of biodiversity. More than the dramatic rates of species loss, however, conservation scientists note that the sixth mass extinction is a biodiversity crisis requiring far more action than a priority focus on rare, endemic, or endangered species. Concerns about biodiversity loss cover a broader conservation mandate that looks at ecological processes, such as migration, and a holistic examination of biodiversity at levels beyond the species, including genetic, population, and ecosystem diversity. Extensive, systematic, and rapid rates of biodiversity loss threaten the sustained well-being of humanity by limiting the supply of ecosystem services that are otherwise regenerated by the complex and evolving holistic network of genetic and ecosystem diversity. While the conservation status of species is employed extensively in conservation management, some scientists highlight that it is the common species that are the primary source of exploitation and habitat alteration by humanity. Moreover, common species are often undervalued despite their role as the primary source of ecosystem services.
While most in the community of conservation science "stress the importance" of sustaining biodiversity, there is debate on how to prioritize genes, species, or ecosystems, which are all components of biodiversity (e.g. Bowen, 1999). While the predominant approach to date has been to focus efforts on endangered species by conserving biodiversity hotspots, some scientists and conservation organizations, such as the Nature Conservancy, argue that it is more cost-effective, logical, and socially relevant to invest in biodiversity coldspots. Discovering, naming, and mapping out the distribution of every species, they argue, is an ill-advised conservation venture. They reason it is better to understand the significance of the ecological roles of species.
Biodiversity hotspots and coldspots are a way of recognizing that the spatial concentration of genes, species, and ecosystems is not uniformly distributed on the Earth's surface. For example, "... 44% of all species of vascular plants and 35% of all species in four vertebrate groups are confined to 25 hotspots comprising only 1.4% of the land surface of the Earth."
Those arguing in favor of setting priorities for coldspots point out that there are other measures to consider beyond biodiversity. They point out that emphasizing hotspots downplays the importance of the social and ecological connections to vast areas of the Earth's ecosystems where biomass, not biodiversity, reigns supreme. It is estimated that 36% of the Earth's surface, encompassing 38.9% of the world's vertebrates, lacks the endemic species to qualify as a biodiversity hotspot. Moreover, measures show that maximizing protections for biodiversity does not capture ecosystem services any better than targeting randomly chosen regions. Population-level biodiversity (mostly in coldspots) is disappearing at a rate ten times that at the species level. The level of importance in addressing biomass versus endemism as a concern for conservation biology is highlighted in literature measuring the level of threat to global ecosystem carbon stocks that do not necessarily reside in areas of endemism. A hotspot priority approach would not invest so heavily in places such as steppes, the Serengeti, the Arctic, or taiga. These areas contribute a great abundance of population-level (not species-level) biodiversity and ecosystem services, including cultural value and planetary nutrient cycling.
Those in favor of the hotspot approach point out that species are irreplaceable components of the global ecosystem, that they are concentrated in the places that are most threatened, and that they should therefore receive maximal strategic protections. This is a hotspot approach because the priority is set to target species-level concerns over population-level concerns or biomass. Species richness and genetic biodiversity contribute to and engender ecosystem stability, ecosystem processes, evolutionary adaptability, and biomass. Both sides agree, however, that conserving biodiversity is necessary to reduce the extinction rate and identify an inherent value in nature; the debate hinges on how to prioritize limited conservation resources in the most cost-effective way.
Economic values and natural capital
Conservation biologists have started to collaborate with leading global economists to determine how to measure the wealth and services of nature and to make these values apparent in global market transactions. This system of accounting is called natural capital and would, for example, register the value of an ecosystem before it is cleared to make way for development. The WWF publishes its Living Planet Report, which provides a global index of biodiversity by monitoring approximately 5,000 populations in 1,686 vertebrate species (mammals, birds, fish, reptiles, and amphibians) and reports on the trends in much the same way that the stock market is tracked.
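To give a feel for how such a trend index can be built, here is a minimal sketch of a geometric-mean abundance index of the general kind used to aggregate population trends; it is an invented simplification for illustration, not the WWF's actual methodology, and all population figures are hypothetical.

```python
import math

def simple_abundance_index(populations):
    """Geometric mean of each population's abundance relative to a baseline
    year -- a simplified, illustrative analogue of aggregate trend indices
    (not the actual Living Planet Index calculation)."""
    ratios = [current / baseline for baseline, current in populations]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical (baseline, current) population sizes for three species.
pops = [(1000, 700), (500, 550), (200, 120)]
print(f"index = {simple_abundance_index(pops):.2f}")  # < 1.0 means net decline
```

The geometric mean is used so that a population that halves and one that doubles cancel out, preventing a few booming populations from masking widespread declines.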
This method of measuring the global economic benefit of nature has been endorsed by the G8+5 leaders and the European Commission. Nature sustains many ecosystem services that benefit humanity. Many of the Earth's ecosystem services are public goods without a market and therefore have no price or value. When the stock market registers a financial crisis, traders on Wall Street are not in the business of trading stocks for much of the planet's living natural capital stored in ecosystems. There is no natural stock market with investment portfolios in sea horses, amphibians, insects, and other creatures that provide a sustainable supply of ecosystem services that are valuable to society. The ecological footprint of society has exceeded the bio-regenerative capacity limits of the planet's ecosystems by about 30 percent, which is the same percentage by which vertebrate populations declined from 1970 through 2005.
The inherent natural economy plays an essential role in sustaining humanity, including the regulation of global atmospheric chemistry, pollinating crops, pest control, cycling soil nutrients, purifying our water supply, supplying medicines and health benefits, and unquantifiable quality-of-life improvements. There is a relationship, a correlation, between markets and natural capital, and between social income inequity and biodiversity loss: there are greater rates of biodiversity loss in places where the inequity of wealth is greatest.
Although a direct market comparison of natural capital is likely insufficient in terms of human value, one measure of ecosystem services suggests the contribution amounts to trillions of dollars yearly. For example, one segment of North American forests has been assigned an annual value of 250 billion dollars; as another example, honey bee pollination is estimated to provide between 10 and 18 billion dollars of value yearly. The value of ecosystem services on one New Zealand island has been imputed to be as great as the GDP of that region. This planetary wealth is being lost at an incredible rate as the demands of human society exceed the bio-regenerative capacity of the Earth. While biodiversity and ecosystems are resilient, the danger of losing them is that humans cannot recreate many ecosystem functions through technological innovation.
Strategic species concepts
Keystone species
Some species, called keystone species, form a central supporting hub unique to their ecosystem. The loss of such a species results in a collapse in ecosystem function, as well as the loss of coexisting species. Keystone species are usually predators due to their ability to control the population of prey in their ecosystem. The importance of a keystone species was shown by the extinction of the Steller's sea cow (Hydrodamalis gigas) through its interaction with sea otters, sea urchins, and kelp. Kelp beds grow and form nurseries in shallow waters to shelter creatures that support the food chain. Sea urchins feed on kelp, while sea otters feed on sea urchins. With the rapid decline of sea otters due to overhunting, sea urchin populations grazed unrestricted on the kelp beds and the ecosystem collapsed. Left unchecked, the urchins destroyed the shallow water kelp communities that supported the Steller's sea cow's diet and hastened their demise. The sea otter was thought to be a keystone species because the coexistence of many ecological associates in the kelp beds relied upon otters for their survival. However, this was later questioned by Turvey and Risley, who showed that hunting alone would have driven the Steller's sea cow extinct.
Indicator species
An indicator species has a narrow set of ecological requirements, which makes it a useful target for observing the health of an ecosystem. Some animals, such as amphibians with their semi-permeable skin and linkages to wetlands, have an acute sensitivity to environmental harm and thus may serve as a miner's canary. Indicator species are monitored in an effort to capture environmental degradation through pollution or some other link to proximate human activities. Monitoring an indicator species is a measure to determine if there is a significant environmental impact that can serve to advise or modify practice, such as through different forest silviculture treatments and management scenarios, or to measure the degree of harm that a pesticide may impart on the health of an ecosystem.
Government regulators, consultants, or NGOs regularly monitor indicator species; however, there are limitations and many practical considerations that must be observed for the approach to be effective. It is generally recommended that multiple indicators (genes, populations, species, communities, and landscape) be monitored for effective conservation measurement that prevents harm to the complex, and often unpredictable, response from ecosystem dynamics (Noss, 1997).
Umbrella and flagship species
An example of an umbrella species is the monarch butterfly, because of its lengthy migrations and aesthetic value. The monarch migrates across North America, covering multiple ecosystems, and so requires a large area to exist. Any protections afforded to the monarch butterfly will at the same time umbrella many other species and habitats. An umbrella species is often used as a flagship species, which are species, such as the giant panda, the blue whale, the tiger, the mountain gorilla, and the monarch butterfly, that capture the public's attention and attract support for conservation measures. Paradoxically, however, conservation bias towards flagship species sometimes threatens other species of chief concern.
Context and trends
Conservation biologists study trends and processes from the paleontological past to the ecological present as they gain an understanding of the context related to species extinction. It is generally accepted that there have been five major global mass extinctions that register in Earth's history. These include the Ordovician (440 mya), Devonian (370 mya), Permian–Triassic (245 mya), Triassic–Jurassic (200 mya), and Cretaceous–Paleogene (66 mya) extinction spasms. Within the last 10,000 years, human influence over the Earth's ecosystems has been so extensive that scientists have difficulty estimating the number of species lost; that is to say, the rates of deforestation, reef destruction, wetland draining, and other human acts are proceeding much faster than human assessment of species. The latest Living Planet Report by the World Wide Fund for Nature estimates that we have exceeded the bio-regenerative capacity of the planet, requiring 1.6 Earths to support the demands placed on our natural resources.
Holocene extinction
Conservation biologists are dealing with and have published evidence from all corners of the planet indicating that humanity may be causing the sixth and fastest planetary extinction event. It has been suggested that an unprecedented number of species is becoming extinct in what is known as the Holocene extinction event. The global extinction rate may be approximately 1,000 times higher than the natural background extinction rate. It is estimated that two-thirds of all mammal genera and one-half of all mammal species weighing at least 44 kg have gone extinct in the last 50,000 years. The Global Amphibian Assessment reports that amphibians are declining on a global scale faster than any other vertebrate group, with over 32% of all surviving species being threatened with extinction. The surviving populations are in continual decline in 43% of those that are threatened. Since the mid-1980s the actual rates of extinction have been more than 211 times the rates measured from the fossil record. However, "The current amphibian extinction rate may range from 25,039 to 45,474 times the background extinction rate for amphibians." The global extinction trend occurs in every major vertebrate group that is being monitored. For example, 23% of all mammals and 12% of all birds are Red Listed by the International Union for Conservation of Nature (IUCN), meaning they too are threatened with extinction. Even though extinction is natural, the decline in species is happening at such an incredible rate that evolution simply cannot match it, leading to the greatest continual mass extinction on Earth. Humans have dominated the planet, and our high consumption of resources, along with the pollution generated, is affecting the environments in which other species live. There is a wide variety of species that humans are working to protect, such as the Hawaiian crow and the whooping crane of Texas. People can also take action on preserving species by advocating and voting for global and national policies that improve climate, under the concepts of climate mitigation and climate restoration. The Earth's oceans demand particular attention as climate change continues to alter pH levels, making them uninhabitable for shelled organisms, whose shells dissolve as a result.
Status of oceans and reefs
Global assessments of coral reefs of the world continue to report drastic and rapid rates of decline. By 2000, 27% of the world's coral reef ecosystems had effectively collapsed. The largest period of decline occurred in a dramatic "bleaching" event in 1998, when approximately 16% of all the coral reefs in the world disappeared in less than a year. Coral bleaching is caused by a mixture of environmental stresses, including increases in ocean temperature and acidity, causing both the release of symbiotic algae and the death of corals. Decline and extinction risk in coral reef biodiversity has risen dramatically in the past ten years. The loss of coral reefs, which are predicted to go extinct in the next century, threatens the balance of global biodiversity, will have huge economic impacts, and endangers food security for hundreds of millions of people. Conservation biology plays an important role in international agreements covering the world's oceans and other issues pertaining to biodiversity.
The oceans are threatened by acidification due to an increase in CO2 levels. This is a serious threat to societies that rely heavily upon oceanic natural resources. A concern is that the majority of marine species will not be able to evolve or acclimate in response to the changes in ocean chemistry.
The prospect of averting mass extinction seems unlikely when "90% of all of the large (average approximately ≥50 kg), open ocean tuna, billfishes, and sharks in the ocean" are reportedly gone. Given the scientific review of current trends, the ocean is predicted to have few surviving multi-cellular organisms, with only microbes left to dominate marine ecosystems.
Groups other than vertebrates
Serious concerns are also being raised about taxonomic groups that do not receive the same degree of social attention or attract the same funding as the vertebrates. These include fungal (including lichen-forming species), invertebrate (particularly insect), and plant communities, in which the vast majority of biodiversity is represented. Conservation of fungi and conservation of insects, in particular, are both of pivotal importance for conservation biology. As mycorrhizal symbionts, and as decomposers and recyclers, fungi are essential for the sustainability of forests. The value of insects in the biosphere is enormous because they outnumber all other living groups in measure of species richness. The greatest bulk of biomass on land is found in plants, which is sustained by insect relations. This great ecological value of insects is countered by a society that often reacts negatively toward these aesthetically 'unpleasant' creatures.
One area of concern in the insect world that has caught the public eye is the mysterious case of the missing honey bees (Apis mellifera). Honey bees provide an indispensable ecological service through their acts of pollination, supporting a huge variety of agricultural crops, and their honey and wax are used widely throughout the world. The sudden disappearance of bees leaving empty hives, known as colony collapse disorder (CCD), is not uncommon. However, in a 16-month period from 2006 through 2007, 29% of 577 beekeepers across the United States reported CCD losses in up to 76% of their colonies. This sudden demographic loss in bee numbers is placing a strain on the agricultural sector. The cause behind the massive declines is puzzling scientists. Pests, pesticides, and global warming are all being considered as possible causes.
Another highlight that links conservation biology to insects, forests, and climate change is the mountain pine beetle (Dendroctonus ponderosae) epidemic of British Columbia, Canada, which has infested vast tracts of forested land since 1999. An action plan has been prepared by the Government of British Columbia to address this problem.
Conservation biology of parasites
A large proportion of parasite species are threatened by extinction. A few of them are being eradicated as pests of humans or domestic animals; however, most of them are harmless. Parasites also make up a significant amount of global biodiversity, given that they constitute a large proportion of all species on Earth, making them of increasing conservation interest. Threats include the decline or fragmentation of host populations and the extinction of host species. Parasites are intricately woven into ecosystems and food webs, thereby occupying valuable roles in ecosystem structure and function.
Threats to biodiversity
Today, many threats to biodiversity exist. An acronym that can be used to express the top present-day threats is H.I.P.P.O., which stands for Habitat loss, Invasive species, Pollution, human Population, and Overharvesting. The primary threats to biodiversity are habitat destruction (such as deforestation, agricultural expansion, and urban development) and overexploitation (such as the wildlife trade). Habitat fragmentation also poses challenges, because the global network of protected areas only covers 11.5% of the Earth's surface. A significant consequence of fragmentation and of the lack of linked protected areas is the reduction of animal migration on a global scale. Considering that billions of tonnes of biomass are responsible for nutrient cycling across the Earth, the reduction of migration is a serious matter for conservation biology.
However, human activities need not necessarily cause irreparable harm to the biosphere. With conservation management and planning for biodiversity at all levels, from genes to ecosystems, there are examples where humans coexist with nature in a sustainable, mutually beneficial way. Even with the current threats to biodiversity, there are ways to improve the current condition and start anew.
Many of the threats to biodiversity, including disease and climate change, are reaching inside the borders of protected areas, leaving them 'not-so protected' (e.g. Yellowstone National Park). Climate change, for example, is often cited as a serious threat in this regard, because there is a feedback loop between species extinction and the release of carbon dioxide into the atmosphere. Ecosystems store and cycle large amounts of carbon, which regulates global conditions. Major climate shifts and temperature changes are already making survival difficult for some species. The effects of global warming add a catastrophic threat toward a mass extinction of global biological diversity. Many more species are predicted to face unprecedented levels of extinction risk due to population increase, climate change, and economic development in the future. Conservationists have claimed that not all species can be saved and that they must decide which ones their efforts should protect; this concept is known as conservation triage. The extinction threat is estimated to range from 15 to 37 percent of all species by 2050, or 50 percent of all species over the next 50 years. The current extinction rate is 100–100,000 times more rapid than over the last several billion years.
See also
Applied ecology
Bird observatory
Conservation-reliant species
Ecological extinction
Gene pool
Genetic erosion
Genetic pollution
In-situ conservation
Indigenous peoples: environmental benefits
List of basic biology topics
List of biological websites
List of biology topics
List of conservation organisations
List of conservation topics
Mutualisms and conservation
Natural environment
Nature conservation
Nature conservation organizations by country
Protected area
Regional Red List
Renewable resource
Restoration ecology
Tyranny of small decisions
Water conservation
Welfare biology
Wildlife disease
Wildlife management
World Conservation Monitoring Centre
References
Further reading
Periodicals
Animal Conservation
Biological Conservation
Conservation, a quarterly magazine of the Society for Conservation Biology
Conservation and Society
Conservation Biology, a peer-reviewed journal of the Society for Conservation Biology
Conservation Letters
Diversity and Distributions
Ecology and Society
External links
Conservation Biology Institute (CBI)
United Nations Environment Programme – World Conservation Monitoring Centre (UNEP-WCMC)
The Center for Biodiversity and Conservation – American Museum of Natural History
Dictionary of the History of Ideas
Conservationevidence.com – Free access to conservation studies
Landscape ecology
Habitat
Philosophy of biology
Paleobiology
Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings of both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth.
Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees.
An investigator in this field is known as a paleobiologist.
Important research areas
Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology.
Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology.
Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology.
Paleovirology examines the evolutionary history of viruses on paleobiological timescales.
Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic.
Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life.
Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism.
Paleoichnology analyzes the tracks, borings, trails, burrows, impressions, and other trace fossils left by ancient organisms in order to gain insight into their behavior and ecology.
Stratigraphic paleobiology studies long-term secular changes, as well as the (short-term) bed-by-bed sequence of changes, in organismal characteristics and behaviors. See also stratification, sedimentary rocks and the geologic time scale.
Evolutionary developmental paleobiology examines the evolutionary aspects of the modes and trajectories of growth and development in the evolution of life – clades both extinct and extant. See also adaptive radiation, cladistics, evolutionary biology, developmental biology and phylogenetic tree.
Paleobiologists
The founder or "father" of modern paleobiology was Baron Franz Nopcsa (1877 to 1933), a Hungarian scientist trained at the University of Vienna. He initially termed the discipline "paleophysiology".
However, credit for coining the word paleobiology itself should go to Professor Charles Schuchert. He proposed the term in 1904 so as to initiate "a broad new science" joining "traditional paleontology with the evidence and insights of geology and isotopic chemistry."
On the other hand, Charles Doolittle Walcott, a Smithsonian adventurer, has been cited as the "founder of Precambrian paleobiology". Although best known as the discoverer of the mid-Cambrian Burgess shale animal fossils, in 1883 this American curator found the "first Precambrian fossil cells known to science" – a stromatolite reef then known as Cryptozoon algae. In 1899 he discovered the first acritarch fossil cells, a Precambrian algal phytoplankton he named Chuaria. Lastly, in 1914, Walcott reported "minute cells and chains of cell-like bodies" belonging to Precambrian purple bacteria.
Later 20th-century paleobiologists have also figured prominently in finding Archaean and Proterozoic eon microfossils: In 1954, Stanley A. Tyler and Elso S. Barghoorn described 2.1 billion-year-old cyanobacteria and fungi-like microflora at their Gunflint Chert fossil site. Eleven years later, Barghoorn and J. William Schopf reported finely-preserved Precambrian microflora at their Bitter Springs site of the Amadeus Basin, Central Australia.
In 1993, Schopf discovered O2-producing blue-green bacteria at his 3.5 billion-year-old Apex Chert site in Pilbara Craton, Marble Bar, in the northwestern part of Western Australia. So paleobiologists were at last homing in on the origins of the Precambrian "Oxygen catastrophe".
During the early part of the 21st century, two paleobiologists, Anjali Goswami and Thomas Halliday, studied the evolution of mammaliaforms during the Mesozoic and Cenozoic eras (from roughly 252 million years ago to the present). Additionally, they uncovered and studied the morphological disparity and rapid evolutionary rates of organisms living near the end of the Cretaceous period and in the aftermath of the Cretaceous–Paleogene mass extinction (66 million years ago).
Paleobiologic journals
Acta Palaeontologica Polonica
Biology and Geology
Historical Biology
PALAIOS
Palaeogeography, Palaeoclimatology, Palaeoecology
Paleobiology (journal)
Paleoceanography
Paleobiology in the general press
Books written for the general public on this topic include the following:
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds written by Thomas Halliday
Introduction to Paleobiology and the Fossil Record (2020) by Michael J. Benton and David A. T. Harper
See also
History of biology
History of paleontology
History of invertebrate paleozoology
Molecular paleontology
Taxonomy of commonly fossilised invertebrates
Treatise on Invertebrate Paleontology
Footnotes
Derek E.G. Briggs and Peter R. Crowther, eds. (2003). Palaeobiology II. Malden, Massachusetts: Blackwell Publishing. The second edition of an acclaimed British textbook.
Robert L. Carroll (1998). Patterns and Processes of Vertebrate Evolution. Cambridge Paleobiology Series. Cambridge, England: Cambridge University Press. Applies paleobiology to the adaptive radiation of fishes and quadrupeds.
Matthew T. Carrano, Timothy Gaudin, Richard Blob, and John Wible, eds. (2006). Amniote Paleobiology: Perspectives on the Evolution of Mammals, Birds and Reptiles. Chicago: University of Chicago Press. This book describes paleobiological research into land vertebrates of the Mesozoic and Cenozoic eras.
Robert B. Eckhardt (2000). Human Paleobiology. Cambridge Studies in Biology and Evolutionary Anthropology. Cambridge, England: Cambridge University Press. This book connects paleoanthropology and archeology to the field of paleobiology.
Douglas H. Erwin (2006). Extinction: How Life on Earth Nearly Ended 250 Million Years Ago. Princeton: Princeton University Press. An investigation by a paleobiologist into the many theories as to what happened during the catastrophic Permian-Triassic transition.
Brian Keith Hall and Wendy M. Olson, eds. (2003). Keywords and Concepts in Evolutionary Biology. Cambridge, Massachusetts: Harvard University Press.
David Jablonski, Douglas H. Erwin, and Jere H. Lipps (1996). Evolutionary Paleobiology. Chicago: University of Chicago Press, 492 pages. A fine American textbook.
Masatoshi Nei and Sudhir Kumar (2000). Molecular Evolution and Phylogenetics. Oxford, England: Oxford University Press. This text links DNA/RNA analysis to the evolutionary "tree of life" in paleobiology.
Donald R. Prothero (2004). Bringing Fossils to Life: An Introduction to Paleobiology. New York: McGraw Hill. An acclaimed book for the novice fossil-hunter and young adults.
Mark Ridley, ed. (2004). Evolution. Oxford, England: Oxford University Press. An anthology of analytical studies in paleobiology.
Raymond Rogers, David Eberth, and Tony Fiorillo (2007). Bonebeds: Genesis, Analysis and Paleobiological Significance. Chicago: University of Chicago Press. A book regarding the fossils of vertebrates, especially tetrapods on land during the Mesozoic and Cenozoic eras.
Thomas J. M. Schopf, ed. (1972). Models in Paleobiology. San Francisco: Freeman, Cooper. A much-cited, seminal classic in the field discussing methodology and quantitative analysis.
Thomas J.M. Schopf (1980). Paleoceanography. Cambridge, Massachusetts: Harvard University Press. A later book by the noted paleobiologist. This text discusses ancient marine ecology.
J. William Schopf (2001). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press. The use of biochemical and ultramicroscopic analysis to analyze microfossils of bacteria and archaea.
Paul Selden and John Nudds (2005). Evolution of Fossil Ecosystems. Chicago: University of Chicago Press. A recent analysis and discussion of paleoecology.
David Sepkoski (2012). Rereading the Fossil Record: The Growth of Paleobiology as an Evolutionary Discipline. Chicago: University of Chicago Press, 432 pages. A history since the mid-19th century, with a focus on the "revolutionary" era of the 1970s and early 1980s and the work of Stephen Jay Gould and David Raup.
Paul Tasch (1980). Paleobiology of the Invertebrates. New York: John Wiley & Sons. Applies statistics to the evolution of sponges, cnidarians, worms, brachiopods, bryozoa, mollusks, and arthropods.
Shuhai Xiao and Alan J. Kaufman, eds. (2006). Neoproterozoic Geobiology and Paleobiology. New York: Springer Science+Business Media. This book describes research into the fossils of the earliest multicellular animals and plants, especially the Ediacaran period invertebrates and algae.
Bernard Ziegler and R. O. Muir (1983). Introduction to Palaeobiology. Chichester, England: E. Horwood. A classic, British introductory textbook.
External links
Paleobiology website of the National Museum of Natural History (Smithsonian) in Washington, D.C. (archived 11 March 2007)
The Paleobiology Database
Developmental biology
Evolutionary biology
Subfields of paleontology
Chemosynthesis
In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria.
Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen.
Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water.
It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later.
Hydrogen sulfide chemosynthesis process
Giant tube worms use bacteria in their trophosome to fix carbon dioxide (using hydrogen sulfide as their energy source) and produce sugars and amino acids.
Some reactions produce sulfur:
hydrogen sulfide chemosynthesis:
18H2S + 6CO2 + 3O2 → C6H12O6 (carbohydrate) + 12H2O + 18S
Instead of releasing oxygen gas while fixing carbon dioxide, as in photosynthesis, hydrogen sulfide chemosynthesis produces solid globules of sulfur in the process. In bacteria capable of chemoautotrophy (a form of chemosynthesis), such as purple sulfur bacteria, yellow globules of sulfur are present and visible in the cytoplasm.
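As a quick sanity check on the reaction above, the short sketch below tallies the atoms on each side of the hydrogen sulfide chemosynthesis equation to confirm it is balanced; the helper function and term encoding are just illustrative conveniences.

```python
from collections import Counter

def atoms(terms):
    """Sum atom counts for one side of a reaction, given a list of
    (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, composition in terms:
        for element, n in composition.items():
            total[element] += coeff * n
    return total

# 18H2S + 6CO2 + 3O2 -> C6H12O6 + 12H2O + 18S
left = atoms([(18, {"H": 2, "S": 1}), (6, {"C": 1, "O": 2}), (3, {"O": 2})])
right = atoms([(1, {"C": 6, "H": 12, "O": 6}), (12, {"H": 2, "O": 1}), (18, {"S": 1})])
print(left == right)  # True: C, H, O, and S all balance
```

Both sides come out to 6 C, 36 H, 18 O, and 18 S, so the equation as written conserves every element.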
Discovery
In 1890, Sergei Winogradsky proposed a novel type of life process called "anorgoxydant". His discovery suggested that some microbes could live solely on inorganic matter and emerged during his physiological research in the 1880s in Strasbourg and Zürich on sulfur, iron, and nitrogen bacteria.
In 1897, Wilhelm Pfeffer coined the term "chemosynthesis" for the energy production by oxidation of inorganic substances, in association with autotrophic carbon dioxide assimilation—what would be named today as chemolithoautotrophy. Later, the term would be expanded to include also chemoorganoautotrophs, which are organisms that use organic energy substrates in order to assimilate carbon dioxide. Thus, chemosynthesis can be seen as a synonym of chemoautotrophy.
The term "chemotrophy", less restrictive, would be introduced in the 1940s by André Lwoff for the production of energy by the oxidation of electron donors, organic or not, associated with auto- or heterotrophy.
Hydrothermal vents
The suggestion of Winogradsky was confirmed nearly 90 years later, when hydrothermal ocean vents were predicted to exist in the 1970s. The hot springs and strange creatures were discovered by Alvin, the world's first deep-sea submersible, in 1977 at the Galapagos Rift. At about the same time, then-graduate student Colleen Cavanaugh proposed chemosynthetic bacteria that oxidize sulfides or elemental sulfur as a mechanism by which tube worms could survive near hydrothermal vents. Cavanaugh later managed to confirm that this was indeed the method by which the worms could thrive, and is generally credited with the discovery of chemosynthesis.
A 2004 television series hosted by Bill Nye named chemosynthesis as one of the 100 greatest scientific discoveries of all time.
Oceanic crust
In 2013, researchers reported their discovery of bacteria living in the rock of the oceanic crust below the thick layers of sediment, and apart from the hydrothermal vents that form along the edges of the tectonic plates. Preliminary findings are that these bacteria subsist on the hydrogen produced by chemical reduction of olivine by seawater circulating in the small veins that permeate the basalt that comprises oceanic crust. The bacteria synthesize methane by combining hydrogen and carbon dioxide.
Chemosynthesis as an innovative area for continuing research
Although the process of chemosynthesis has been known for more than a hundred years, it remains significant today in the transformation of chemical elements in biogeochemical cycles. The vital processes of nitrifying bacteria, which oxidize ammonia to nitrate, still require scientific substantiation and additional research. The ability of bacteria to convert inorganic substances into organic ones suggests that chemosynthetic organisms can accumulate valuable resources for human needs.
Chemosynthetic communities in different environments are important biological systems in terms of their ecology, evolution, and biogeography, as well as their potential as indicators of the availability of permanent hydrocarbon-based energy sources. In the process of chemosynthesis, bacteria produce organic matter where photosynthesis is impossible. The isolation of thermophilic sulfate-reducing bacteria such as Thermodesulfovibrio yellowstonii and other chemosynthetic organisms opens prospects for further research. Thus, the phenomenon that Sergei Winogradsky helped discover remains relevant for innovative technologies, the conservation of ecosystems, and human life in general.
See also
Primary nutritional groups
Autotroph
Heterotroph
Photosynthesis
Movile Cave
References
External links
Chemosynthetic Communities in the Gulf of Mexico
Biological processes
Metabolism
Environmental microbiology
Ecosystems
Food web
A food web is the natural interconnection of food chains and a graphical representation of what-eats-what in an ecological community. Position in the food web, or trophic level, is used in ecology to broadly classify organisms as autotrophs or heterotrophs. This is a non-binary classification; some organisms (such as carnivorous plants) occupy the role of mixotrophs, or autotrophs that additionally obtain organic matter from non-atmospheric sources.
The linkages in a food web illustrate the feeding pathways, such as where heterotrophs obtain organic matter by feeding on autotrophs and other heterotrophs. The food web is a simplified illustration of the various methods of feeding that link an ecosystem into a unified system of exchange. There are different kinds of consumer–resource interactions that can be roughly divided into herbivory, carnivory, scavenging, and parasitism. Some of the organic matter eaten by heterotrophs, such as sugars, provides energy. Autotrophs and heterotrophs come in all sizes, from the microscopic to organisms weighing many tonnes: from cyanobacteria to giant redwoods, and from viruses and bdellovibrio to blue whales.
Charles Elton pioneered the concept of food cycles, food chains, and food size in his classical 1927 book "Animal Ecology"; Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Elton organized species into functional groups, which was the basis for Raymond Lindeman's classic and landmark paper in 1942 on trophic dynamics. Lindeman emphasized the important role of decomposer organisms in a trophic system of classification. The notion of a food web has a historical foothold in the writings of Charles Darwin and his terminology, including an "entangled bank", "web of life", "web of complex relations", and in reference to the decomposition actions of earthworms he talked about "the continued movement of the particles of earth". Even earlier, in 1768 John Bruckner described nature as "one continued web of life".
Food webs are limited representations of real ecosystems as they necessarily aggregate many species into trophic species, which are functional groups of species that have the same predators and prey in a food web. Ecologists use these simplifications in quantitative (or mathematical) models of trophic or consumer-resource systems dynamics. Using these models they can measure and test for generalized patterns in the structure of real food web networks. Ecologists have identified non-random properties in the topological structure of food webs. Published examples used in meta-analysis are of variable quality, with omissions. However, the number of empirical studies on community webs is on the rise, and the mathematical treatment of food webs using network theory has identified patterns that are common to all. Scaling laws, for example, predict a relationship between the topology of food web predator-prey linkages and levels of species richness.
Taxonomy of a food web
Links in food webs map the feeding connections (who eats whom) in an ecological community. Food cycle is an obsolete term that is synonymous with food web. Ecologists can broadly group all life forms into one of two trophic layers, the autotrophs and the heterotrophs. Autotrophs produce more biomass energy, either chemically without the sun's energy or by capturing the sun's energy in photosynthesis, than they use during metabolic respiration. Heterotrophs consume rather than produce biomass energy as they metabolize, grow, and add to levels of secondary production. A food web depicts a collection of polyphagous heterotrophic consumers that network and cycle the flow of energy and nutrients from a productive base of self-feeding autotrophs.
The base or basal species in a food web are those species without prey and can include autotrophs or saprophytic detritivores (i.e., the community of decomposers in soil, biofilms, and periphyton). Feeding connections in the web are called trophic links. The number of trophic links per consumer is a measure of food web connectance. Food chains are nested within the trophic links of food webs. Food chains are linear (noncyclic) feeding pathways that trace monophagous consumers from a base species up to the top consumer, which is usually a larger predatory carnivore.
Linkages connect to nodes in a food web, which are aggregates of biological taxa called trophic species. Trophic species are functional groups that have the same predators and prey in a food web. Common examples of an aggregated node in a food web might include parasites, microbes, decomposers, saprotrophs, consumers, or predators, each containing many species in a web that can otherwise be connected to other trophic species.
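To make the connectance idea above concrete, the sketch below encodes a tiny invented food web as a who-eats-whom mapping and computes one common connectance measure, C = L / S^2 (realized links over possible directed links); both the toy web and the particular definition chosen are illustrative assumptions, since several connectance conventions exist in the literature.

```python
# A toy food web as predator -> set of prey; species names are invented.
web = {
    "grass": set(),
    "grasshopper": {"grass"},
    "mouse": {"grass"},
    "hawk": {"grasshopper", "mouse"},
}

species = len(web)                                # S: number of nodes
links = sum(len(prey) for prey in web.values())   # L: number of trophic links
connectance = links / species**2                  # C = L / S^2
print(f"S={species}, L={links}, C={connectance:.3f}")  # S=4, L=4, C=0.250
```

Representing the web as a mapping from each node to its prey makes basal species easy to spot (empty prey sets) and keeps link counting a one-line sum.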
Trophic levels
Food webs have trophic levels and positions. Basal species, such as plants, form the first level and are the resource limited species that feed on no other living creature in the web. Basal species can be autotrophs or detritivores, including "decomposing organic material and its associated microorganisms which we defined as detritus, micro-inorganic material and associated microorganisms (MIP), and vascular plant material." Most autotrophs capture the sun's energy in chlorophyll, but some autotrophs (the chemolithotrophs) obtain energy by the chemical oxidation of inorganic compounds and can grow in dark environments, such as the sulfur bacterium Thiobacillus, which lives in hot sulfur springs. The top level has top (or apex) predators which no other species kills directly for its food resource needs. The intermediate levels are filled with omnivores that feed on more than one trophic level and cause energy to flow through a number of food pathways starting from a basal species.
In the simplest scheme, the first trophic level (level 1) is plants, then herbivores (level 2), and then carnivores (level 3). The trophic level is equal to one more than the chain length, which is the number of links connecting to the base. The base of the food chain (primary producers or detritivores) is set at zero. Ecologists identify feeding relations and organize species into trophic species through extensive gut content analysis of different species. The technique has been improved through the use of stable isotopes to better trace energy flow through the web. It was once thought that omnivory was rare, but recent evidence suggests otherwise. This realization has made trophic classifications more complex.
Trophic dynamics and multitrophic interactions
The trophic level concept was introduced in a historical landmark paper on trophic dynamics in 1942 by Raymond L. Lindeman. The basis of trophic dynamics is the transfer of energy from one part of the ecosystem to another. The trophic dynamic concept has served as a useful quantitative heuristic, but it has several major limitations including the precision by which an organism can be allocated to a specific trophic level. Omnivores, for example, are not restricted to any single level. Nonetheless, recent research has found that discrete trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
A central question in the trophic dynamic literature is the nature of control and regulation over resources and production. Ecologists use simplified one trophic position food chain models (producer, carnivore, decomposer). Using these models, ecologists have tested various types of ecological control mechanisms. For example, herbivores generally have an abundance of vegetative resources, which means that their populations are largely controlled or regulated by predators. This is known as the top-down hypothesis or 'green-world' hypothesis. In contrast to the top-down hypothesis, not all plant material is edible, and the nutritional quality or antiherbivore defenses of plants (structural and chemical) suggest a bottom-up form of regulation or control. Recent studies have concluded that both "top-down" and "bottom-up" forces can influence community structure, and the strength of the influence is environmentally context dependent. These complex multitrophic interactions involve more than two trophic levels in a food web. For example, such interactions have been discovered in the context of arbuscular mycorrhizal fungi and aphid herbivores that utilize the same plant species.
Another example of a multitrophic interaction is a trophic cascade, in which predators help to increase plant growth and prevent overgrazing by suppressing herbivores. Links in a food-web illustrate direct trophic relations among species, but there are also indirect effects that can alter the abundance, distribution, or biomass in the trophic levels. For example, predators eating herbivores indirectly influence the control and regulation of primary production in plants. Although the predators do not eat the plants directly, they regulate the population of herbivores that are directly linked to plant trophism. The net effect of direct and indirect relations is called trophic cascades. Trophic cascades are separated into species-level cascades, where only a subset of the food-web dynamic is impacted by a change in population numbers, and community-level cascades, where a change in population numbers has a dramatic effect on the entire food-web, such as the distribution of plant biomass.
The field of chemical ecology has elucidated multitrophic interactions that entail the transfer of defensive compounds across multiple trophic levels. For example, certain plant species in the Castilleja and Plantago genera have been found to produce defensive compounds called iridoid glycosides that are sequestered in the tissues of the Taylor's checkerspot butterfly larvae that have developed a tolerance for these compounds and are able to consume the foliage of these plants. These sequestered iridoid glycosides then confer chemical protection against bird predators to the butterfly larvae. Another example of this sort of multitrophic interaction in plants is the transfer of defensive alkaloids produced by endophytes living within a grass host to a hemiparasitic plant that is also using the grass as a host.
Energy flow and biomass
Food webs depict energy flow via trophic linkages. Energy flow is directional, which contrasts against the cyclic flows of material through the food web systems. Energy flow "typically includes production, consumption, assimilation, non-assimilation losses (feces), and respiration (maintenance costs)." In a very general sense, energy flow (E) can be defined as the sum of metabolic production (P) and respiration (R), such that E=P+R.
Biomass represents stored energy. However, the concentration and quality of nutrients and energy is variable. Many plant fibers, for example, are indigestible to many herbivores, leaving grazer community food webs more nutrient limited than detrital food webs where bacteria are able to access and release the nutrient and energy stores. "Organisms usually extract energy in the form of carbohydrates, lipids, and proteins. These polymers have a dual role as supplies of energy as well as building blocks; the part that functions as energy supply results in the production of nutrients (and carbon dioxide, water, and heat). Excretion of nutrients is, therefore, basic to metabolism." The units in energy flow webs are typically a measure of mass or energy per m2 per unit time. Different consumers will have different metabolic assimilation efficiencies in their diets. Each trophic level transforms energy into biomass. Energy flow diagrams illustrate the rates and efficiency of transfer from one trophic level into another and up through the hierarchy.
The biomass of each trophic level generally decreases from the base of the chain to the top. This is because energy is lost to the environment with each transfer as entropy increases. About eighty to ninety percent of the energy is expended for the organism's life processes or is lost as heat or waste. Only about ten to twenty percent of the organism's energy is generally passed to the next organism. The amount can be less than one percent in animals consuming less digestible plants, and it can be as high as forty percent in zooplankton consuming phytoplankton. Graphic representations of the biomass or productivity at each trophic level are called ecological pyramids or trophic pyramids. The transfer of energy from primary producers to top consumers can also be characterized by energy flow diagrams.
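The roughly ten percent transfer described above can be made concrete with a minimal sketch; the primary production figure, the fixed efficiency, and the number of levels are arbitrary illustrative values, not measurements.

```python
def energy_pyramid(primary_production, efficiency=0.10, levels=4):
    """Energy reaching each trophic level if a fixed fraction (here ~10%)
    is passed upward; the remainder is lost as heat, waste, or metabolism."""
    energy = [primary_production]
    for _ in range(levels - 1):
        energy.append(energy[-1] * efficiency)
    return energy

# Arbitrary primary production of 10,000 kcal/m^2/yr at the base.
for level, e in enumerate(energy_pyramid(10_000), start=1):
    print(f"trophic level {level}: {e:,.0f} kcal/m^2/yr")
# -> 10,000 / 1,000 / 100 / 10: the classic upright energy pyramid
```

The steep geometric falloff is also why food chains rarely exceed four or five links: too little energy survives the repeated transfers to support another level.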
Food chain
A common metric used to quantify food web trophic structure is food chain length. Food chain length is another way of describing food webs as a measure of the number of species encountered as energy or nutrients move from the plants to top predators. There are different ways of calculating food chain length depending on what parameters of the food web dynamic are being considered: connectance, energy, or interaction. In its simplest form, the length of a chain is the number of links between a trophic consumer and the base of the web. The mean chain length of an entire web is the arithmetic average of the lengths of all chains in a food web.
In a simple predator-prey example, a deer is one step removed from the plants it eats (chain length = 1) and a wolf that eats the deer is two steps removed from the plants (chain length = 2). The relative amount or strength of influence that these parameters have on the food web address questions about:
the identity or existence of a few dominant species (called strong interactors or keystone species)
the total number of species and food-chain length (including many weak interactors) and
how community structure, function and stability is determined.
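Returning to the deer and wolf example above, here is a minimal sketch that computes each species' chain length (links back to the basal species) in a tiny invented web; the mean shown averages over consumers only, one simple variant of mean chain length rather than a full enumeration of all chains.

```python
# Each consumer maps to its prey; the basal species ("plants") eats nothing.
web = {"plants": set(), "deer": {"plants"}, "wolf": {"deer"}}

def chain_length(species, web):
    """Longest number of links from a species down to a basal species.
    Trophic level is then chain length + 1."""
    if not web[species]:
        return 0
    return 1 + max(chain_length(prey, web) for prey in web[species])

lengths = {s: chain_length(s, web) for s in web}
consumer_lengths = [n for n in lengths.values() if n > 0]
print(lengths)                                        # {'plants': 0, 'deer': 1, 'wolf': 2}
print(sum(consumer_lengths) / len(consumer_lengths))  # mean chain length = 1.5
```

With the base set at zero, the deer (chain length 1) sits at trophic level 2 and the wolf (chain length 2) at trophic level 3, matching the level-numbering convention given earlier.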
Ecological pyramids
In a pyramid of numbers, the number of consumers at each level decreases significantly, so that a single top consumer (e.g., a polar bear or a human) will be supported by a much larger number of separate producers. There is usually a maximum of four or five links in a food chain, although food chains in aquatic ecosystems are more often longer than those on land. Eventually, all the energy in a food chain is dispersed as heat.
Ecological pyramids place the primary producers at the base. They can depict different numerical properties of ecosystems, including numbers of individuals per unit of area, biomass (g/m2), and energy (kcal m−2 yr−1). The emergent pyramidal arrangement of trophic levels, with amounts of energy transfer decreasing as species become further removed from the source of production, is one of several patterns that is repeated among the planet's ecosystems. The size of each level in the pyramid generally represents biomass, which can be measured as the dry weight of an organism. Autotrophs may have the highest global proportion of biomass, but they are closely rivaled or surpassed by microbes.
Pyramid structure can vary across ecosystems and across time. In some instances biomass pyramids can be inverted. This pattern is often identified in aquatic and coral reef ecosystems. The pattern of biomass inversion is attributed to different sizes of producers. Aquatic communities are often dominated by producers that are smaller than their consumers and that have high growth rates. Aquatic producers, such as planktonic algae or aquatic plants, lack the large accumulation of secondary growth that exists in the woody trees of terrestrial ecosystems. However, they are able to reproduce quickly enough to support a larger biomass of grazers. This inverts the pyramid. Primary consumers have longer lifespans and slower growth rates, and so accumulate more biomass than the producers they consume. Phytoplankton live just a few days, whereas the zooplankton eating the phytoplankton live for several weeks and the fish eating the zooplankton live for several consecutive years. Aquatic predators also tend to have a lower death rate than the smaller consumers, which contributes to the inverted pyramidal pattern. Population structure, migration rates, and environmental refuge for prey are other possible causes for pyramids with biomass inverted. Energy pyramids, however, will always have an upright pyramid shape if all sources of food energy are included, and this is dictated by the second law of thermodynamics.
Material flux and recycling
Many of the Earth's elements and minerals (or mineral nutrients) are contained within the tissues and diets of organisms. Hence, mineral and nutrient cycles trace food web energy pathways. Ecologists employ stoichiometry to analyze the ratios of the main elements found in all organisms: carbon (C), nitrogen (N), and phosphorus (P). There is a large transitional difference between many terrestrial and aquatic systems, as C:P and C:N ratios are much higher in terrestrial systems, while N:P ratios are equal between the two systems. Mineral nutrients are the material resources that organisms need for growth, development, and vitality. Food webs depict the pathways of mineral nutrient cycling as they flow through organisms. Most of the primary production in an ecosystem is not consumed, but is recycled through detritus back into useful nutrients. Many of the Earth's microorganisms are involved in the formation of minerals in a process called biomineralization. Bacteria that live in detrital sediments create and cycle nutrients and biominerals. Food web models and nutrient cycles have traditionally been treated separately, but there is a strong functional connection between the two in terms of stability, flux, sources, sinks, and recycling of mineral nutrients.
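As a rough illustration of the stoichiometric approach, the following sketch converts hypothetical elemental masses in a tissue sample into molar C:N:P ratios (all sample values are invented):

```python
# A minimal sketch of ecological stoichiometry: converting measured
# element masses in a tissue sample into molar C:N:P ratios.

ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g/mol
sample_mass_g = {"C": 40.0, "N": 7.0, "P": 1.0}        # hypothetical sample

moles = {el: m / ATOMIC_MASS[el] for el, m in sample_mass_g.items()}
p = moles["P"]                                          # normalize to P = 1
print("C:N:P =", round(moles["C"] / p), ":", round(moles["N"] / p), ": 1")
```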
Kinds of food webs
Food webs are necessarily aggregated and only illustrate a tiny portion of the complexity of real ecosystems. For example, the number of species on the planet is likely on the general order of 10⁷; over 95% of these species consist of microbes and invertebrates, and relatively few have been named or classified by taxonomists. It is explicitly understood that natural systems are 'sloppy' and that food web trophic positions simplify the complexity of real systems that sometimes overemphasize many rare interactions. Most studies focus on the larger influences where the bulk of energy transfer occurs. "These omissions and problems are causes for concern, but on present evidence do not present insurmountable difficulties."
There are different kinds or categories of food webs:
Source web - one or more node(s), all of their predators, all the food these predators eat, and so on.
Sink web - one or more node(s), all of their prey, all the food that these prey eat, and so on.
Community (or connectedness) web - a group of nodes and all the connections of who eats whom.
Energy flow web - quantified fluxes of energy between nodes along links between a resource and a consumer.
Paleoecological web - a web that reconstructs ecosystems from the fossil record.
Functional web - emphasizes the functional significance of certain connections having strong interaction strength and greater bearing on community organization, more so than energy flow pathways. Functional webs have compartments, which are sub-groups in the larger network where there are different densities and strengths of interaction. Functional webs emphasize that "the importance of each population in maintaining the integrity of a community is reflected in its influence on the growth rates of other populations."
Within these categories, food webs can be further organized according to the different kinds of ecosystems being investigated. For example, human food webs, agricultural food webs, detrital food webs, marine food webs, aquatic food webs, soil food webs, Arctic (or polar) food webs, terrestrial food webs, and microbial food webs. These characterizations stem from the ecosystem concept, which assumes that the phenomena under investigation (interactions and feedback loops) are sufficient to explain patterns within boundaries, such as the edge of a forest, an island, a shoreline, or some other pronounced physical characteristic.
Detrital web
In a detrital web, plant and animal matter is broken down by decomposers, e.g., bacteria and fungi, and moves to detritivores and then carnivores. There are often relationships between the detrital web and the grazing web. Mushrooms produced by decomposers in the detrital web become a food source for deer, squirrels, and mice in the grazing web. Earthworms eaten by robins are detritivores consuming decaying leaves.
"Detritus can be broadly defined as any form of non-living organic matter, including different types of plant tissue (e.g. leaf litter, dead wood, aquatic macrophytes, algae), animal tissue (carrion), dead microbes, faeces (manure, dung, faecal pellets, guano, frass), as well as products secreted, excreted or exuded from organisms (e.g. extra-cellular polymers, nectar, root exudates and leachates, dissolved organic matter, extra-cellular matrix, mucilage). The relative importance of these forms of detritus, in terms of origin, size and chemical composition, varies across ecosystems."
Quantitative food webs
Ecologists collect data on trophic levels and food webs to statistically model and mathematically calculate parameters, such as those used in other kinds of network analysis (e.g., graph theory), to study emergent patterns and properties shared among ecosystems. There are different ecological dimensions that can be mapped to create more complicated food webs, including: species composition (type of species), richness (number of species), biomass (the dry weight of plants and animals), productivity (rates of conversion of energy and nutrients into growth), and stability (food webs over time). A food web diagram illustrating species composition shows how change in a single species can directly and indirectly influence many others. Microcosm studies are used to simplify food web research into semi-isolated units such as small springs, decaying logs, and laboratory experiments using organisms that reproduce quickly, such as daphnia feeding on algae grown under controlled environments in jars of water.
While the complexity of real food webs connections are difficult to decipher, ecologists have found mathematical models on networks an invaluable tool for gaining insight into the structure, stability, and laws of food web behaviours relative to observable outcomes. "Food web theory centers around the idea of connectance." Quantitative formulas simplify the complexity of food web structure. The number of trophic links (tL), for example, is converted into a connectance value:
C = t_L / [S(S − 1)/2],
where S(S − 1)/2 is the maximum number of binary connections among S species. "Connectance (C) is the fraction of all possible links that are realized (L/S²) and represents a standard measure of food web complexity..." The distance (d) between every species pair in a web is averaged to compute the mean distance between all nodes in a web (D) and multiplied by the total number of links (L) to obtain link-density (LD), which is influenced by scale-dependent variables such as species richness. These formulas are the basis for comparing and investigating the nature of non-random patterns in the structure of food web networks among many different types of ecosystems.
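As a worked example of these formulas, the sketch below evaluates connectance for a small hypothetical web (the values of S and L are invented):

```python
# A minimal sketch of the connectance calculations above, using a
# small hypothetical web with S species and L realized trophic links.

S = 10      # number of species (nodes)
L = 18      # number of realized trophic links

max_links = S * (S - 1) / 2   # maximum number of binary connections among S species
print("C =", L / max_links)   # connectance against the binary maximum
print("C =", L / S**2)        # the L/S^2 form quoted above
```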
Scaling laws, complexity, chaos, and pattern correlates are common features attributed to food web structure.
Complexity and stability
Food webs are extremely complex. Complexity is a term that conveys the mental intractability of understanding all possible higher-order effects in a food web. Sometimes in food web terminology, complexity is defined as the product of the number of species and connectance, though there have been criticisms of this definition and other proposed methods for measuring network complexity. Connectance is "the fraction of all possible links that are realized in a network". These concepts were derived and stimulated through the suggestion that complexity leads to stability in food webs, such as increasing the number of trophic levels in more species-rich ecosystems. This hypothesis was challenged through mathematical models suggesting otherwise, but subsequent studies have shown that the premise holds in real systems.
At different levels in the hierarchy of life, such as the stability of a food web, "the same overall structure is maintained in spite of an ongoing flow and change of components." The farther a living system (e.g., an ecosystem) strays from equilibrium, the greater its complexity. Complexity has multiple meanings in the life sciences and in the public sphere that confuse its application as a precise term for analytical purposes in science. Complexity in the life sciences (or biocomplexity) is defined by the "properties emerging from the interplay of behavioral, biological, physical, and social interactions that affect, sustain, or are modified by living organisms, including humans".
Several concepts have emerged from the study of complexity in food webs. Complexity explains many principles pertaining to self-organization, non-linearity, interaction, cybernetic feedback, discontinuity, emergence, and stability in food webs. Nestedness, for example, is defined as "a pattern of interaction in which specialists interact with species that form perfect subsets of the species with which generalists interact", "—that is, the diet of the most specialized species is a subset of the diet of the next more generalized species, and its diet a subset of the next more generalized, and so on." Until recently, it was thought that food webs had little nested structure, but empirical evidence shows that many published webs have nested subwebs in their assembly.
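The nested-subset pattern can be checked directly. The following sketch tests whether a set of invented diets is perfectly nested, i.e., whether each smaller diet is a subset of every larger one:

```python
# A minimal sketch of the nested-subset check described above: each
# more specialized species' diet should be a subset of the next more
# generalized species' diet. All diets here are invented examples.

diets = {
    "specialist":   {"algae"},
    "intermediate": {"algae", "snail"},
    "generalist":   {"algae", "snail", "shrimp", "small fish"},
}

# Order species from most specialized (smallest diet) to most generalized.
ordered = sorted(diets.values(), key=len)
perfectly_nested = all(a <= b for a, b in zip(ordered, ordered[1:]))
print("perfectly nested:", perfectly_nested)   # True for this toy example
```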
Food webs are complex networks. As networks, they exhibit similar structural properties and mathematical laws that have been used to describe other complex systems, such as small world and scale free properties. The small world attribute refers to the many loosely connected nodes, non-random dense clustering of a few nodes (i.e., trophic or keystone species in ecology), and small path length compared to a regular lattice. "Ecological networks, especially mutualistic networks, are generally very heterogeneous, consisting of areas with sparse links among species and distinct areas of tightly linked species. These regions of high link density are often referred to as cliques, hubs, compartments, cohesive sub-groups, or modules...Within food webs, especially in aquatic systems, nestedness appears to be related to body size because the diets of smaller predators tend to be nested subsets of those of larger predators (Woodward & Warren 2007; Yvon-Durocher et al. 2008), and phylogenetic constraints, whereby related taxa are nested based on their common evolutionary history, are also evident (Cattin et al. 2004)." "Compartments in food webs are subgroups of taxa in which many strong interactions occur within the subgroups and few weak interactions occur between the subgroups. Theoretically, compartments increase the stability in networks, such as food webs."
Food webs are also complex in the way that they change in scale, seasonally, and geographically. The components of food webs, including organisms and mineral nutrients, cross the thresholds of ecosystem boundaries. This has led to the concept or area of study known as cross-boundary subsidy. "This leads to anomalies, such as food web calculations determining that an ecosystem can support one half of a top carnivore, without specifying which end." Nonetheless, real differences in structure and function have been identified when comparing different kinds of ecological food webs, such as terrestrial vs. aquatic food webs.
History of food webs
Food webs serve as a framework to help ecologists organize the complex network of interactions among species observed in nature and around the world. One of the earliest descriptions of a food chain was given by the medieval Afro-Arab scholar Al-Jahiz: "All animals, in short, cannot exist without food, neither can the hunting animal escape being hunted in his turn." The earliest graphical depiction of a food web was by Lorenzo Camerano in 1880, followed independently by those of Pierce and colleagues in 1912 and Victor Shelford in 1913. Two food webs about herring were produced by Victor Summerhayes and Charles Elton and Alister Hardy in 1923 and 1924. Charles Elton subsequently pioneered the concepts of food cycles, food chains, and food size in his classical 1927 book "Animal Ecology"; Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. After Charles Elton's use of food webs in his 1927 synthesis, they became a central concept in the field of ecology. Elton organized species into functional groups, which formed the basis for the trophic system of classification in Raymond Lindeman's classic and landmark paper in 1942 on trophic dynamics. The notion of a food web has a historical foothold in the writings of Charles Darwin and his terminology, including an "entangled bank", "web of life", and "web of complex relations"; in reference to the decomposition actions of earthworms, he talked about "the continued movement of the particles of earth". Even earlier, in 1768, John Bruckner described nature as "one continued web of life".
Interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Sir Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs.
Functional genomics
Functional genomics is a field of molecular biology that attempts to describe gene (and protein) functions and interactions. Functional genomics makes use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing). It focuses on the dynamic aspects such as gene transcription, translation, regulation of gene expression, and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach.
Definition and goals
In order to understand functional genomics it is important to first define function. In their paper, Graur et al. define function in two possible ways: "selected effect" and "causal role". The "selected effect" function refers to the function for which a trait (DNA, RNA, protein, etc.) was selected. The "causal role" function refers to the function that a trait is necessary and sufficient for. Functional genomics usually tests the "causal role" definition of function.
The goal of functional genomics is to understand the function of genes or proteins, eventually all components of a genome. The term functional genomics is often used to refer to the many technical approaches to study an organism's genes and proteins, including the "biochemical, cellular, and/or physiological properties of each and every gene product" while some authors include the study of nongenic elements in their definition. Functional genomics may also include studies of natural genetic variation over time (such as an organism's development) or space (such as its body regions), as well as functional disruptions such as mutations.
The promise of functional genomics is to generate and synthesize genomic and proteomic knowledge into an understanding of the dynamic properties of an organism. This could potentially provide a more complete picture of how the genome specifies function compared to studies of single genes. Integration of functional genomics data is often a part of systems biology approaches.
Techniques and applications
Functional genomics includes function-related aspects of the genome itself such as mutation and polymorphism (such as single nucleotide polymorphism (SNP) analysis), as well as the measurement of molecular activities. The latter comprise a number of "-omics" such as transcriptomics (gene expression), proteomics (protein production), and metabolomics. Functional genomics uses mostly multiplex techniques to measure the abundance of many or all gene products such as mRNAs or proteins within a biological sample. A more focused functional genomics approach might test the function of all variants of one gene and quantify the effects of mutants by using sequencing as a readout of activity. Together these measurement modalities endeavor to quantitate the various biological processes and improve our understanding of gene and protein functions and interactions.
At the DNA level
Genetic interaction mapping
Systematic pairwise deletion of genes or inhibition of gene expression can be used to identify genes with related function, even if they do not interact physically. Epistasis refers to the fact that effects for two different gene knockouts may not be additive; that is, the phenotype that results when two genes are inhibited may be different from the sum of the effects of single knockouts.
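A common way to quantify such an interaction is to score the deviation of the measured double-knockout phenotype from a multiplicative expectation. The sketch below assumes that null model; all fitness values are invented:

```python
# A minimal sketch of scoring genetic interactions from fitness
# measurements, assuming the common multiplicative null model: for
# non-interacting genes, double-knockout fitness should equal the
# product of the single-knockout fitnesses.

w_a  = 0.80    # relative fitness with gene A knocked out
w_b  = 0.90    # relative fitness with gene B knocked out
w_ab = 0.40    # measured fitness of the double knockout

epsilon = w_ab - w_a * w_b   # epistasis score
print(f"expected {w_a * w_b:.2f}, observed {w_ab:.2f}, epsilon = {epsilon:.2f}")
# epsilon < 0: aggravating (e.g., synthetic sick/lethal); > 0: alleviating
```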
DNA/Protein interactions
Proteins formed by the translation of mRNA (messenger RNA, which carries coded information from DNA for protein synthesis) play a major role in regulating gene expression. To understand how they regulate gene expression, it is necessary to identify the DNA sequences that they interact with. Techniques have been developed to identify sites of DNA-protein interactions. These include ChIP-sequencing, CUT&RUN sequencing and Calling Cards.
DNA accessibility assays
Assays have been developed to identify regions of the genome that are accessible. These regions of accessible chromatin are candidate regulatory regions. These assays include ATAC-seq, DNase-Seq and FAIRE-Seq.
At the RNA level
Microarrays
Microarrays measure the amount of mRNA in a sample that corresponds to a given gene or probe DNA sequence. Probe sequences are immobilized on a solid surface and allowed to hybridize with fluorescently labeled "target" mRNA. The intensity of fluorescence of a spot is proportional to the amount of target sequence that has hybridized to that spot and therefore to the abundance of that mRNA sequence in the sample. Microarrays allow for the identification of candidate genes involved in a given process based on variation between transcript levels for different conditions and shared expression patterns with genes of known function.
SAGE
Serial analysis of gene expression (SAGE) is an alternate method of analysis based on RNA sequencing rather than hybridization. SAGE relies on the sequencing of 10–17 base pair tags which are unique to each gene. These tags are produced from poly-A mRNA and ligated end-to-end before sequencing. SAGE gives an unbiased measurement of the number of transcripts per cell, since it does not depend on prior knowledge of what transcripts to study (as microarrays do).
RNA sequencing
RNA sequencing has overtaken microarray and SAGE technology in recent years, as noted in 2016, and has become the most efficient way to study transcription and gene expression. This is typically done by next-generation sequencing.
A subset of sequenced RNAs are small RNAs, a class of non-coding RNA molecules that are key regulators of transcriptional and post-transcriptional gene silencing, or RNA silencing. Next-generation sequencing is the gold standard tool for non-coding RNA discovery, profiling and expression analysis.
Massively Parallel Reporter Assays (MPRAs)
Massively parallel reporter assays (MPRAs) are a technology to test the cis-regulatory activity of DNA sequences. MPRAs use a plasmid with a synthetic cis-regulatory element upstream of a promoter driving a synthetic gene such as Green Fluorescent Protein. A library of cis-regulatory elements is usually tested using MPRAs; a library can contain from hundreds to thousands of cis-regulatory elements. The cis-regulatory activity of the elements is assayed by using the downstream reporter activity. The activity of all the library members is assayed in parallel using barcodes for each cis-regulatory element. One limitation of MPRAs is that the activity is assayed on a plasmid and may not capture all aspects of gene regulation observed in the genome.
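A minimal sketch of the core MPRA readout follows: each element's activity is estimated from the ratio of its barcode counts in the RNA to its counts in the input DNA library (element names and counts are invented; real pipelines add replicates and normalization):

```python
# A minimal sketch of MPRA-style analysis: activity of each
# cis-regulatory element estimated as log2(RNA counts / DNA counts).

import math

dna_counts = {"elementA": 1000, "elementB": 1000, "elementC": 1000}
rna_counts = {"elementA": 4000, "elementB": 1000, "elementC":  250}

for element in dna_counts:
    activity = math.log2(rna_counts[element] / dna_counts[element])
    print(f"{element}: log2(RNA/DNA) = {activity:+.2f}")
# elementA behaves as a strong activator (+2), elementC as repressive (-2)
```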
STARR-seq
STARR-seq is a technique similar to MPRAs to assay enhancer activity of randomly sheared genomic fragments. In the original publication, randomly sheared fragments of the Drosophila genome were placed downstream of a minimal promoter. Candidate enhancers amongst the randomly sheared fragments will transcribe themselves using the minimal promoter. By using sequencing as a readout and controlling for input amounts of each sequence, the strength of putative enhancers is assayed by this method.
Perturb-seq
Perturb-seq couples CRISPR mediated gene knockdowns with single-cell gene expression. Linear models are used to calculate the effect of the knockdown of a single gene on the expression of multiple genes.
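The linear-model idea can be sketched with simulated data: regress a gene's expression across cells on a design matrix recording which perturbation each cell received (all values below are simulated; real analyses include many covariates and regularization):

```python
# A minimal sketch of the linear-model idea behind Perturb-seq:
# estimate each perturbation's effect on one gene's expression by
# least squares against a cell-by-perturbation design matrix.

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_perturbations = 200, 3

# X[i, j] = 1 if cell i carries guide j (one perturbation per cell here)
X = np.zeros((n_cells, n_perturbations))
X[np.arange(n_cells), rng.integers(0, n_perturbations, n_cells)] = 1

true_effects = np.array([-2.0, 0.0, 1.5])            # assumed knockdown effects
y = X @ true_effects + rng.normal(0, 0.5, n_cells)   # simulated expression

beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # estimated effect per guide
print(np.round(beta, 2))                             # approximately [-2.0, 0.0, 1.5]
```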
At the protein level
Yeast two-hybrid system
A yeast two-hybrid screening (Y2H) tests a "bait" protein against many potential interacting proteins ("prey") to identify physical protein–protein interactions. This system is based on a transcription factor, originally GAL4, whose separate DNA-binding and transcription activation domains are both required in order for the protein to cause transcription of a reporter gene. In a Y2H screen, the "bait" protein is fused to the binding domain of GAL4, and a library of potential "prey" (interacting) proteins is recombinantly expressed in a vector with the activation domain. In vivo interaction of bait and prey proteins in a yeast cell brings the activation and binding domains of GAL4 close enough together to result in expression of a reporter gene. It is also possible to systematically test a library of bait proteins against a library of prey proteins to identify all possible interactions in a cell.
MS and AP/MS
Mass spectrometry (MS) can identify proteins and their relative levels, hence it can be used to study protein expression. When used in combination with affinity purification, mass spectrometry (AP/MS) can be used to study protein complexes, that is, which proteins interact with one another in complexes and in which ratios. In order to purify protein complexes, usually a "bait" protein is tagged with a specific protein or peptide that can be used to pull out the complex from a complex mix. The purification is usually done using an antibody or a compound that binds to the fusion part. The proteins are then digested into short peptide fragments and mass spectrometry is used to identify the proteins based on the mass-to-charge ratios of those fragments.
Deep mutational scanning
In deep mutational scanning, every possible amino acid change in a given protein is first synthesized. The activity of each of these protein variants is assayed in parallel using barcodes for each variant. By comparing the activity to the wild-type protein, the effect of each mutation is identified. While it is possible to assay every possible single amino-acid change, due to combinatorics two or more concurrent mutations are hard to test. Deep mutational scanning experiments have also been used to infer protein structure and protein-protein interactions. Deep mutational scanning is an example of a multiplexed assay of variant effect (MAVE), a family of methods that involve mutagenesis of a DNA-encoded protein or regulatory element followed by a multiplexed assay for some aspect of function. MAVEs enable the generation of 'variant effect maps' characterizing aspects of the function of every possible single nucleotide change in a gene or functional element of interest.
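A minimal sketch of the sequencing-based readout follows: each variant's score is its enrichment after selection relative to the input, normalized to wild type (the variant names and counts are invented):

```python
# A minimal sketch of a deep-mutational-scanning readout: each
# variant's functional score is its enrichment in the selected
# population relative to the input, normalized to wild type (WT).

import math

pre  = {"WT": 10_000, "A24G": 500, "A24V": 500}   # counts before selection
post = {"WT": 10_000, "A24G": 900, "A24V":  60}   # counts after selection

for variant in ("A24G", "A24V"):
    score = math.log2((post[variant] / post["WT"]) / (pre[variant] / pre["WT"]))
    print(f"{variant}: log2 enrichment vs WT = {score:+.2f}")
# positive = better than WT under selection; negative = deleterious
```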
Mutagenesis and phenotyping
An important functional feature of genes is the phenotype caused by mutations. Mutants can be produced by random mutations or by directed mutagenesis, including site-directed mutagenesis, deleting complete genes, or other techniques.
Knock-outs (gene deletions)
Gene function can be investigated by systematically "knocking out" genes one by one. This is done by either deletion or disruption of function (such as by insertional mutagenesis) and the resulting organisms are screened for phenotypes that provide clues to the function of the disrupted gene. Knock-outs have been produced for whole genomes, i.e. by deleting all genes in a genome. For essential genes, this is not possible, so other techniques are used, e.g. deleting a gene while expressing the gene from a plasmid, using an inducible promoter, so that the level of gene product can be changed at will (and thus a "functional" deletion achieved).
Site-directed mutagenesis
Site-directed mutagenesis is used to mutate specific bases (and thus amino acids). This is critical to investigate the function of specific amino acids in a protein, e.g. in the active site of an enzyme.
RNAi
RNA interference (RNAi) methods can be used to transiently silence or knockdown gene expression using ~20 base-pair double-stranded RNA typically delivered by transfection of synthetic ~20-mer short-interfering RNA molecules (siRNAs) or by virally encoded short-hairpin RNAs (shRNAs). RNAi screens, typically performed in cell culture-based assays or experimental organisms (such as C. elegans) can be used to systematically disrupt nearly every gene in a genome or subsets of genes (sub-genomes); possible functions of disrupted genes can be assigned based on observed phenotypes.
CRISPR screens
CRISPR-Cas9 has been used to delete genes in a multiplexed manner in cell lines. Quantifying the amount of guide RNAs for each gene before and after the experiment can point towards essential genes. If a guide RNA disrupts an essential gene, it will lead to the loss of that cell and hence there will be a depletion of that particular guide RNA after the screen. In a recent CRISPR-Cas9 experiment in mammalian cell lines, around 2000 genes were found to be essential in multiple cell lines. Some of these genes were essential in only one cell line. Most of these genes are part of multi-protein complexes. This approach can be used to identify synthetic lethality by using the appropriate genetic background. CRISPRi and CRISPRa enable loss-of-function and gain-of-function screens in a similar manner. CRISPRi identified ~2100 essential genes in the K562 cell line. CRISPR deletion screens have also been used to identify potential regulatory elements of a gene. For example, a technique called ScanDel was published which attempted this approach. The authors deleted regions outside a gene of interest (HPRT1, which is involved in a Mendelian disorder) in an attempt to identify regulatory elements of this gene. Gasperini et al. did not identify any distal regulatory elements for HPRT1 using this approach; however, such approaches can be extended to other genes of interest.
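The depletion logic can be sketched as follows: guides targeting essential genes drop out of the population, so a strongly negative log2 fold-change flags a candidate essential gene (guide names, counts, and the cutoff are invented; production pipelines such as MAGeCK add normalization and statistics):

```python
# A minimal sketch of calling candidate essential genes in a pooled
# CRISPR screen from guide-RNA depletion between two time points.

import math

counts_before = {"geneA_g1": 800, "geneA_g2": 750, "geneB_g1": 820, "geneB_g2": 790}
counts_after  = {"geneA_g1":  40, "geneA_g2":  55, "geneB_g1": 810, "geneB_g2": 805}

for guide in counts_before:
    lfc = math.log2(counts_after[guide] / counts_before[guide])
    flag = "depleted -> candidate essential" if lfc < -2 else ""
    print(f"{guide}: log2FC = {lfc:+.2f} {flag}")
```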
Functional annotations for genes
Genome annotation
Putative genes can be identified by scanning a genome for regions likely to encode proteins, based on characteristics such as long open reading frames, transcriptional initiation sequences, and polyadenylation sites. A sequence identified as a putative gene must be confirmed by further evidence, such as similarity to cDNA or EST sequences from the same organism, similarity of the predicted protein sequence to known proteins, association with promoter sequences, or evidence that mutating the sequence produces an observable phenotype.
Rosetta stone approach
The Rosetta stone approach is a computational method for de-novo protein function prediction. It is based on the hypothesis that some proteins involved in a given physiological process may exist as two separate genes in one organism and as a single gene in another. Genomes are scanned for sequences that are independent genes in one organism and fused into a single open reading frame in another. If two genes have fused, it is predicted that they have similar biological functions that make such co-regulation advantageous.
Bioinformatics methods for Functional genomics
Because of the large quantity of data produced by these techniques and the desire to find biologically meaningful patterns, bioinformatics is crucial to the analysis of functional genomics data. Examples of techniques in this class are data clustering or principal component analysis for unsupervised machine learning (class detection), as well as artificial neural networks or support vector machines for supervised machine learning (class prediction, classification). Functional enrichment analysis is used to determine the extent of over- or under-expression (positive or negative regulators in the case of RNAi screens) of functional categories relative to a background set. Gene-ontology-based enrichment analysis is provided by DAVID and gene set enrichment analysis (GSEA), pathway-based analysis by Ingenuity and Pathway Studio, and protein-complex-based analysis by COMPLEAT.
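As an example of the statistics behind over-representation analysis, the sketch below applies a hypergeometric test to an invented hit list and annotation category:

```python
# A minimal sketch of functional (over-representation) enrichment: a
# hypergeometric test asks whether a functional category appears in a
# hit list more often than expected from the background.

from scipy.stats import hypergeom

N = 20_000   # background genes
K = 200      # background genes annotated to the category
n = 100      # genes in the hit list (e.g., from an RNAi screen)
k = 8        # hits annotated to the category

# P(X >= k) when drawing n genes without replacement from N
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"expected {n * K / N:.1f} hits by chance, observed {k}; p = {p_value:.2e}")
```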
New computational methods have been developed for understanding the results of a deep mutational scanning experiment. 'phydms' compares the result of a deep mutational scanning experiment to a phylogenetic tree. This allows the user to infer if the selection process in nature applies similar constraints on a protein as the results of the deep mutational scan indicate. This may allow an experimenter to choose between different experimental conditions based on how well they reflect nature. Deep mutational scanning has also been used to infer protein-protein interactions: the authors of one study used a thermodynamic model to predict the effects of mutations in different parts of a dimer. Deep mutational scanning can also be used to infer protein structure. Strong positive epistasis between two mutations in a deep mutational scan can be indicative of two parts of the protein that are close to each other in 3-D space. This information can then be used to infer protein structure. A proof of principle of this approach was shown by two groups using the protein GB1.
Results from MPRA experiments have required machine learning approaches to interpret the data. A gapped k-mer SVM model has been used to infer the k-mers that are enriched within cis-regulatory sequences with high activity compared to sequences with lower activity. These models provide high predictive power. Deep learning and random forest approaches have also been used to interpret the results of these high-dimensional experiments. These models are beginning to help develop a better understanding of non-coding DNA function towards gene regulation.
Consortium projects
The ENCODE project
The ENCODE (Encyclopedia of DNA elements) project is an in-depth analysis of the human genome whose goal is to identify all the functional elements of genomic DNA, in both coding and non-coding regions. Important results include evidence from genomic tiling arrays that most nucleotides are transcribed as coding transcripts, non-coding RNAs, or random transcripts, the discovery of additional transcriptional regulatory sites, and further elucidation of chromatin-modifying mechanisms.
The Genotype-Tissue Expression (GTEx) project
The GTEx project is a human genetics project aimed at understanding the role of genetic variation in shaping variation in the transcriptome across tissues. The project has collected a variety of tissue samples (> 50 different tissues) from more than 700 post-mortem donors. This has resulted in the collection of >11,000 samples. GTEx has helped understand the tissue-sharing and tissue-specificity of eQTLs. The genomic resource was developed to "enrich our understanding of how differences in our DNA sequence contribute to health and disease."
The Atlas of Variant Effects Alliance
The Atlas of Variant Effects Alliance (AVE), founded in 2020, is an international consortium aiming to catalog the impact of all possible genetic variants for disease-related functional genomics by creating variant effect maps that reveal the function of every possible single nucleotide change in a gene or regulatory element. AVE is funded in part through the Brotman Baty Institute at the University of Washington and the National Human Genome Research Institute, via funding from the Center of Excellence in Genome Science grant (NHGRI RM1HG010461).
See also
Systems biology
Structural genomics
Comparative genomics
Pharmacogenomics
MGED Society
Epigenetics
Bioinformatics
Epistasis and functional genomics
Synthetic viability
Protein function prediction
External links
European Science Foundation Programme on Frontiers of Functional Genomics
MUGEN NoE — Integrated Functional Genomics in Mutant Mouse Models
Nature insights: functional genomics
ENCODE
Biosynthesis
Biosynthesis, i.e., chemical synthesis occurring in biological contexts, is a term most often referring to multi-step, enzyme-catalyzed processes where chemical substances absorbed as nutrients (or previously converted through biosynthesis) serve as enzyme substrates, with conversion by the living organism either into simpler or more complex products. Examples of biosynthetic pathways include those for the production of amino acids, lipid membrane components, and nucleotides, but also for the production of all classes of biological macromolecules, and of acetyl-coenzyme A, adenosine triphosphate, nicotinamide adenine dinucleotide and other key intermediate and transactional molecules needed for metabolism. Thus, in biosynthesis, any of an array of compounds, from simple to complex, are converted into other compounds, and so it includes both the catabolism and anabolism (building up and breaking down) of complex molecules (including macromolecules). Biosynthetic processes are often represented via charts of metabolic pathways. A particular biosynthetic pathway may be located within a single cellular organelle (e.g., mitochondrial fatty acid synthesis pathways), while others involve enzymes that are located across an array of cellular organelles and structures (e.g., the biosynthesis of glycosylated cell surface proteins).
Elements of biosynthesis
Elements of biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavourable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the simplest sense, the reactions that occur in biosynthesis have the following format:
Reactant ->[enzyme] Product
Some variations of this basic equation which will be discussed later in more detail are:
Simple compounds which are converted into other compounds, usually as part of a multiple step reaction pathway. Two examples of this type of reaction occur during the formation of nucleic acids and the charging of tRNA prior to translation. For some of these steps, chemical energy is required:
Precursor molecule + ATP <=> product-AMP + PP_i
Simple compounds that are converted into other compounds with the assistance of cofactors. For example, the synthesis of phospholipids requires acetyl CoA, while the synthesis of another membrane component, sphingolipids, requires NADH and FADH for the formation of the sphingosine backbone. The general equation for these examples is:
Precursor molecule + Cofactor ->[enzyme] macromolecule
Simple compounds that join to create a macromolecule. For example, fatty acids join to form phospholipids. In turn, phospholipids and cholesterol interact noncovalently in order to form the lipid bilayer. This reaction may be depicted as follows:
Molecule 1 + Molecule 2 -> macromolecule
Lipid
Many intricate macromolecules are synthesized in a pattern of simple, repeated structures. For example, the simplest structures of lipids are fatty acids. Fatty acids are hydrocarbon derivatives; they contain a carboxyl group "head" and a hydrocarbon chain "tail". These fatty acids create larger components, which in turn incorporate noncovalent interactions to form the lipid bilayer.
Fatty acid chains are found in two major components of membrane lipids: phospholipids and sphingolipids. A third major membrane component, cholesterol, does not contain these fatty acid units.
Eukaryotic phospholipids
The foundation of all biomembranes consists of a bilayer structure of phospholipids. The phospholipid molecule is amphipathic; it contains a hydrophilic polar head and a hydrophobic nonpolar tail. The phospholipid heads interact with each other and aqueous media, while the hydrocarbon tails orient themselves in the center, away from water. These latter interactions drive the bilayer structure that acts as a barrier for ions and molecules.
There are various types of phospholipids; consequently, their synthesis pathways differ. However, the first step in phospholipid synthesis involves the formation of phosphatidate or diacylglycerol 3-phosphate at the endoplasmic reticulum and outer mitochondrial membrane. The synthesis pathway is found below:
The pathway starts with glycerol 3-phosphate, which gets converted to lysophosphatidate via the addition of a fatty acid chain provided by acyl coenzyme A. Then, lysophosphatidate is converted to phosphatidate via the addition of another fatty acid chain contributed by a second acyl CoA; all of these steps are catalyzed by the glycerol phosphate acyltransferase enzyme. Phospholipid synthesis continues in the endoplasmic reticulum, and the biosynthesis pathway diverges depending on the components of the particular phospholipid.
Sphingolipids
Like phospholipids, these fatty acid derivatives have a polar head and nonpolar tails. Unlike phospholipids, sphingolipids have a sphingosine backbone. Sphingolipids exist in eukaryotic cells and are particularly abundant in the central nervous system. For example, sphingomyelin is part of the myelin sheath of nerve fibers.
Sphingolipids are formed from ceramides that consist of a fatty acid chain attached to the amino group of a sphingosine backbone. These ceramides are synthesized from the acylation of sphingosine. The biosynthetic pathway for sphingosine is found below:
During sphingosine synthesis, palmitoyl CoA and serine undergo a condensation reaction which results in the formation of 3-dehydrosphinganine. This product is then reduced to form dihydrosphingosine, which is converted to sphingosine via an FAD-dependent oxidation.
Cholesterol
This lipid belongs to a class of molecules called sterols. Sterols have four fused rings and a hydroxyl group. Cholesterol is a particularly important molecule. Not only does it serve as a component of lipid membranes, it is also a precursor to several steroid hormones, including cortisol, testosterone, and estrogen.
Cholesterol is synthesized from acetyl CoA. The pathway is shown below:
More generally, this synthesis occurs in three stages, with the first stage taking place in the cytoplasm and the second and third stages occurring in the endoplasmic reticulum. The stages are as follows:
1. The synthesis of isopentenyl pyrophosphate, the "building block" of cholesterol
2. The formation of squalene via the condensation of six molecules of isopentenyl pyrophosphate
3. The conversion of squalene into cholesterol via several enzymatic reactions
Nucleotides
The biosynthesis of nucleotides involves enzyme-catalyzed reactions that convert substrates into more complex products. Nucleotides are the building blocks of DNA and RNA. Nucleotides are composed of a five-membered ring formed from ribose sugar in RNA, and deoxyribose sugar in DNA; these sugars are linked to a purine or pyrimidine base with a glycosidic bond and a phosphate group at the 5' location of the sugar.
Purine nucleotides
The RNA nucleotides adenosine and guanosine consist of a purine base attached to a ribose sugar with a glycosidic bond. In the case of the DNA nucleotides deoxyadenosine and deoxyguanosine, the purine bases are attached to a deoxyribose sugar with a glycosidic bond. The purine bases on DNA and RNA nucleotides are synthesized in a twelve-step reaction mechanism present in most single-celled organisms. Higher eukaryotes employ a similar reaction mechanism in ten reaction steps. Purine bases are synthesized by converting phosphoribosyl pyrophosphate (PRPP) to inosine monophosphate (IMP), which is the first key intermediate in purine base biosynthesis. Further enzymatic modification of IMP produces the adenosine and guanosine bases of nucleotides.
The first step in purine biosynthesis is a condensation reaction, performed by glutamine-PRPP amidotransferase. This enzyme transfers the amino group from glutamine to PRPP, forming 5-phosphoribosylamine. The following step requires the activation of glycine by the addition of a phosphate group from ATP.
GAR synthetase performs the condensation of activated glycine onto PRPP, forming glycineamide ribonucleotide (GAR).
GAR transformylase adds a formyl group onto the amino group of GAR, forming formylglycinamide ribonucleotide (FGAR).
FGAR amidotransferase catalyzes the addition of a nitrogen group to FGAR, forming formylglycinamidine ribonucleotide (FGAM).
FGAM cyclase catalyzes ring closure, which involves removal of a water molecule, forming the 5-membered imidazole ring 5-aminoimidazole ribonucleotide (AIR).
N5-CAIR synthetase transfers a carboxyl group, forming the intermediate N5-carboxyaminoimidazole ribonucleotide (N5-CAIR).
N5-CAIR mutase rearranges the carboxyl functional group and transfers it onto the imidazole ring, forming carboxyaminoimidazole ribonucleotide (CAIR). The two-step mechanism of CAIR formation from AIR is mostly found in single-celled organisms. Higher eukaryotes contain the enzyme AIR carboxylase, which transfers a carboxyl group directly to the AIR imidazole ring, forming CAIR.
SAICAR synthetase forms a peptide bond between aspartate and the added carboxyl group of the imidazole ring, forming N-succinyl-5-aminoimidazole-4-carboxamide ribonucleotide (SAICAR).
SAICAR lyase removes the carbon skeleton of the added aspartate, leaving the amino group and forming 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR).
AICAR transformylase transfers a carbonyl group to AICAR, forming N-formylaminoimidazole-4-carboxamide ribonucleotide (FAICAR).
The final step involves the enzyme IMP synthase, which performs the purine ring closure and forms the inosine monophosphate intermediate.
Pyrimidine nucleotides
Other DNA and RNA nucleotide bases that are linked to the ribose sugar via a glycosidic bond are thymine, cytosine and uracil (which is only found in RNA).
Uridine monophosphate biosynthesis involves an enzyme that is located in the mitochondrial inner membrane and multifunctional enzymes that are located in the cytosol.
The first step involves the enzyme carbamoyl phosphate synthase combining glutamine with CO2 in an ATP-dependent reaction to form carbamoyl phosphate.
Aspartate carbamoyltransferase condenses carbamoyl phosphate with aspartate to form carbamoyl aspartate.
Dihydroorotase performs ring closure, a reaction that loses water, to form dihydroorotate.
Dihydroorotate dehydrogenase, located within the mitochondrial inner membrane, oxidizes dihydroorotate to orotate.
Orotate phosphoribosyltransferase (OMP pyrophosphorylase) condenses orotate with PRPP to form orotidine-5'-phosphate.
OMP decarboxylase catalyzes the conversion of orotidine-5'-phosphate to UMP.
After the uridine nucleotide base is synthesized, the other bases, cytosine and thymine, are synthesized. Cytosine biosynthesis is a two-step reaction which involves the conversion of UMP to UTP. Phosphate addition to UMP is catalyzed by a kinase enzyme. The enzyme CTP synthase catalyzes the next reaction step: the conversion of UTP to CTP by transferring an amino group from glutamine to uridine; this forms the cytosine base of CTP. The overall reaction is: UTP + ATP + glutamine ⇌ CTP + ADP + P_i + glutamate.
Cytosine is a base that is present in both DNA and RNA. However, uracil is only found in RNA. Therefore, after UTP is synthesized, it must be converted into a deoxy form to be incorporated into DNA. This conversion involves the enzyme ribonucleoside triphosphate reductase. This reaction, which removes the 2'-OH of the ribose sugar to generate deoxyribose, is not affected by the bases attached to the sugar. This non-specificity allows ribonucleoside triphosphate reductase to convert all nucleotide triphosphates to deoxyribonucleotides by a similar mechanism.
In contrast to uracil, thymine bases are found mostly in DNA, not RNA. Cells do not normally contain thymine bases that are linked to ribose sugars in RNA, thus indicating that cells only synthesize deoxyribose-linked thymine. The enzyme thymidylate synthetase is responsible for synthesizing thymine residues by converting dUMP to dTMP. This reaction transfers a methyl group onto the uracil base of dUMP to generate dTMP. The thymidylate synthase reaction is: dUMP + 5,10-methylenetetrahydrofolate ⇌ dTMP + dihydrofolate.
DNA
Although there are differences between eukaryotic and prokaryotic DNA synthesis, the following section denotes key characteristics of DNA replication shared by both organisms.
DNA is composed of nucleotides that are joined by phosphodiester bonds. DNA synthesis, which takes place in the nucleus, is a semiconservative process, which means that the resulting DNA molecule contains an original strand from the parent structure and a new strand. DNA synthesis is catalyzed by a family of DNA polymerases that require four deoxynucleoside triphosphates, a template strand, and a primer with a free 3'OH onto which nucleotides are incorporated.
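The templating rule itself is simple enough to sketch: each incoming nucleotide is selected by base pairing with the template, and the new strand is built antiparallel to it (the function below is an illustration, not a model of polymerase kinetics):

```python
# A minimal sketch of the templating rule DNA polymerase follows: each
# new nucleotide is chosen by Watson-Crick pairing with the template,
# and the new strand grows 5'->3', antiparallel to the template.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template_3to5: str) -> str:
    """Return the new strand (5'->3') copied from a template read 3'->5'."""
    return "".join(PAIR[base] for base in template_3to5)

print(synthesize("TACGGA"))   # template 3'-TACGGA-5' -> new strand 5'-ATGCCT-3'
```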
In order for DNA replication to occur, a replication fork is created by enzymes called helicases which unwind the DNA helix. Topoisomerases at the replication fork remove supercoils caused by DNA unwinding, and single-stranded DNA binding proteins maintain the two single-stranded DNA templates stabilized prior to replication.
DNA synthesis is initiated by the RNA polymerase primase, which makes an RNA primer with a free 3'OH. This primer is attached to the single-stranded DNA template, and DNA polymerase elongates the chain by incorporating nucleotides; DNA polymerase also proofreads the newly synthesized DNA strand.
During the polymerization reaction catalyzed by DNA polymerase, a nucleophilic attack occurs by the 3'OH of the growing chain on the innermost phosphorus atom of a deoxynucleoside triphosphate; this yields the formation of a phosphodiester bridge that attaches a new nucleotide and releases pyrophosphate.
Two types of strands are created simultaneously during replication: the leading strand, which is synthesized continuously and grows towards the replication fork, and the lagging strand, which is made discontinuously in Okazaki fragments and grows away from the replication fork. Okazaki fragments are covalently joined by DNA ligase to form a continuous strand.
Then, to complete DNA replication, RNA primers are removed, and the resulting gaps are replaced with DNA and joined via DNA ligase.
Amino acids
A protein is a polymer that is composed of amino acids linked by peptide bonds. There are more than 300 amino acids found in nature, of which only twenty-two, known as the proteinogenic amino acids, are the building blocks of protein. Only green plants and most microbes are able to synthesize all 20 of the standard amino acids that are needed by all living species. Mammals can synthesize only ten of the twenty standard amino acids. The others (valine, methionine, leucine, isoleucine, phenylalanine, lysine, threonine and tryptophan for adults, plus histidine and arginine for babies) must be obtained through the diet.
Amino acid basic structure
The general structure of the standard amino acids includes a primary amino group, a carboxyl group, and a functional group attached to the α-carbon. The different amino acids are identified by the functional group. As a result of the three different groups attached to the α-carbon, amino acids are asymmetrical molecules. For all standard amino acids except glycine, the α-carbon is a chiral center. In the case of glycine, the α-carbon has two hydrogen atoms, thus adding symmetry to this molecule. The amino acids found in life have the L-isoform conformation. Proline is structurally exceptional: its functional group on the α-carbon forms a ring with the amino group.
Nitrogen source
One major step in amino acid biosynthesis involves incorporating a nitrogen group onto the α-carbon. In cells, there are two major pathways of incorporating nitrogen groups. One pathway involves the enzyme glutamine oxoglutarate aminotransferase (GOGAT) which removes the amide amino group of glutamine and transfers it onto 2-oxoglutarate, producing two glutamate molecules. In this catalysis reaction, glutamine serves as the nitrogen source.
The other pathway for incorporating nitrogen onto the α-carbon of amino acids involves the enzyme glutamate dehydrogenase (GDH). GDH is able to transfer ammonia onto 2-oxoglutarate and form glutamate. Furthermore, the enzyme glutamine synthetase (GS) is able to transfer ammonia onto glutamate and synthesize glutamine, replenishing glutamine.
The glutamate family of amino acids
The glutamate family of amino acids includes the amino acids that derive from the amino acid glutamate. This family includes: glutamate, glutamine, proline, and arginine. This family also includes the amino acid lysine, which is derived from α-ketoglutarate.
The biosynthesis of glutamate and glutamine is a key step in the nitrogen assimilation discussed above. The enzymes GOGAT and GDH catalyze the nitrogen assimilation reactions.
In bacteria, the enzyme glutamate 5-kinase initiates the biosynthesis of proline by transferring a phosphate group from ATP onto glutamate. The next reaction is catalyzed by the enzyme pyrroline-5-carboxylate synthase (P5CS), which catalyzes the reduction of the γ-carboxyl group of L-glutamate 5-phosphate. This results in the formation of glutamate semialdehyde, which spontaneously cyclizes to pyrroline-5-carboxylate. Pyrroline-5-carboxylate is further reduced by the enzyme pyrroline-5-carboxylate reductase (P5CR) to yield the amino acid proline.
In the first step of arginine biosynthesis in bacteria, glutamate is acetylated by transferring the acetyl group from acetyl-CoA at the N-α position; this prevents spontaneous cyclization. The enzyme N-acetylglutamate synthase (glutamate N-acetyltransferase) is responsible for catalyzing the acetylation step. Subsequent steps are catalyzed by the enzymes N-acetylglutamate kinase, N-acetyl-gamma-glutamyl-phosphate reductase, and acetylornithine/succinyldiamino pimelate aminotransferase and yield N-acetyl-L-ornithine. The acetyl group of acetylornithine is removed by the enzyme acetylornithinase (AO) or ornithine acetyltransferase (OAT), and this yields ornithine. Ornithine is then converted to arginine via the intermediates citrulline and argininosuccinate.
There are two distinct lysine biosynthetic pathways: the diaminopimelic acid pathway and the α-aminoadipate pathway. The most common of the two synthetic pathways is the diaminopimelic acid pathway; it consists of several enzymatic reactions that add carbon groups to aspartate to yield lysine:
Aspartate kinase initiates the diaminopimelic acid pathway by phosphorylating aspartate and producing aspartyl phosphate.
Aspartate semialdehyde dehydrogenase catalyzes the NADPH-dependent reduction of aspartyl phosphate to yield aspartate semialdehyde.
4-hydroxy-tetrahydrodipicolinate synthase adds a pyruvate group to the β-aspartyl-4-semialdehyde, and a water molecule is removed. This causes cyclization and gives rise to (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate.
4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate by NADPH to yield Δ'-piperideine-2,6-dicarboxylate (2,3,4,5-tetrahydrodipicolinate) and H2O.
Tetrahydrodipicolinate acyltransferase catalyzes the acetylation reaction that results in ring opening and yields N-acetyl α-amino-ε-ketopimelate.
N-succinyl-α-amino-ε-ketopimelate-glutamate aminotransaminase catalyzes the transamination reaction that removes the keto group of N-acetyl α-amino-ε-ketopimelate and replaces it with an amino group to yield N-succinyl-L-diaminopimelate.
N-acyldiaminopimelate deacylase catalyzes the deacylation of N-succinyl-L-diaminopimelate to yield L,L-diaminopimelate.
DAP epimerase catalyzes the conversion of L,L-diaminopimelate to meso-diaminopimelate.
DAP decarboxylase catalyzes the removal of the carboxyl group, yielding L-lysine.
The serine family of amino acids
The serine family of amino acids includes: serine, cysteine, and glycine. Most microorganisms and plants obtain the sulfur for synthesizing methionine from the amino acid cysteine. Furthermore, the conversion of serine to glycine provides the carbons needed for the biosynthesis of methionine and histidine.
During serine biosynthesis, the enzyme phosphoglycerate dehydrogenase catalyzes the initial reaction that oxidizes 3-phospho-D-glycerate to yield 3-phosphonooxypyruvate. The following reaction is catalyzed by the enzyme phosphoserine aminotransferase, which transfers an amino group from glutamate onto 3-phosphonooxypyruvate to yield L-phosphoserine. The final step is catalyzed by the enzyme phosphoserine phosphatase, which dephosphorylates L-phosphoserine to yield L-serine.
There are two known pathways for the biosynthesis of glycine. Organisms that use ethanol and acetate as the major carbon source utilize the glyconeogenic pathway to synthesize glycine. The other pathway of glycine biosynthesis is known as the glycolytic pathway. This pathway converts serine synthesized from the intermediates of glycolysis to glycine. In the glycolytic pathway, the enzyme serine hydroxymethyltransferase catalyzes the cleavage of serine to yield glycine and transfers the cleaved carbon group of serine onto tetrahydrofolate, forming 5,10-methylene-tetrahydrofolate.
Cysteine biosynthesis is a two-step reaction that involves the incorporation of inorganic sulfur. In microorganisms and plants, the enzyme serine acetyltransferase catalyzes the transfer of acetyl group from acetyl-CoA onto L-serine to yield O-acetyl-L-serine. The following reaction step, catalyzed by the enzyme O-acetyl serine (thiol) lyase, replaces the acetyl group of O-acetyl-L-serine with sulfide to yield cysteine.
The aspartate family of amino acids
The aspartate family of amino acids includes: threonine, lysine, methionine, isoleucine, and aspartate. Lysine and isoleucine are considered part of the aspartate family even though part of their carbon skeleton is derived from pyruvate. In the case of methionine, the methyl carbon is derived from serine, and the sulfur group, in most organisms, is derived from cysteine.
The biosynthesis of aspartate is a one step reaction that is catalyzed by a single enzyme. The enzyme aspartate aminotransferase catalyzes the transfer of an amino group from aspartate onto α-ketoglutarate to yield glutamate and oxaloacetate. Asparagine is synthesized by an ATP-dependent addition of an amino group onto aspartate; asparagine synthetase catalyzes the addition of nitrogen from glutamine or soluble ammonia to aspartate to yield asparagine.
The diaminopimelic acid biosynthetic pathway of lysine belongs to the aspartate family of amino acids. This pathway involves nine enzyme-catalyzed reactions that convert aspartate to lysine.
Aspartate kinase catalyzes the initial step in the diaminopimelic acid pathway by transferring a phosphoryl from ATP onto the carboxylate group of aspartate, which yields aspartyl-β-phosphate.
Aspartate-semialdehyde dehydrogenase catalyzes the reduction reaction by dephosphorylation of aspartyl-β-phosphate to yield aspartate-β-semialdehyde.
Dihydrodipicolinate synthase catalyzes the condensation reaction of aspartate-β-semialdehyde with pyruvate to yield dihydrodipicolinic acid.
4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of dihydrodipicolinic acid to yield tetrahydrodipicolinic acid.
Tetrahydrodipicolinate N-succinyltransferase catalyzes the transfer of a succinyl group from succinyl-CoA on to tetrahydrodipicolinic acid to yield N-succinyl-L-2,6-diaminoheptanedioate.
N-succinyldiaminopimelate aminotransferase catalyzes the transfer of an amino group from glutamate onto N-succinyl-L-2,6-diaminoheptanedioate to yield N-succinyl-L,L-diaminopimelic acid.
Succinyl-diaminopimelate desuccinylase catalyzes the removal of acyl group from N-succinyl-L,L-diaminopimelic acid to yield L,L-diaminopimelic acid.
Diaminopimelate epimerase catalyzes the inversion of the α-carbon of L,L-diaminopimelic acid to yield meso-diaminopimelic acid.
Diaminopimelate decarboxylase catalyzes the final step in lysine biosynthesis, removing the carboxyl group (released as carbon dioxide) from meso-diaminopimelic acid to yield L-lysine.
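Since the nine steps form a strictly ordered chain, the pathway lends itself to a simple tabular representation. Below is a minimal, purely illustrative Python sketch that encodes the steps above as (enzyme, substrate, product) triples; the variable and function names are invented for this example, and the code is a data sketch, not a biochemical simulation.

```python
# The diaminopimelic acid (DAP) pathway from the text, encoded as
# (enzyme, substrate, product) triples in reaction order.
DAP_PATHWAY = [
    ("aspartate kinase", "aspartate", "aspartyl-beta-phosphate"),
    ("aspartate-semialdehyde dehydrogenase", "aspartyl-beta-phosphate", "aspartate-beta-semialdehyde"),
    ("dihydrodipicolinate synthase", "aspartate-beta-semialdehyde + pyruvate", "dihydrodipicolinic acid"),
    ("4-hydroxy-tetrahydrodipicolinate reductase", "dihydrodipicolinic acid", "tetrahydrodipicolinic acid"),
    ("tetrahydrodipicolinate N-succinyltransferase", "tetrahydrodipicolinic acid", "N-succinyl-L-2,6-diaminoheptanedioate"),
    ("N-succinyldiaminopimelate aminotransferase", "N-succinyl-L-2,6-diaminoheptanedioate", "N-succinyl-L,L-diaminopimelic acid"),
    ("succinyl-diaminopimelate desuccinylase", "N-succinyl-L,L-diaminopimelic acid", "L,L-diaminopimelic acid"),
    ("diaminopimelate epimerase", "L,L-diaminopimelic acid", "meso-diaminopimelic acid"),
    ("diaminopimelate decarboxylase", "meso-diaminopimelic acid", "L-lysine"),
]

def trace(pathway):
    """Print each enzymatic step of the pathway in order."""
    for i, (enzyme, substrate, product) in enumerate(pathway, 1):
        print(f"step {i}: {enzyme}: {substrate} -> {product}")

trace(DAP_PATHWAY)
```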
Proteins
Protein synthesis occurs via a process called translation. During translation, genetic material called mRNA is read by ribosomes to generate a protein polypeptide chain. This process requires transfer RNA (tRNA) which serves as an adaptor by binding amino acids on one end and interacting with mRNA at the other end; the latter pairing between the tRNA and mRNA ensures that the correct amino acid is added to the chain. Protein synthesis occurs in three phases: initiation, elongation, and termination. Prokaryotic (archaeal and bacterial) translation differs from eukaryotic translation; however, this section will mostly focus on the commonalities between the two processes.
Additional background
Before translation can begin, the process of binding a specific amino acid to its corresponding tRNA must occur. This reaction, called tRNA charging, is catalyzed by aminoacyl tRNA synthetase. A specific tRNA synthetase is responsible for recognizing and charging a particular amino acid. Furthermore, this enzyme has special discriminator regions to ensure the correct binding between tRNA and its cognate amino acid. The first step for joining an amino acid to its corresponding tRNA is the formation of aminoacyl-AMP:
Amino acid + ATP ⇌ aminoacyl-AMP + PPi
This is followed by the transfer of the aminoacyl group from aminoacyl-AMP to a tRNA molecule. The resulting molecule is aminoacyl-tRNA:
Aminoacyl-AMP + tRNA ⇌ aminoacyl-tRNA + AMP
The combination of these two steps, both of which are catalyzed by aminoacyl tRNA synthetase, produces a charged tRNA that is ready to add amino acids to the growing polypeptide chain.
In addition to binding an amino acid, tRNA has a three nucleotide unit called an anticodon that base pairs with specific nucleotide triplets on the mRNA called codons; codons encode a specific amino acid. This interaction is possible thanks to the ribosome, which serves as the site for protein synthesis. The ribosome possesses three tRNA binding sites: the aminoacyl site (A site), the peptidyl site (P site), and the exit site (E site).
There are numerous codons within an mRNA transcript, and it is very common for an amino acid to be specified by more than one codon; this phenomenon is called degeneracy. In all, there are 64 codons, 61 of which code for one of the 20 amino acids, while the remaining three specify chain termination.
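To make degeneracy concrete, the sketch below translates a short mRNA using a hypothetical toy subset of the standard codon table (the real table has all 64 entries); the dictionary contents and function names here are invented for illustration.

```python
# A toy subset of the genetic code: note the many-to-one mapping
# from codons to amino acids (degeneracy).
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",                          # two codons, one amino acid
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    "AUG": "Met",                                        # also the usual start codon
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",         # termination codons
}

def translate(mrna):
    """Read an mRNA string codon by codon until a stop codon is reached."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")  # "???" marks codons outside the toy table
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUCUAUAA"))  # -> Met-Phe-Leu
```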
Translation in steps
As previously mentioned, translation occurs in three phases: initiation, elongation, and termination.
Step 1: Initiation
The completion of the initiation phase is dependent on the following three events:
1. The recruitment of the ribosome to mRNA
2. The binding of a charged initiator tRNA into the P site of the ribosome
3. The proper alignment of the ribosome with mRNA's start codon
Step 2: Elongation
Following initiation, the polypeptide chain is extended via anticodon:codon interactions, with the ribosome adding amino acids to the polypeptide chain one at a time. The following steps must occur to ensure the correct addition of amino acids:
1. The binding of the correct tRNA into the A site of the ribosome
2. The formation of a peptide bond between the tRNA in the A site and the polypeptide chain attached to the tRNA in the P site
3. Translocation or advancement of the tRNA-mRNA complex by three nucleotides
Translocation "kicks off" the tRNA at the E site and shifts the tRNA from the A site into the P site, leaving the A site free for an incoming tRNA to add another amino acid.
Step 3: Termination
The last stage of translation occurs when a stop codon enters the A site. Then, the following steps occur:
1. The recognition of codons by release factors, which causes the hydrolysis of the polypeptide chain from the tRNA located in the P site
2. The release of the polypeptide chain
3. The dissociation and "recycling" of the ribosome for future translation processes
A summary of the key players in translation:
mRNA – the transcript whose codons specify the amino acid sequence
tRNA – the adaptor whose anticodon pairs with a codon and whose other end carries an amino acid
Aminoacyl tRNA synthetase – the enzyme that charges each tRNA with its cognate amino acid
Ribosome – the site of protein synthesis, with A, P, and E tRNA-binding sites
Release factors – the proteins that recognize stop codons and trigger release of the polypeptide
Diseases associated with macromolecule deficiency
Errors in biosynthetic pathways can have deleterious consequences including the malformation of macromolecules or the underproduction of functional molecules. Below are examples that illustrate the disruptions that occur due to these inefficiencies.
Familial hypercholesterolemia: this disorder is characterized by the absence of functional receptors for LDL. Deficiencies in the formation of LDL receptors may cause faulty receptors which disrupt the endocytic pathway, inhibiting the entry of LDL into the liver and other cells. This causes a buildup of LDL in the blood plasma, which results in atherosclerotic plaques that narrow arteries and increase the risk of heart attacks.
Lesch–Nyhan syndrome: this genetic disease is characterized by self-mutilation, mental deficiency, and gout. It is caused by the absence of hypoxanthine-guanine phosphoribosyltransferase, which is a necessary enzyme for purine nucleotide formation. The lack of enzyme reduces the level of necessary nucleotides and causes the accumulation of biosynthesis intermediates, which results in the aforementioned unusual behavior.
Severe combined immunodeficiency (SCID): SCID is characterized by a loss of T cells. Shortage of these immune system components increases the susceptibility to infectious agents because the affected individuals cannot develop immunological memory. This immunological disorder results from a deficiency in adenosine deaminase activity, which causes a buildup of dATP. These dATP molecules then inhibit ribonucleotide reductase, which prevents DNA synthesis.
Huntington's disease: this neurological disease is caused by errors that occur during DNA synthesis. These errors or mutations lead to the expression of a mutant huntingtin protein, which contains repetitive glutamine residues that are encoded by expanding CAG trinucleotide repeats in the gene. Huntington's disease is characterized by neuronal loss and gliosis. Symptoms of the disease include movement disorder, cognitive decline, and behavioral disorder.
See also
Lipids
Phospholipid bilayer
Nucleotides
DNA
DNA replication
Proteinogenic amino acid
Codon table
Prostaglandin
Porphyrins
Chlorophylls and bacteriochlorophylls
Vitamin B12
References
Biochemical reactions
Metabolism
Biocentrism (ethics) | Biocentrism (from Greek βίος bios, "life" and κέντρον kentron, "center"), in a political and ecological sense, as well as literally, is an ethical point of view that extends inherent value to all living things. It is an understanding of how the earth works, particularly as it relates to its biosphere or biodiversity. It stands in contrast to anthropocentrism, which centers on the value of humans. The related ecocentrism extends inherent value to the whole of nature.
Advocates of biocentrism often promote the preservation of biodiversity, animal rights, and environmental protection. The term has also been employed by advocates of "left biocentrism", which combines deep ecology with an "anti-industrial and anti-capitalist" position (according to David Orton et al.).
Definition
In its simplest form, biocentrism is the belief that all living organisms, regardless of species, complexity, or traits, individually possess equal value and the same right to live.
Usually, the term biocentrism encompasses all environmental ethics that "extend the status of moral object from human beings to all living things in nature". Biocentric ethics calls for a rethinking of the relationship between humans and nature. It states that nature does not exist simply to be used or consumed by humans, but that humans are simply one species amongst many, and that because we are part of an ecosystem, any actions which negatively affect the living systems of which we are a part adversely affect us as well, whether or not we maintain a biocentric worldview. Biocentrists observe that all species have inherent value, and that humans are not "superior" to other species in a moral or ethical sense.
The four main pillars of a biocentric outlook are:
Humans and all other species are members of Earth's community.
All species are part of a system of interdependence.
All living organisms pursue their own "good" in their own ways.
Human beings are not inherently superior to other living things.
The most important of these four main pillars is likely the idea that human beings are not inherently superior to other living things. Biocentrists hold divergent views on many specific points; not all even subscribe to the abstract concept of value, which is why heavy emphasis is placed on the fourth pillar.
Relationship with animals and environment
Biocentrism views individual species as parts of the living biosphere. It observes the consequences of reducing biodiversity on both small and large scales and points to the inherent value all species have to the environment.
The environment is seen for what it is; the biosphere within which we live and depend on the maintaining of its diversity for our health. From these observations the ethical points are raised.
History and development
Biocentric ethics differs from classical and traditional ethical thinking. Rather than focusing on strict moral rules, as in Classical ethics, it focuses on attitudes and character. In contrast with traditional ethics, it is nonhierarchical and gives priority to the natural world rather than to humankind exclusively.
Biocentric ethics includes Albert Schweitzer's ethics of "Reverence for Life", Peter Singer's ethics of Animal Liberation and Paul W. Taylor's ethics of biocentric egalitarianism.
Albert Schweitzer's "reverence for life" principle was a precursor of modern biocentric ethics. In contrast with traditional ethics, the ethics of "reverence for life" denies any distinction between "high and low" or "valuable and less valuable" life forms, dismissing such categorization as arbitrary and subjective. Conventional ethics concerned itself exclusively with human beings—that is to say, morality applied only to interpersonal relationships—whereas Schweitzer's ethical philosophy introduced a "depth, energy, and function that differ[s] from the ethics that merely involved humans". "Reverence for life" was a "new ethics, because it is not only an extension of ethics, but also a transformation of the nature of ethics".
Similarly, Peter Singer argues that non-human animals deserve the same equality of consideration that we extend to human beings. His argument is roughly as follows:
Membership in the species Homo sapiens is the only criterion of moral importance that includes all humans and excludes all non-humans.
Using membership in the species Homo sapiens as a criterion of moral importance is completely arbitrary.
Of the remaining criteria we might consider, only sentience is a plausible criterion of moral importance.
Using sentience as a criterion of moral importance entails that we extend the same basic moral consideration (i.e. "basic principle of equality") to other sentient creatures that we do to human beings.
Therefore, we ought to extend to animals the same equality of consideration that we extend to human beings.
Singer's work, while notable in the canon of environmental ethics, should not be considered as fully biocentric. Singer's ethics is extended from humans to nonhuman animals because the criterion for moral inclusion (sentience) is found in both humans and nonhuman animals, thus it would be arbitrary to deny it to nonhuman animals simply because they were not human. However, not all biological entities are sentient: consider algae, plants and trees, fungi, lichens, mollusks, or protozoa, for example. For an ethical theory to be biocentric, it must have a reason for extending ethical inclusion to the entire biosphere (as in Taylor and Schweitzer). The requirement for environmental ethics to move beyond sentience as criteria for inclusion in the moral realm is discussed in Tom Regan's 1981 paper "The Nature and Possibility of an Environmental Ethic".
Biocentrism is most commonly associated with the work of Paul W. Taylor, especially his book Respect for Nature: A Theory of Environmental Ethics (1986). Taylor maintains that biocentrism is an "attitude of respect for nature", whereby one attempts to make an effort to live one's life in a way that respects the welfare and inherent worth of all living creatures. Taylor states that:
Humans are members of a community of life along with all other species, and on equal terms.
This community consists of a system of interdependence between all members, both physically, and in terms of relationships with other species.
Every organism is a "teleological centre of life", that is, each organism has a purpose and a reason for being, which is inherently "good" or "valuable".
Humans are not inherently superior to other species.
Historian Donald Worster traces today's biocentric philosophies, which he sees as part of a recovery of a sense of kinship between man and nature, to the reaction by the British intelligentsia of the Victorian era against the Christian ethic of dominion over nature. He has pointed to Charles Darwin as an important spokesman for the biocentric view in ecological thought and quotes from Darwin's Notebook on Transmutation of Species (1837): If we choose to let conjecture run wild, then animals, our fellow brethren in pain, diseases, death, suffering and famine—our slaves in the most laborious works, our companions in our amusement—they may partake of our origin in one common ancestor—we may be all netted together.
In 1859, Charles Darwin published his book On the Origin of Species. This publication sparked the beginning of biocentrist views by introducing evolution and "its removal of humans from their supernatural origins and placement into the framework of natural laws".
The work of Aldo Leopold has also been associated with biocentrism. The essay "The Land Ethic" in Leopold's book A Sand County Almanac (1949) points out that although throughout history women and slaves have been considered property, all people have now been granted rights and freedoms. Leopold notes that today land is still considered property as people once were. He asserts that ethics should be extended to the land as "an evolutionary possibility and an ecological necessity". He argues that while people's instincts encourage them to compete with others, their ethics encourage them to co-operate with others. He suggests that "the land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land". In a sense this attitude would encourage humans to co-operate with the land rather than compete with it.
Outside of formal philosophical works biocentric thought is common among pre-colonial tribal peoples who knew no world other than the natural world.
In law
The paradigm of biocentrism and the values that it promotes are beginning to be used in law.
In recent years (as of 2011), cities in Maine, Pennsylvania, New Hampshire and Virginia have adopted laws that protect the rights of nature. The purpose of these laws is to prevent the degradation of nature, especially by corporations who may want to exploit natural resources and land space, and to also use the environment as a dumping ground for toxic waste.
The first country to include rights of nature in its constitution is Ecuador (see 2008 Constitution of Ecuador). Article 71 states that nature "has the right to integral respect for its existence and for the maintenance and regeneration of its life cycles, structure, functions and evolutionary processes".
In religion
Islam
In Islam:
In Islam, biocentric ethics stem from the belief that all of creation belongs to Allah (God), not humans, and to assume that non-human animals and plants exist merely to benefit humankind leads to environmental destruction and misuse. As all living organisms exist to praise God, human destruction of other living things prevents the earth's natural and subtle means of praising God. The Qur'an acknowledges that humans are not the only all-important creatures and emphasizes a respect for nature. Muhammad was once asked whether there would be a reward for those who show charity to nature and animals, to which he replied, "for charity shown to each creature with a wet heart [i.e. that is alive], there is a reward."
Hinduism
In Hinduism:
Hinduism contains many elements of biocentrism. In Hinduism, humans have no special authority over other creatures, and all living things have souls ('atman'). Brahman (God) is the "efficient cause" and Prakrti (nature) is the "material cause" of the universe. However, Brahman and Prakrti are not considered truly divided: "They are one in [sic] the same, or perhaps better stated, they are the one in the many and the many in the one."
However, while Hinduism does not grant humans the same direct authority over nature that the Judeo-Christian-Islamic god grants, humans are subject to a "higher and more authoritative responsibility for creation". The most important aspect of this is the doctrine of Ahimsa (non-violence). The Yājñavalkya Smṛti warns, "the wicked person who kills animals which are protected has to live in hell fire for the days equal to the number of hairs on the body of that animal". The essential aspect of this doctrine is the belief that the Supreme Being incarnates into the forms of various species. The Hindu belief in Saṃsāra (the cycle of life, death and rebirth) encompasses reincarnation into non-human forms. It is believed that one lives 8,400,000 lifetimes before one becomes a human. Each species is in this process of samsara until one attains moksha (liberation).
Another doctrinal source for the equal treatment of all life is found in the Rigveda. The Rigveda states that trees and plants possess divine healing properties. It is still popularly believed that every tree has a Vriksa-devata (a tree deity). Trees are ritually worshiped through prayer, offerings, and the sacred thread ceremony. The Vriksa-devata is worshiped as a manifestation of the Divine. Tree planting is considered a religious duty.
Jainism
In Jainism:
The Jaina tradition exists in tandem with Hinduism and shares many of its biocentric elements.
Ahimsa (non-violence), the central teaching of Jainism, means more than not hurting other humans. It means intending not to cause physical, mental or spiritual harm to any part of nature. In the words of Mahavira: 'You are that which you wish to harm.' Compassion is a pillar of non-violence. Jainism encourages people to practice an attitude of compassion towards all life.
The principle of interdependence is also very important in Jainism. This states that all of nature is bound together, and that "if one does not care for nature one does not care for oneself".
Another essential Jain teaching is self-restraint. Jainism discourages wasting the gifts of nature, and encourages its practitioners to reduce their needs as far as possible. Gandhi, a great proponent of Jainism, once stated "There is enough in this world for human needs, but not for human wants."
Buddhism
In Buddhism:
The Buddha's teachings encourage people "to live simply, to cherish tranquility, to appreciate the natural cycle of life". Buddhism emphasizes that everything in the universe affects everything else. "Nature is an ecosystem in which trees affect climate, the soil, and the animals, just as the climate affects the trees, the soil, the animals and so on. The ocean, the sky, the air are all interrelated, and interdependent—water is life and air is life."
Although this holistic approach is more ecocentric than biocentric, it is also biocentric, as it maintains that all living things are important and that humans are not above other creatures or nature. Buddhism teaches that "once we treat nature as our friend, to cherish it, then we can see the need to change from the attitude of dominating nature to an attitude of working with nature—we are an intrinsic part of all existence rather than seeing ourselves as in control of it."
Christianity
Within the Catholic tradition of Christian thought, Pope Benedict XVI noted that "the Church’s magisterium expresses grave misgivings about notions of the environment inspired by ecocentrism and biocentrism". This, he stated, was because "such notions eliminate the difference of identity and worth between the human person and other living things. In the name of a supposedly egalitarian vision of the "dignity" of all living creatures, such notions end up abolishing the distinctiveness and superior role of human beings."
Criticism
Biocentrism has faced criticism for a number of reasons. Some of this criticism grows out of the concern that biocentrism is an anti-human paradigm and that it will not hesitate to sacrifice human well-being for the greater good. Biocentrism has also been criticized for its individualism: it places too much emphasis on the importance of individual life and neglects the importance of collective groups, such as ecosystems.
A more complex form of criticism focuses on the contradictions of biocentrism. Opposed to anthropocentrism, which sees humans as having a higher status than other species, biocentrism puts humans on a par with the rest of nature, and not above it. In his essay A Critique of Anti-Anthropocentric Biocentrism Richard Watson suggests that if this is the case, then "Human ways—human culture—and human actions are as natural as the ways in which any other species of animals behaves". He goes on to suggest that if humans must change their behavior to refrain from disturbing and damaging the natural environment, then that results in setting humans apart from other species and assigning more power to them. This then takes us back to the basic beliefs of anthropocentrism. Watson also claims that the extinction of species is "Nature's way" and that if humans were to instigate their own self-destruction by exploiting the rest of nature, then so be it. Therefore, he suggests that the real reason humans should reduce their destructive behavior in relation to other species is not because we are equals but because the destruction of other species will also result in our own destruction. This view also brings us back to an anthropocentric perspective.
See also
Anarcho-primitivism
Animal cognition
Biodiversity
Biophilia hypothesis
Biotic ethics
Deep ecology
Earth jurisprudence
Ecoauthoritarianism
Ecocentrism
Eco-nationalism
Environmental philosophy
Gaia hypothesis
Gaia philosophy
Green anarchism
Green conservatism
Green libertarianism
Intrinsic value (animal ethics)
Neo-luddite
Painism
Primitivism
Religion and environmentalism
Sentiocentrism
Speciesism
Stewardship (theology)
References
Further reading
Coghlan et al. (2021). "A bolder One Health: expanding the moral circle to optimize health for all". One Health Outlook.
Deep ecology
Environmental ethics
Environmental factor | An environmental factor, ecological factor or eco factor is any factor, abiotic or biotic, that influences living organisms. Abiotic factors include ambient temperature, amount of sunlight, air, soil, water and pH of the water soil in which an organism lives. Biotic factors would include the availability of food organisms and the presence of biological specificity, competitors, predators, and parasites.
Overall
An organism's genotype (e.g., in the zygote) is translated into the adult phenotype through development during the organism's ontogeny, and is subject to influences from many environmental effects. In this context, a phenotype (or phenotypic trait) can be viewed as any definable and measurable characteristic of an organism, such as its body mass or skin color.
Apart from the true monogenic genetic disorders, environmental factors may determine the development of disease in those genetically predisposed to a particular condition. Pollution, stress, physical and mental abuse, diet, exposure to toxins, pathogens, radiation and chemicals found in almost all personal-care products and household cleaners are common environmental factors that determine a large segment of non-hereditary disease.
If a disease process is concluded to be the result of a combination of genetic and environmental factor influences, its etiological origin can be referred to as having a multifactorial pattern.
Cancer is often related to environmental factors. Maintaining a healthy weight, eating a healthy diet, minimizing alcohol and eliminating smoking reduces the risk of developing the disease, according to researchers.
Environmental triggers for asthma and autism have been studied too.
Exposome
The exposome encompasses the set of human environmental (i.e. non-genetic) exposures from conception onwards, complementing the genome. The exposome was first proposed in 2005 by cancer epidemiologist Christopher Paul Wild in an article entitled "Complementing the genome with an 'exposome': the outstanding challenge of environmental exposure measurement in molecular epidemiology". The concept of the exposome and how to assess it has led to lively discussions with varied views in 2010, 2012, 2014 and 2021.
In his 2005 article, Wild stated, "At its most complete, the exposome encompasses life-course environmental exposures (including lifestyle factors), from the prenatal period onwards." The concept was first proposed to draw attention to the need for better and more complete environmental exposure data for causal research, in order to balance the investment in genetics. According to Wild, even incomplete versions of the exposome could be useful to epidemiology. In 2012, Wild outlined methods, including personal sensors, biomarkers, and 'omics' technologies, to better define the exposome. He described three overlapping domains within the exposome:
a general external environment including the urban environment, education, climate factors, social capital, and stress
a specific external environment with specific contaminants, radiation, infections, lifestyle factors (e.g. tobacco, alcohol), diet, physical activity, etc.
an internal environment to include internal biological factors such as metabolic factors, hormones, gut microflora, inflammation, oxidative stress.
In late 2013, this definition was explained in greater depth in the first book on the exposome.
In 2014, the same author revised the definition to include the body's response with its endogenous metabolic processes which alter the processing of chemicals. More recently, evidenced by metabolic exposures in and around the time of pregnancy, the maternal metabolic exposome includes exposures such as maternal obesity/overweight and diabetes, and malnutrition, including high fat/high calorie diets, which are associated with poor fetal, infant and child growth, and increased incidence of obesity and other metabolic disorders in later life.
Measurement
For complex disorders, specific genetic causes appear to account for only 10-30% of the disease incidence, but there has been no standard or systematic way to measure the influence of environmental exposures. Some studies into the interaction of genetic and environmental factors in the incidence of diabetes have demonstrated that "environment-wide association studies" (EWAS, or exposome-wide association studies) may be feasible. However, it is not clear what data sets are most appropriate to represent the value of "E".
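As a rough illustration of the EWAS idea, the hedged sketch below scans synthetic exposure data for associations with a binary disease outcome, using a simple per-exposure two-sample test with Bonferroni correction. Real EWAS analyses use regression models with covariate adjustment; all names, values, and the planted association here are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 500, 20                        # subjects, candidate exposures
exposures = rng.normal(size=(n, k))   # synthetic exposure matrix ("E")
disease = rng.integers(0, 2, size=n)  # synthetic case/control labels
exposures[disease == 1, 3] += 0.5     # plant one true association (exposure 3)

alpha = 0.05 / k                      # Bonferroni threshold for k tests
for j in range(k):
    # Compare exposure j between cases and controls with a t-test.
    t, p = stats.ttest_ind(exposures[disease == 1, j],
                           exposures[disease == 0, j])
    if p < alpha:
        print(f"exposure {j}: p = {p:.2e} (significant)")
```

Scanning every exposure against the outcome is what distinguishes the "environment-wide" approach from testing a single pre-chosen exposure, and the multiple-testing correction is what keeps the scan honest.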
Research initiatives
As of 2016, it may not be possible to measure or model the full exposome, but several European projects have started to make first attempts.
In 2012, the European Commission awarded two large grants to pursue exposome-related research. The HELIX project at the Barcelona-based Centre for Research in Environmental Epidemiology was launched around 2014, and aimed to develop an early-life exposome. A second project, Exposomics, based at Imperial College London, launched in 2012, aimed to use smartphones utilising GPS and environmental sensors to assess exposures.
In late 2013, a major initiative called the "Health and Environment-Wide Associations based on Large Scale population Surveys" or HEALS, began. Touted as the largest environmental health-related study in Europe, HEALS proposes to adopt a paradigm defined by interactions between DNA sequence, epigenetic DNA modifications, gene expression, and environmental factors.
In December 2011, the US National Academy of Sciences hosted a meeting entitled "Emerging Technologies for Measuring Individual Exposomes." A Centers for Disease Control and Prevention overview, "Exposome and Exposomics", outlines the three priority areas for researching the occupational exposome as identified by the National Institute for Occupational Safety and Health. The National Institutes of Health (NIH) has invested in technologies supporting exposome-related research including biosensors, and supports research on gene–environment interactions.
Proposed Human Exposome Project (HEP)
The idea of a Human Exposome Project, analogous to the Human Genome Project, has been proposed and discussed in numerous scientific meetings, but as of 2017, no such project exists. Given the lack of clarity on how science would go about pursuing such a project, support has been lacking. Reports on the issue include:
a 2011 review on the exposome and exposure science by Paul Lioy and Stephen Rappaport, "Exposure science and the exposome: an opportunity for coherence in the environmental health sciences" in the journal Environmental Health Perspectives.
a 2012 report from the United States National Research Council "Exposure Science in the 21st Century: A Vision and A Strategy", outlining the challenges in systematic evaluations of the exposome.
Related fields
The concept of the exposome has contributed to the 2010 proposal of a new paradigm in disease phenotype, "the unique disease principle": Every individual has a unique disease process different from any other individual, considering uniqueness of the exposome and its unique influence on molecular pathologic processes including alterations in the interactome. This principle was first described in neoplastic diseases as "the unique tumor principle". Based on this unique disease principle, the interdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathology and epidemiology.
Socioeconomic drivers
Global change is driven by many factors; however the five main drivers of global change are: population growth, economic growth, technological advances, attitudes, and institutions. These five main drivers of global change can stem from socioeconomic factors, which can in turn be seen as drivers in their own right. Socioeconomic drivers of climate change can be triggered by a social or economic demand for resources, such as a demand for timber or for agricultural crops. In tropical deforestation, for instance, the main driver is the economic opportunity that comes from the extraction of these resources and the conversion of the land to crop or rangelands. These drivers can be manifested at any level, from the global demand for timber all the way down to the household level.
An example of how socioeconomic drivers affect climate change can be seen in the soy bean trading between Brazil and China. The trading of soy beans from Brazil to China has grown immensely in the past few decades. This growth in trade between these two countries is stimulated by socioeconomic drivers. Some of the socioeconomic drivers in play here are the rising demand for Brazilian soy beans in China, the increase in land use change for soy bean production in Brazil, and the importance of strengthening foreign trade between the two countries. All of these socioeconomic drivers have implications in climate change. For instance, an increase in the development of soy bean croplands in Brazil means more and more land must be made available for this resource. This causes forest cover to be converted into croplands, which in its own regard has an impact on the environment. This example of land use change driven by the demand for a resource isn't only happening in Brazil with soy bean production.
Another example came from the European Union's Renewable Energy Directive of 2009, which mandated biofuel development for its member countries. With an international socioeconomic driver of increasing biofuel production come effects on land use in those countries. When agricultural cropland shifts to bioenergy cropland, the original crop supply decreases while the global market for that crop increases. This creates a cascading socioeconomic driver: the need for more agricultural cropland to support the growing demand. However, with the lack of land made available by the crop substitution to biofuels, countries must look to areas further away to re-establish the original croplands. This causes spillover effects in the countries where the new development takes place. For instance, African countries are converting savannas into cropland, and this stems from the socioeconomic driver of wanting to develop biofuels. Furthermore, socioeconomic drivers that cause land use change do not all occur at the international level. These drivers can be experienced all the way down to the household level. Crop substitution doesn't only come from biofuel shifts in agriculture: a major substitution came in Thailand, which switched from the production of opium poppy plants to non-narcotic crops. This caused Thailand's agricultural sector to grow, but it had global rippling effects (opium replacement).
For instance, in Wolong, China, locals use forests as fuelwood to cook and heat their homes, so the socioeconomic driver at play is the local demand for timber to support subsistence in this area. With this driver, locals are depleting their supply of fuelwood, so they have to keep moving further away to extract the resource. This movement and demand for timber is in turn contributing to the loss of pandas in the area, because their ecosystem is being destroyed.
However, when researching local trends, the focus tends to be on outcomes instead of on how changes in the global drivers affect those outcomes. For this reason, community-level planning needs to be implemented when analyzing socioeconomic drivers of change.
In conclusion, socioeconomic drivers at any level play a role in the consequences of human actions on the environment. These drivers have cascading effects on land, humans, resources, and the environment as a whole. Humans therefore need to understand how their socioeconomic drivers can change the way we live. Returning to the soy bean example: when supply can't meet the demand for soy beans, the global market price for the crop increases, which in turn affects countries that rely on the crop as a food source. These effects can mean a higher price for soy beans at stores and markets, or an overall lack of availability in importing countries. In both outcomes, the household level is affected by a national-level socioeconomic driver, the increased Chinese demand for Brazilian soy beans. From this one example alone, one can see how socioeconomic drivers influence changes at a national level that then lead to global, regional, communal, and household-level changes. The main concept to take away is that everything is connected, and that our roles and choices as humans are major driving forces that impact our world in numerous ways.
See also
Accidental injury
Ecophysiology
Envirome
Environmental disease
Environmental health
Epidemiology
Epidemiology of cancer
Exposure science
Heritability
Hygiene hypothesis
NIEHS
Occupational toxicology
Pollution
Public health
Quantitative genetics
Toxicology
References
External links
Environmental factor, NIEHS
EHP
Diseases and disorders
Environmental health
Biomechanics | Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
Today computational mechanics goes far beyond pure mechanics, and involves other physical actions: chemistry, heat and mass transfer, electric and magnetic stimuli and many others.
Etymology
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
Subfields
Biofluid mechanics
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of a red blood cell, the Fåhræus–Lindqvist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in single file. In this case, the inverse Fåhræus–Lindqvist effect occurs and the wall shear stress increases.
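As a small worked example of the Newtonian, continuum assumptions described above, the sketch below evaluates the classical Poiseuille-flow result for wall shear stress in a straight tube, τ_w = 4μQ/(πR³). The parameter values are illustrative, not measured data.

```python
import math

def wall_shear_stress(mu, Q, R):
    """Wall shear stress (Pa) for Poiseuille flow: viscosity mu (Pa*s),
    volumetric flow rate Q (m^3/s), and vessel radius R (m)."""
    return 4.0 * mu * Q / (math.pi * R**3)

mu = 3.5e-3        # approximate whole-blood viscosity, Pa*s (illustrative)
Q = 5.0e-6 / 60.0  # ~5 mL/min expressed in m^3/s
R = 2.0e-3         # 2 mm vessel radius

print(f"tau_w = {wall_shear_stress(mu, Q, R):.3f} Pa")
```

Note that this formula is exactly the regime where the continuum Newtonian assumption holds; in vessels only a few red-cell diameters wide, the Fåhræus–Lindqvist effects described above make it unreliable.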
An example of a gaseous biofluids problem is that of human respiration. Recently, respiratory systems in insects have been studied as bioinspiration for the design of improved microfluidic devices.
Biotribology
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
Comparative biomechanics
Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
Computational biomechanics
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics, and several projects have even adopted an open-source philosophy (e.g., BioSpine and SOniCS, as well as the SOFA and FEniCS frameworks and FEBio).
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
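To give a flavor of the finite element method mentioned above, here is a minimal sketch of the simplest possible case: a one-dimensional elastic bar fixed at one end and loaded axially at the other, discretized into linear two-node elements. All values and names are illustrative, and real biomechanical FE models are of course three-dimensional and nonlinear.

```python
import numpy as np

E, A, L, F = 1.0e9, 1.0e-4, 0.1, 100.0  # modulus (Pa), area (m^2), length (m), tip load (N)
n_el = 4                                 # number of elements
h = L / n_el                             # element length
k = E * A / h                            # stiffness of one element

# Assemble the global stiffness matrix from 2x2 element matrices.
K = np.zeros((n_el + 1, n_el + 1))
ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):
    K[e:e + 2, e:e + 2] += ke

f = np.zeros(n_el + 1)
f[-1] = F                                # point load at the free end

# Fixed boundary condition at node 0: eliminate its row and column, then solve.
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(u)  # nodal displacements; matches the exact solution u(x) = F*x/(E*A)
```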
Continuum biomechanics
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
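A quick numerical illustration of why large deformations require finite strain theory: for a uniaxial stretch ratio λ, the small-strain measure λ − 1 and the Green–Lagrange strain ½(λ² − 1) nearly coincide for small stretches but diverge at the stretches typical of soft tissues. A minimal sketch, with illustrative stretch values:

```python
# Compare the engineering (small) strain with the Green-Lagrange strain
# for a few uniaxial stretch ratios.
for lam in (1.01, 1.1, 1.5):
    eps_small = lam - 1.0              # engineering (small) strain
    E_green = 0.5 * (lam**2 - 1.0)     # Green-Lagrange strain
    print(f"stretch {lam}: small = {eps_small:.4f}, Green-Lagrange = {E_green:.4f}")
```

At a stretch of 1.01 the two measures agree to within half a percent, while at a stretch of 1.5 (not unusual for skin or tendon) they differ by 25%, which is why soft-tissue analysis relies on finite strain theory.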
Neuromechanics
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
Plant biomechanics
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
Sports biomechanics
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand the movements of human bodies and of sports implements such as cricket bats, hockey sticks, and javelins. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
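As a concrete example of the digital filtering mentioned above, the sketch below applies a zero-lag low-pass Butterworth filter of the kind routinely used to smooth motion-capture marker trajectories. The sampling rate, cutoff frequency, and test signal are illustrative choices, not data from any study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                              # sampling rate (Hz), typical for motion capture
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 2 * t)      # a 2 Hz "movement" component
noisy = signal + 0.1 * np.random.default_rng(0).normal(size=t.size)

# 4th-order low-pass Butterworth, 10 Hz cutoff (normalized to Nyquist).
b, a = butter(N=4, Wn=10.0 / (fs / 2))
smoothed = filtfilt(b, a, noisy)        # forward-backward pass => zero phase lag

print(float(np.max(np.abs(smoothed - signal))))  # residual error after filtering
```

The forward-backward `filtfilt` pass matters in biomechanics because an ordinary one-pass filter would shift the timing of events such as heel strike.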
Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding biomechanics relating to sports skills has the greatest implications on sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best.
Vascular biomechanics
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component that maintains pressure and allows for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine.
Vascular tissues are inhomogeneous with a strongly nonlinear behaviour. Generally, this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. It is therefore necessary to study wall mechanics and hemodynamics together with their interaction.
It is also necessary to note that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
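One common way to capture this strongly nonlinear behaviour in one dimension is a Fung-type exponential stress–strain law, σ = a(e^(bε) − 1). The sketch below evaluates it with illustrative coefficients that are not fitted to any particular vessel; the function name and values are invented for the example.

```python
import math

def fung_stress(eps, a=10.0e3, b=20.0):
    """Stress (Pa) for strain eps under a toy 1D Fung-type exponential model."""
    return a * (math.exp(b * eps) - 1.0)

# Stress rises much faster than linearly with strain: the hallmark of
# soft vascular tissue stiffening as collagen fibers are recruited.
for eps in (0.02, 0.05, 0.10):
    print(f"strain {eps:.2f}: stress = {fung_stress(eps):.0f} Pa")
```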
Immunomechanics
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed at physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
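As a minimal sketch of how an optical-tweezers measurement is reduced to a force on a cell: near the trap centre, the trap behaves as a Hookean spring, F = κx, where κ is the calibrated trap stiffness and x the bead displacement. The numbers below are illustrative, not measured values.

```python
# Convert a measured bead displacement to a force via the trap stiffness.
trap_stiffness = 0.05e-12 / 1e-9  # 0.05 pN/nm expressed in N/m (illustrative)
displacement = 120e-9             # bead displacement from trap centre (m)

force = trap_stiffness * displacement
print(f"force on cell = {force * 1e12:.1f} pN")  # -> 6.0 pN
```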
Other applied subfields of biomechanics include
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
History
Antiquity
Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
Renaissance
The next major biomechanic would not appear until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanical context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work On the Function of the Parts was challenged by Andreas Vesalius, then 29 years old, who published his own work, On the Structure of the Human Body. In it, Vesalius corrected many errors made by Galen, corrections that would not be globally accepted for many centuries. With the death of Copernicus that same year came a new desire to understand and learn about the world and how it works. On his deathbed, Copernicus had published his work On the Revolutions of the Heavenly Spheres, which revolutionized not only science and physics but also the development of mechanics and, later, biomechanics.
Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo made many biomechanical observations. He was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight: the bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. He noted that animals' masses increase disproportionately to their size, and that their bones must consequently also increase disproportionately in girth, adapting to load-bearing rather than mere size. He also observed that marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
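Galileo's argument can be made quantitative in a few lines: if linear size scales by a factor s, weight scales as s³, so holding bone stress (load per cross-sectional area) constant forces the area to scale as s³ and the bone radius as s^1.5, i.e. disproportionately to length. A minimal sketch with illustrative scale factors:

```python
# Geometric scaling: weight ~ s^3, so constant bone stress requires
# cross-sectional area ~ s^3, hence bone radius ~ s^1.5.
for s in (1.0, 2.0, 4.0):
    weight_factor = s**3
    radius_factor = s**1.5   # needed so area ~ r^2 ~ s^3
    print(f"size x{s:.0f}: weight x{weight_factor:.0f}, bone radius x{radius_factor:.2f}")
```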
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
Industrial era
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies.
It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is far too vast now to attribute one thing to one person. However, the field continues to grow every year and continues to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
Applications
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of it: the study of the performance and function of biomaterials used for orthopedic implants. It plays a vital role in improving the design and producing successful biomaterials for medical and clinical purposes. One such example is in tissue-engineered cartilage. The dynamic loading of joints, considered as impact, is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
See also
Biomechatronics
Biomedical engineering
Cardiovascular System Dynamics Society
Evolutionary physiology
Forensic biomechanics
International Society of Biomechanics
List of biofluid mechanics research groups
Mechanics of human sexuality
OpenSim (simulation toolkit)
Physical oncology
References
Further reading
External links
Biomechanics and Movement Science Listserver (Biomch-L)
Biomechanics Links
A Genealogy of Biomechanics
Motor control
Earth science | Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
Geology
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time.
Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
Earth's interior
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle, which is heated by the radioactive decay of heavy elements. Although mostly solid rock, the mantle behaves over geological time as a very viscous fluid in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere are returned to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
Atmospheric science
Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change.
The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
Earth's magnetic field
Hydrology
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
Ecology
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
Physical geography
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
Methodology
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit locations of interest, such as Antarctica or hot spot island chains, to study Earth phenomena.
A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history.
Earth's spheres
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding respectively to rocks, water, air and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere.
The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). Major subdivisions in this field of study include edaphology and pedology.
Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being its only planet teeming with life.
Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involves all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry."
Glaciology covers the icy parts of the Earth (or cryosphere).
Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics.
Earth science breakup
Atmosphere
Atmospheric chemistry
Geography
Climatology
Meteorology
Hydrometeorology
Paleoclimatology
Biosphere
Biogeochemistry
Biogeography
Ecology
Landscape ecology
Geoarchaeology
Geomicrobiology
Paleontology
Palynology
Micropaleontology
Hydrosphere
Hydrology
Hydrogeology
Limnology (freshwater science)
Oceanography (marine science)
Chemical oceanography
Physical oceanography
Biological oceanography (marine biology)
Geological oceanography (marine geology)
Paleoceanography
Lithosphere (geosphere)
Geology
Economic geology
Engineering geology
Environmental geology
Forensic geology
Historical geology
Quaternary geology
Planetary geology and planetary geography
Sedimentology
Stratigraphy
Structural geology
Geography
Human geography
Physical geography
Geochemistry
Geomorphology
Geophysics
Geochronology
Geodynamics (see also Tectonics)
Geomagnetism
Gravimetry (also part of Geodesy)
Seismology
Glaciology
Hydrogeology
Mineralogy
Crystallography
Gemology
Petrology
Petrophysics
Speleology
Volcanology
Pedosphere
Geography
Soil science
Edaphology
Pedology
Systems
Earth system science
Environmental science
Geography
Human geography
Physical geography
Gaia hypothesis
Systems ecology
Systems geology
Others
Geography
Cartography
Geoinformatics (GIScience)
Geostatistics
Geodesy and Surveying
Remote Sensing
Hydrography
Nanogeoscience
See also
American Geosciences Institute
Earth sciences graphics software
Four traditions of geography
Glossary of geology terms
List of Earth scientists
List of geoscience organizations
List of unsolved problems in geoscience
Making North America
National Association of Geoscience Teachers
Solid-earth science
Science tourism
Structure of the Earth
References
Sources
Further reading
Allaby M., 2008. Dictionary of Earth Sciences, Oxford University Press,
Korvin G., 1998. Fractal Models in the Earth Sciences, Elsevier,
Tarbuck E. J., Lutgens F. K., and Tasa D., 2002. Earth Science, Prentice Hall,
External links
Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center.
Geoethics in Planetary and Space Exploration.
Geology Buzz: Earth Science
Planetary science
Science-related lists
Bacillus subtilis | Bacillus subtilis, known also as the hay bacillus or grass bacillus, is a gram-positive, catalase-positive bacterium, found in soil and the gastrointestinal tract of ruminants, humans and marine sponges. As a member of the genus Bacillus, B. subtilis is rod-shaped, and can form a tough, protective endospore, allowing it to tolerate extreme environmental conditions. B. subtilis has historically been classified as an obligate aerobe, though evidence exists that it is a facultative anaerobe. B. subtilis is considered the best studied Gram-positive bacterium and a model organism to study bacterial chromosome replication and cell differentiation. It is one of the bacterial champions in secreted enzyme production and used on an industrial scale by biotechnology companies.
Description
Bacillus subtilis is a Gram-positive bacterium, rod-shaped and catalase-positive. It was originally named Vibrio subtilis by Christian Gottfried Ehrenberg, and renamed Bacillus subtilis by Ferdinand Cohn in 1872 (subtilis being the Latin for "fine, thin, slender"). B. subtilis cells are typically rod-shaped, and are about 4–10 micrometers (μm) long and 0.25–1.0 μm in diameter, with a cell volume of about 4.6 fL at stationary phase.
As with other members of the genus Bacillus, it can form an endospore, to survive extreme environmental conditions of temperature and desiccation. B. subtilis is a facultative anaerobe and had been considered as an obligate aerobe until 1998. B. subtilis is heavily flagellated, which gives it the ability to move quickly in liquids.
B. subtilis has proven highly amenable to genetic manipulation, and has become widely adopted as a model organism for laboratory studies, especially of sporulation, which is a simplified example of cellular differentiation. In terms of popularity as a laboratory model organism, B. subtilis is often considered as the Gram-positive equivalent of Escherichia coli, an extensively studied Gram-negative bacterium.
Characteristics
Colony, morphological, physiological, and biochemical characteristics of Bacillus subtilis are shown in the Table below.
Note: + = Positive, – = Negative
Habitat
This species is commonly found in the upper layers of the soil and B. subtilis is thought to be a normal gut commensal in humans. A 2009 study compared the density of spores found in soil (about 10⁶ spores per gram) to that found in human feces (about 10⁴ spores per gram). The number of spores found in the human gut was too high to be attributed solely to consumption through food contamination. In some bee habitats, B. subtilis appears in the gut flora of honey bees. B. subtilis can also be found in marine environments.
There is evidence that B. subtilis is saprophytic in nature. Studies have shown that the bacterium exhibits vegetative growth in soil rich in organic matter, and that spores were formed when nutrients were depleted. Additionally, B. subtilis has been shown to form biofilms on plant roots, which might explain why it is commonly found in gut microbiomes. Perhaps animals eating plants with B. subtilis biofilms can foster growth of the bacterium in their gastrointestinal tract. It has been shown that the entire lifecycle of B. subtilis can be completed in the gastrointestinal tract, which provides credence to the idea that the bacterium enters the gut via plant consumption and stays present as a result of its ability to grow in the gut.
Reproduction
Bacillus subtilis can divide symmetrically to make two daughter cells (binary fission), or asymmetrically, producing a single endospore that can remain viable for decades and is resistant to unfavourable environmental conditions such as drought, salinity, extreme pH, radiation, and solvents. The endospore is formed at times of nutritional stress and through the use of hydrolysis, allowing the organism to persist in the environment until conditions become favourable. Prior to the process of sporulation the cells might become motile by producing flagella, take up DNA from the environment, or produce antibiotics. These responses are viewed as attempts to seek out nutrients by seeking a more favourable environment, enabling the cell to make use of new beneficial genetic material or simply by killing off competition.
Under stressful conditions, such as nutrient deprivation, B. subtilis undergoes the process of sporulation. This process has been very well studied and has served as a model organism for studying sporulation.
Sporulation
Once B. subtilis commits to sporulation, the sigma factor sigma F is activated. This factor promotes sporulation. A sporulation septum is formed and a chromosome is slowly moved into the forespore. When a third of one chromosome copy is in the forespore and the remaining two thirds is in the mother cell, the chromosome fragment in the forespore contains the locus for sigma F, which begins to be expressed in the forespore. In order to prevent sigma F expression in the mother cell, an anti-sigma factor, encoded by spoIIAB, is expressed there. Any residual anti-sigma factor in the forespore (which would otherwise interfere with sporulation) is inhibited by an anti-anti-sigma factor, encoded by spoIIAA. SpoIIAA is located near the locus for the sigma factor, so it is consistently expressed in the forespore. Since the spoIIAB locus is not located near the sigma F and spoIIAA loci, it is expressed only in the mother cell and therefore represses sporulation in that cell, allowing sporulation to continue in the forespore. Residual spoIIAA in the mother cell represses spoIIAB, but spoIIAB is constantly replaced, so it continues to inhibit sporulation there. When the full chromosome localizes to the forespore, spoIIAB can repress sigma F. Therefore, the genetic asymmetry of the B. subtilis chromosome and the expression of sigma F, spoIIAB and spoIIAA dictate spore formation in B. subtilis.
Chromosomal replication
Bacillus subtilis is a model organism used to study bacterial chromosome replication. Replication of the single circular chromosome initiates at a single locus, the origin (oriC). Replication proceeds bidirectionally and two replication forks progress in clockwise and counterclockwise directions along the chromosome. Chromosome replication is completed when the forks reach the terminus region, which is positioned opposite to the origin on the chromosome map. The terminus region contains several short DNA sequences (Ter sites) that promote replication arrest. Specific proteins mediate all the steps in DNA replication. Comparison between the proteins involved in chromosomal DNA replication in B. subtilis and in Escherichia coli reveals similarities and differences. Although the basic components promoting initiation, elongation, and termination of replication are well-conserved, some important differences can be found (such as one bacterium missing proteins essential in the other). These differences underline the diversity in the mechanisms and strategies that various bacterial species have adopted to carry out the duplication of their genomes.
Genome
Bacillus subtilis has about 4,100 genes. Of these, only 192 were shown to be indispensable; another 79 were predicted to be essential, as well. A vast majority of essential genes were categorized in relatively few domains of cell metabolism, with about half involved in information processing, one-fifth involved in the synthesis of cell envelope and the determination of cell shape and division, and one-tenth related to cell energetics.
The complete genome sequence of B. subtilis sub-strain QB928 has 4,146,839 DNA base pairs and 4,292 genes. The QB928 strain is widely used in genetic studies due to the presence of various markers [aroI(aroK)906 purE1 dal(alrA)1 trpC2].
Several noncoding RNAs have been characterized in the B. subtilis genome in 2009, including Bsr RNAs.
Microarray-based comparative genomic analyses have revealed that B. subtilis members show considerable genomic diversity.
FsrA is a small RNA found in Bacillus subtilis. It is an effector of the iron sparing response, and acts to down-regulate iron-containing proteins in times of poor iron bioavailability.
Bacillus subtilis strain WS1A, a promising fish probiotic, possesses antimicrobial activity against Aeromonas veronii and suppressed motile Aeromonas septicemia in Labeo rohita. The de novo assembly resulted in an estimated chromosome size of 4,148,460 bp, with 4,288 open reading frames. The B. subtilis strain WS1A genome contains many potential genes, such as those encoding proteins involved in the biosynthesis of riboflavin, vitamin B6, and amino acids (ilvD) and in carbon utilization (pta).
Transformation
Natural bacterial transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. In B. subtilis the length of transferred DNA is greater than 1,271 kb (more than 1 million bases). The transferred DNA is likely double-stranded DNA and is often more than a third of the total chromosome length of 4,215 kb. It appears that about 7–9% of the recipient cells take up an entire chromosome.
In order for a recipient bacterium to bind, take up exogenous DNA from another bacterium of the same species and recombine it into its chromosome, it must enter a special physiological state called competence.
Competence in B. subtilis is induced toward the end of logarithmic growth, especially under conditions of amino-acid limitation. Under these stressful conditions of semistarvation, cells typically have just one copy of their chromosome and likely have increased DNA damage. To test whether transformation is an adaptive function for B. subtilis to repair its DNA damage, experiments were conducted using UV light as the damaging agent. These experiments led to the conclusion that competence, with uptake of DNA, is specifically induced by DNA-damaging conditions, and that transformation functions as a process for recombinational repair of DNA damage.
While the natural competent state is common within laboratory B. subtilis and field isolates, some industrially relevant strains, e.g. B. subtilis (natto), are reluctant to take up DNA due to the presence of restriction modification systems that degrade exogenous DNA. B. subtilis (natto) mutants, which are defective in a type I restriction modification system endonuclease, are able to act as recipients of conjugative plasmids in mating experiments, paving the way for further genetic engineering of this particular B. subtilis strain.
By adopting green chemistry, using less hazardous materials while saving cost, researchers have been mimicking nature's methods of synthesizing chemicals useful to the food and drug industries by "piggybacking" molecules on short strands of DNA before the strands are zipped together through complementary base pairing. Each strand carries a particular molecule of interest; when the two corresponding strands pair and hold together like a zipper, the carried molecules are brought into proximity and undergo a specific chemical reaction with one another in a controlled, isolated manner. By applying this method to bacteria that naturally replicate in a multi-step fashion, researchers can simultaneously let the added molecules interact with enzymes and other molecules used for a secondary reaction, treating the system like a capsule, similar to how the bacterium performs its own DNA replication.
Uses
20th century
Cultures of B. subtilis were popular worldwide, before the introduction of antibiotics, as an immunostimulatory agent to aid treatment of gastrointestinal and urinary tract diseases. It was used throughout the 1950s as an alternative medicine; upon digestion it was found to significantly stimulate broad-spectrum immune activity, including activation of secretion of specific antibodies IgM, IgG and IgA and release of CpG dinucleotides inducing interferon IFN-α/IFNγ producing activity of leukocytes and cytokines important in the development of cytotoxicity towards tumor cells. It was marketed throughout America and Europe from 1946 as an immunostimulatory aid in the treatment of gut and urinary tract diseases such as rotavirus infection and shigellosis. In 1966, the U.S. Army dumped Bacillus subtilis onto the grates of New York City subway stations for five days in order to observe people's reactions when coated by a strange dust. Due to its ability to survive, it is thought to still be present there.
The antibiotic bacitracin was first isolated in 1945 from a variety of Bacillus licheniformis named "Tracy I", then considered part of the B. subtilis species. It is still commercially manufactured by growing the variety in a container of liquid growth medium. Over time, the bacterium synthesizes bacitracin and secretes the antibiotic into the medium. The bacitracin is then extracted from the medium using chemical processes.
Since the 1960s B. subtilis has had a history as a test species in spaceflight experimentation. Its endospores can survive up to 6 years in space if coated by dust particles protecting it from solar UV rays. It has been used as an extremophile survival indicator in outer space such as Exobiology Radiation Assembly, EXOSTACK, and EXPOSE orbital missions.
Wild-type natural isolates of B. subtilis are difficult to work with compared to laboratory strains that have undergone domestication processes of mutagenesis and selection. These strains often show improved transformation (uptake and integration of environmental DNA) and growth, along with the loss of abilities needed "in the wild". While dozens of different strains fitting this description exist, the strain designated '168' is the most widely used. Strain 168 is a tryptophan auxotroph isolated after X-ray mutagenesis of the B. subtilis Marburg strain and is widely used in research due to its high transformation efficiency.
Bacillus globigii, a closely related but phylogenetically distinct species now known as Bacillus atrophaeus was used as a biowarfare simulant during Project SHAD (aka Project 112). Subsequent genomic analysis showed that the strains used in those studies were products of deliberate enrichment for strains that exhibited abnormally high rates of sporulation.
A strain of B. subtilis formerly known as Bacillus natto is used in the commercial production of the Japanese food nattō, as well as the similar Korean food cheonggukjang.
21st century
As a model organism, B. subtilis is commonly used in laboratory studies directed at discovering the fundamental properties and characteristics of Gram-positive spore-forming bacteria. In particular, the basic principles and mechanisms underlying formation of the durable endospore have been deduced from studies of spore formation in B. subtilis.
Its surface-binding properties play a role in safe radionuclide waste [e.g. thorium (IV) and plutonium (IV)] disposal.
Due to its excellent fermentation properties, with high product yields (20 to 25 grams per litre), it is used to produce various enzymes, such as amylase and proteases.
B. subtilis is used as a soil inoculant in horticulture and agriculture.
It may provide some benefit to saffron growers by speeding corm growth and increasing stigma biomass yield.
It is used as an "indicator organism" during gas sterilization procedures, to ensure a sterilization cycle has completed successfully. Specifically B. subtilis endospores are used to verify that a cycle has reached spore-destroying conditions.
B. subtilis has been found to act as a useful bioproduct fungicide that prevents the growth of Monilinia vaccinii-corymbosi, a.k.a. the mummy berry fungus, without interfering with pollination or fruit qualities.
Both metabolically active and non-metabolically active B. subtilis cells have been shown to reduce gold (III) to gold (I) and gold (0) when oxygen is present. This biotic reduction plays a role in gold cycling in geological systems and could potentially be used to recover solid gold from said systems.
Novel and artificial substrains
Novel strains of B. subtilis that could use 4-fluorotryptophan (4FTrp) but not canonical tryptophan (Trp) for propagation were isolated. As Trp is only coded by a single codon, there is evidence that Trp can be displaced by 4FTrp in the genetic code. The experiments showed that the canonical genetic code can be mutable.
Recombinant strains pBE2C1 and pBE2C1AB were used in production of polyhydroxyalkanoates (PHA), and malt waste can be used as their carbon source for lower-cost PHA production.
It is used to produce hyaluronic acid, which is used in the joint-care sector in healthcare and cosmetics.
Monsanto has isolated a gene from B. subtilis that expresses cold shock protein B and spliced it into their drought-tolerant corn hybrid MON 87460, which was approved for sale in the US in November 2011.
A new strain has been modified to convert nectar into honey by secreting enzymes.
Safety
In other animals
Bacillus subtilis was reviewed by the US FDA Center for Veterinary Medicine and found to present no safety concerns when used in direct-fed microbial products, so the Association of American Feed Control Officials has listed it approved for use as an animal feed ingredient under Section 36.14 "Direct-fed Microorganisms".
The Canadian Food Inspection Agency Animal Health and Production Feed Section has classified Bacillus culture dehydrated approved feed ingredients as a silage additive under Schedule IV-Part 2-Class 8.6 and assigned the International Feed Ingredient number IFN 8-19-119.
On the other hand, several feed additives containing viable spores of B. subtilis have been positively evaluated by the European Food Safety Authority, regarding their safe use for weight gaining in animal production.
In humans
Bacillus subtilis spores can survive the extreme heat generated during cooking. Some B. subtilis strains are responsible for causing ropiness or rope spoilage – a sticky, stringy consistency caused by bacterial production of long-chain polysaccharides – in spoiled bread dough and baked goods. For a long time, bread ropiness was associated uniquely with the B. subtilis species by biochemical tests. Molecular assays (randomly amplified polymorphic DNA PCR assay, denaturing gradient gel electrophoresis analysis, and sequencing of the V3 region of 16S ribosomal DNA) revealed greater Bacillus species variety in ropy breads, all of which seem to have positive amylase activity and high heat resistance.
B. subtilis CU1 (2 × 10⁹ spores per day) was evaluated in a 16-week study of healthy subjects (10 days of probiotic administration followed by an 18-day wash-out period each month, repeated for a total of 4 months). B. subtilis CU1 was found to be safe and well tolerated in the subjects without any side effects.
Bacillus subtilis and substances derived from it have been evaluated by different authoritative bodies for their safe and beneficial use in food. In the United States, an opinion letter issued in the early 1960s by the Food and Drug Administration (FDA) designated some substances derived from microorganisms as generally recognized as safe (GRAS), including carbohydrase and protease enzymes from B. subtilis. The opinions were predicated on the use of nonpathogenic and nontoxicogenic strains of the respective organisms and on the use of current good manufacturing practices. The FDA stated that the enzymes derived from the B. subtilis strain were in common use in food prior to January 1, 1958, and that nontoxigenic and nonpathogenic strains of B. subtilis are widely available and have been safely used in a variety of food applications. This includes consumption of Japanese fermented soybeans, in the form of natto, which is commonly consumed in Japan and contains as many as 10⁸ viable cells per gram. The fermented beans are recognized for their contribution to a healthy gut flora and vitamin K2 intake; during this long history of widespread use, natto has not been implicated in adverse events potentially attributable to the presence of B. subtilis. The natto product and the B. subtilis natto as its principal component are FOSHU (Foods for Specified Health Use) approved by the Japanese Ministry of Health, Labour, and Welfare as effective for preservation of health.
Bacillus subtilis has been granted "Qualified Presumption of Safety" status by the European Food Safety Authority.
See also
Adenylosuccinate lyase deficiency
Extremophile
Guthrie test
YlbH leader
References
External links
SubtiWiki "up-to-date information for all genes of Bacillus subtilis"
Bacillus subtilis Final Risk Assessment on EPA.gov. Archived from the original on 2015-09-09.
Bacillus subtilis genome browser
Type strain of Bacillus subtilis at BacDive - the Bacterial Diversity Metadatabase
subtilis
Bacteria described in 1872
Extremophiles
Food microbiology
Function (biology) | In evolutionary biology, function is the reason some object or process occurred in a system that evolved through natural selection. That reason is typically that it achieves some result, such as that chlorophyll helps to capture the energy of sunlight in photosynthesis. Hence, the organism that contains it is more likely to survive and reproduce, in other words the function increases the organism's fitness. A characteristic that assists in evolution is called an adaptation; other characteristics may be non-functional spandrels, though these in turn may later be co-opted by evolution to serve new functions.
In biology, function has been defined in many ways. In physiology, it is simply what an organ, tissue, cell or molecule does.
In the philosophy of biology, talk of function inevitably suggests some kind of teleological purpose, even though natural selection operates without any goal for the future. All the same, biologists often use teleological language as a shorthand for function. In contemporary philosophy of biology, there are three major accounts of function in the biological world: theories of causal role, selected effect, and goal contribution.
In pre-evolutionary biology
In physiology, a function is an activity or process carried out by a system in an organism, such as sensation or locomotion in an animal. This concept of function as opposed to form (respectively Aristotle's ergon and morphê) was central in biological explanations in classical antiquity. In more modern times it formed part of the 1830 Cuvier–Geoffroy debate, where Cuvier argued that an animal's structure was driven by its functional needs, while Geoffroy proposed that each animal's structure was modified from a common plan.
In evolutionary biology
Function can be defined in a variety of ways, including as adaptation, as contributing to evolutionary fitness, in animal behaviour, and, as discussed below, also as some kind of causal role or goal in the philosophy of biology.
Adaptation
A functional characteristic is known in evolutionary biology as an adaptation, and the research strategy for investigating whether a character is adaptive is known as adaptationism. Although assuming that a character is functional may be helpful in research, some characteristics of organisms are non-functional, formed as accidental spandrels, side effects of neighbouring functional systems.
Natural selection
From the point of view of natural selection, biological functions exist to contribute to fitness, increasing the chance that an organism will survive to reproduce. For example, the function of chlorophyll in a plant is to capture the energy of sunlight for photosynthesis, which contributes to evolutionary success.
In ethology
The ethologist Niko Tinbergen named four questions, based on Aristotle's Four Causes, that a biologist could ask to help explain a behaviour, though they have been generalised to a wider scope:
Mechanism: What mechanisms cause the animal to behave as it does?
Ontogeny: What developmental mechanisms in the animal's embryology (and its youth, if it learns) created the structures that cause the behaviour?
Function/adaptation: What is the evolutionary function of the behaviour?
Evolution: What is the phylogeny of the behaviour, or in other words, when did it first appear in the evolutionary history of the animal?
The questions are interdependent, so that, for example, adaptive function is constrained by embryonic development.
In philosophy of biology
Function is not the same as purpose in the teleological sense, that is, possessing conscious mental intention to achieve a goal. In the philosophy of biology, evolution is a blind process which has no 'goal' for the future. For example, a tree does not grow flowers for any purpose, but does so simply because it has evolved to do so. To say 'a tree grows flowers to attract pollinators' would be incorrect if the 'to' implies purpose. A function describes what something does, not what its 'purpose' is. However, teleological language is often used by biologists as a shorthand way of describing function, even though its applicability is disputed.
In contemporary philosophy of biology, there are three major accounts of function in the biological world: theories of causal role, selected effect, and goal contribution.
Causal role
Causal role theories of biological function trace their origin back to a 1975 paper by Robert Cummins. Cummins defines the functional role of a component of a system to be the causal effect that the component has on the larger containing system. For example, the heart has the actual causal role of pumping blood in the circulatory system; therefore, the function of the heart is to pump blood. This account has been objected to on the grounds that it is too loose a notion of function. For example, the heart also has the causal effect of producing a sound, but we would not consider producing sound to be the function of the heart.
Selected effect
Selected effect theories of biological functions hold that the function of a biological trait is the function that the trait was selected for, as argued by Ruth Millikan. For example, the function of the heart is pumping blood, for that is the action for which the heart was selected by evolution. In other words, pumping blood is the reason that the heart has evolved. This account has been criticized for being too restrictive a notion of function. It is not always clear which behavior has contributed to the selection of a trait, as biological traits can have functions even if they have not been selected for. Beneficial mutations are initially not selected for, but they do have functions.
Goal contribution
Goal contribution theories seek to carve a middle ground between causal role and selected effect theories, as with Boorse (1977). Boorse defines the function of a biological trait to be the statistically typical causal contribution of that trait to survival and reproduction. So for example, zebra stripes were sometimes said to work by confusing predators. This role of zebra stripes would contribute to the survival and reproduction of zebras, and that is why confusing predators would be said to be the function of zebra stripes. Under this account, whether or not a particular causal role of a trait is its function depends on whether that causal role contributes to the survival and reproduction of that organism.
See also
Preadaptation
References
Evolutionary biology terminology
Population ecology | Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, such as birth and death rates, and by immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
History
In the 1940s, ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge") refers to roughly the same field of study as concepts such as life cycles and the behaviour of individual organisms as adaptations to their environment. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), so that there were four subdivisions of ecology.
Terminology
A population is defined as a group of interacting organisms of the same species. Populations are often quantified by their demographic structure. The total number of individuals in a population is defined as the population size, and how dense these individuals are is defined as the population density. A population also has a geographic range, whose limits are set by the conditions a species can tolerate (such as temperature).
Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Birth, death, emigration, and immigration rates all play a significant role in the growth rate. The maximum per capita growth rate for a population is known as the intrinsic rate of increase.
In a population, carrying capacity is known as the maximum population size of the species that the environment can sustain, which is determined by resources available. In many classic population models, r is represented as the intrinsic growth rate, where K is the carrying capacity, and N0 is the initial population size.
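These symbols come together in the standard logistic (Verhulst) growth equation, which underlies many of the classic models referred to above:
dN/dt = r N (1 − N/K)
Starting from the initial population size N0, its solution is N(t) = K / (1 + ((K − N0)/N0) e^(−rt)), so the population approaches the carrying capacity K as t grows.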
Population dynamics
The development of population ecology owes much to the mathematical models known as population dynamics, which were originally formulae derived from demography at the end of the 18th and beginning of 19th century.
The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model.
A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.
Exponential vs. logistic growth
When describing growth models, there are two main types of models that are most commonly used: exponential and logistic growth.
When the per capita rate of increase takes the same positive value regardless of population size, the graph shows exponential growth. Exponential growth assumes unlimited resources and no predation. An example of exponential population growth is that of the monk parakeets in the United States. Originally from South America, monk parakeets were either released or escaped from people who owned them. These birds experienced exponential growth from 1975 to 1994, growing to about 55 times their 1975 population size. This growth is likely due to reproduction within their population, as opposed to the addition of more birds from South America (Van Bael & Prudet-Jones 1996).
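As a worked illustration under an idealised continuous exponential model (the observed data need not fit it exactly): a 55-fold increase over the 19 years from 1975 to 1994 implies e^(19r) = 55, so the per capita rate was roughly r = ln(55)/19 ≈ 0.21 per year, corresponding to about 23% growth per year.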
When the per capita rate of increase decreases as the population increases towards the maximum limit, or carrying capacity, the graph shows logistic growth. Environmental and social variables, along with many others, impact the carrying capacity of a population, meaning that it has the ability to change (Schacht 1980).
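The contrast between the two growth modes can be seen in a minimal simulation sketch. The parameter values (r = 0.2 per time step, K = 1000, and a starting size of 10) are arbitrary illustrations, and a discrete-time update is used to approximate the continuous models.

# Exponential vs. logistic growth with illustrative parameters.
r, K = 0.2, 1000.0
n_exp, n_log = 10.0, 10.0  # both populations start at N0 = 10
for t in range(50):
    n_exp += r * n_exp                    # exponential: unlimited resources
    n_log += r * n_log * (1 - n_log / K)  # logistic: growth slows near K
print(round(n_exp), round(n_log))  # exponential explodes; logistic settles near K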
Fisheries and wildlife management
In fisheries and wildlife management, population is affected by three dynamic rate functions.
Natality or birth rate, often recruitment, which means reaching a certain size or reproductive stage. Usually refers to the age a fish can be caught and counted in nets.
Population growth rate, which measures the growth of individuals in size and length. More important in fisheries, where population is often measured in biomass.
Mortality, which includes harvest mortality and natural mortality. Natural mortality includes non-human predation, disease and old age.
If N1 is the number of individuals at time 1 then
N1 = N0 + B − D + I − E
where N0 is the number of individuals at time 0, B is the number of individuals born, D the number that died, I the number that immigrated, and E the number that emigrated between time 0 and time 1.
If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured.
All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish & game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size.
For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium. While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among some scientists, it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries. To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area was open to fishing.
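For the logistic model, often used to illustrate this calculation (actual stock assessments use more elaborate models), the equilibrium surplus available for harvest at population size N is H(N) = r N (1 − N/K). This is largest at N = K/2, which gives the classic result MSY = rK/4: the maximum sustainable yield is obtained by holding the stock at half its carrying capacity.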
r/K selection
An important concept in population ecology is the r/K selection theory. The theory concerns life-history trade-offs: for example, whether an animal produces many offspring or only a few, and whether it invests a great deal of effort or relatively little in each offspring. Selection acting on these trade-offs leads to a distinction between r-selected and K-selected species.
The first variable is r (the intrinsic rate of natural increase in population size, density independent) and the second variable is K (the carrying capacity of a population, density dependent).
It is important to distinguish density-independent factors, which are relevant when estimating the intrinsic rate of increase, from density-dependent factors, which are relevant when estimating the carrying capacity. Carrying capacity only applies to density-dependent populations. Density-dependent factors that influence the carrying capacity include predation, harvest, and genetics, so when estimating the carrying capacity it is important to look at the predation or harvest rates that influence the population (Stewart 2004).
An r-selected species (e.g., many kinds of insects, such as aphids) is one that has high rates of fecundity, low levels of parental investment in the young, and high rates of mortality before individuals reach maturity. Evolution favors productivity in r-selected species.
In contrast, a K-selected species (such as humans) has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Evolution in K-selected species favors efficiency in the conversion of more resources into fewer offspring. K-selected species generally experience stronger competition, where populations generally live near carrying capacity. These species have heavy investment in offspring, resulting in longer lived organisms, and longer period of maturation. Offspring of K-selected species generally have a higher probability of survival, due to heavy parental care and nurturing.
Offspring quality
Offspring fitness is mainly affected by the size and quality of the individual offspring, depending on the species. Factors contributing to the relative fitness of offspring include the resources the parents provide to their young and morphological traits inherited from the parents. The overall success of offspring after initial birth or hatching is measured by the survival of the young, their growth rate, and their eventual reproductive success. Whether the young are raised by their natural parents or by foster parents has been found to have no effect; the offspring simply need the proper resources to survive (Kristi 2010).
A study of egg size and offspring quality in birds found, in summary, that egg size contributes to the overall fitness of the offspring. The study relates directly to the Type I survivorship curve: if offspring are cared for by a parent during the early stages of life, mortality is concentrated later in life. However, if offspring are not cared for by the parents because of an increase in egg quantity, the survivorship curve resembles Type III, in which many offspring die early and the few that survive the early period persist into later life.
Top-down and bottom-up controls
Top-down controls
In some populations, organisms in lower trophic levels are controlled by organisms at the top. This is known as top-down control.
For example, the presence of top carnivores keeps herbivore populations in check. If there were no top carnivores in the ecosystem, herbivore populations would rapidly increase, leading to all plants being eaten. This ecosystem would eventually collapse.
Bottom-up controls
Bottom-up controls, on the other hand, are driven by producers in the ecosystem. If plant populations change, then the population of all species would be impacted.
For example, if plant populations decreased significantly, the herbivore populations would decrease, which would lead to the carnivore population decreasing too. Therefore, if all of the plants disappeared, the ecosystem would collapse. Another example would be two herbivore populations competing for the same plant food; the competition could lead to the eventual removal of one population.
Do all ecosystems have to be either top-down or bottom-up?
An ecosystem does not have to be exclusively top-down or bottom-up. An ecosystem can be under bottom-up control at some times, as in a marine ecosystem, and then experience periods of top-down control due to fishing.
Survivorship curves
Survivorship curves are graphs that show the distribution of survivors in a population according to age. Survivorship curves play an important role in comparing generations, populations, or even different species.
A Type I survivorship curve is characterized by the fact that death occurs mostly in the later years of an organism's life (as in most mammals). In other words, most organisms reach the maximum expected lifespan, and life expectancy and the age of death go hand-in-hand (Demetrius 1978). Typically, Type I survivorship curves characterize K-selected species.
Type II survivorship shows that death at any age is equally probable. This means that the chances of death are not dependent on or affected by the age of that organism.
Type III curves indicate few surviving the younger years, but after a certain age, individuals are much more likely to survive. Type III survivorship typically characterizes r-selected species.
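The three shapes can be generated from simple assumptions about the hazard of death at each age, as in the sketch below. All hazard functions and parameter values are arbitrary illustrations chosen only to produce the characteristic shapes.

import math

# Survivorship l(x): the fraction of a cohort still alive at age x,
# computed from illustrative hazard (instantaneous death rate) functions.
def survivorship(hazard, max_age=100):
    alive, curve = 1.0, []
    for age in range(max_age):
        alive *= math.exp(-hazard(age))  # survive one more year
        curve.append(alive)
    return curve

type_1 = survivorship(lambda x: 0.0005 * x)                # hazard rises with age
type_2 = survivorship(lambda x: 0.03)                      # constant hazard
type_3 = survivorship(lambda x: 0.2 * math.exp(-0.1 * x))  # hazard falls with age
print(type_1[-1], type_2[-1], type_3[-1])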
Metapopulation
Populations are also studied and conceptualized through the "metapopulation" concept. The metapopulation concept was introduced in 1969: "as a population of populations which go extinct locally and recolonize." Metapopulation ecology simplifies the landscape into patches of varying levels of quality. Patches are either occupied or they are not, and they are structured either as sources or as sinks for the migrants moving among them. Source patches are productive sites that generate a seasonal supply of migrants to other patch locations. Sink patches are unproductive sites that only receive migrants. In metapopulation terminology there are emigrants (individuals that leave a patch) and immigrants (individuals that move into a patch). Metapopulation models examine patch dynamics over time to answer questions about spatial and demographic ecology. An important concept in metapopulation ecology is the rescue effect, where small patches of lower quality (i.e., sinks) are maintained by a seasonal influx of new immigrants. Metapopulation structure changes from year to year: some patches are sinks in dry years and become sources when conditions are more favorable. Ecologists utilize a mixture of computer models and field studies to explain metapopulation structure.
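The 1969 formulation is usually written as the Levins model. With p the fraction of patches occupied, c the colonization rate and e the local extinction rate:
dp/dt = c p (1 − p) − e p
At equilibrium p* = 1 − e/c, so the metapopulation persists only while colonization outpaces local extinction (c > e).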
Metapopulation ecology allows ecologists to take into account a wide range of factors when examining a metapopulation, such as genetics and the bottleneck effect. Metapopulation data are extremely useful in understanding population dynamics, as most species are not numerous and require specific resources from their habitats. In addition, metapopulation ecology allows for a deeper understanding of the effects of habitat loss and can help to predict the future of a habitat. To elaborate, metapopulation ecology assumes that, before a habitat becomes uninhabitable, the species in it will emigrate or die off. This information is helpful to ecologists in determining what, if anything, can be done to aid a declining habitat. Overall, the information that metapopulation ecology provides is useful to ecologists in many ways (Hanski 1998).
Journals
The first journal publication of the Society of Population Ecology, titled Population Ecology (originally called Researches on Population Ecology), was released in 1952.
Scientific articles on population ecology can also be found in the Journal of Animal Ecology, Oikos and other journals.
See also
Density-dependent inhibition
Ecological overshoot
Irruptive growth
Lists of organisms by population
Overpopulation
Population density
Population distribution
Population dynamics
Population dynamics of fisheries
Population genetics
Population growth
Theoretical ecology
References
Further reading
Bibliography
Applied statistics
Ecology
Biogeochemical cycle
A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere.
For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted into plant-usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can run off the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients.
There are biogeochemical cycles for many other elements, such as for oxygen, hydrogen, phosphorus, calcium, iron, sulfur, mercury and selenium. There are also cycles for molecules, such as water and silica. In addition there are macroscopic cycles such as the rock cycle, and human-induced cycles for synthetic compounds such as for polychlorinated biphenyls (PCBs). In some cycles there are geological reservoirs where substances can remain or be sequestered for long periods of time.
Biogeochemical cycles involve the interaction of biological, geological, and chemical processes. Biological processes include the influence of microorganisms, which are critical drivers of biogeochemical cycling. Microorganisms have the ability to carry out wide ranges of metabolic processes essential for the cycling of nutrients and chemicals throughout global ecosystems. Without microorganisms many of these processes would not occur, with significant impact on the functioning of land and ocean ecosystems and the planet's biogeochemical cycles as a whole. Changes to cycles can impact human health. The cycles are interconnected and play important roles regulating climate, supporting the growth of plants, phytoplankton and other organisms, and maintaining the health of ecosystems generally. Human activities such as burning fossil fuels and using large amounts of fertilizer can disrupt cycles, contributing to climate change, pollution, and other environmental problems.
Overview
Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules — carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur — take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of the continental plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle.
The six aforementioned elements are used by organisms in a variety of ways. Hydrogen and oxygen are found in water and organic molecules, both of which are essential to life. Carbon is found in all organic molecules, whereas nitrogen is an important component of nucleic acids and proteins. Phosphorus is used to make nucleic acids and the phospholipids that comprise biological membranes. Sulfur is critical to the three-dimensional shape of proteins. The cycling of these elements is interconnected. For example, the movement of water is critical for leaching sulfur and phosphorus into rivers which can then flow into oceans. Minerals cycle through the biosphere between the biotic and abiotic components and from one organism to another.
Ecological systems (ecosystems) have many biogeochemical cycles operating as a part of the system, for example, the water cycle, the carbon cycle, the nitrogen cycle, etc. All chemical elements occurring in organisms are part of biogeochemical cycles. In addition to being a part of living organisms, these chemical elements also cycle through abiotic factors of ecosystems such as water (hydrosphere), land (lithosphere), and/or the air (atmosphere).
The living factors of the planet can be referred to collectively as the biosphere. All the nutrients — such as carbon, nitrogen, oxygen, phosphorus, and sulfur — used in ecosystems by living organisms are part of a closed system; therefore, these chemicals are recycled instead of being lost and constantly replenished, as they would be in an open system.
The major parts of the biosphere are connected by the flow of chemical elements and compounds in biogeochemical cycles. In many of these cycles, the biota plays an important role. Matter from the Earth's interior is released by volcanoes. The atmosphere exchanges some compounds and elements rapidly with the biota and oceans. Exchanges of materials between rocks, soils, and the oceans are generally slower by comparison.
The flow of energy in an ecosystem is an open system; the Sun constantly gives the planet energy in the form of light while it is eventually used and lost in the form of heat throughout the trophic levels of a food web. Carbon is used to make carbohydrates, fats, and proteins, the major sources of food energy. These compounds are oxidized to release carbon dioxide, which can be captured by plants to make organic compounds. The chemical reaction is powered by the light energy of sunshine.
Sunlight is required to combine carbon with hydrogen and oxygen into an energy source, but ecosystems in the deep sea, where no sunlight can penetrate, obtain energy from sulfur. Hydrogen sulfide near hydrothermal vents can be utilized by organisms such as the giant tube worm. In the sulfur cycle, sulfur can be forever recycled as a source of energy. Energy can be released through the oxidation and reduction of sulfur compounds (e.g., oxidizing elemental sulfur to sulfite and then to sulfate).
Although the Earth constantly receives energy from the Sun, its chemical composition is essentially fixed, as additional matter is only occasionally added by meteorites. Because this chemical composition is not replenished like energy, all processes that depend on these chemicals must rely on their being recycled. These cycles include both the living biosphere and the nonliving lithosphere, atmosphere, and hydrosphere.
Biogeochemical cycles can be contrasted with geochemical cycles. The latter deal only with crustal and subcrustal reservoirs, even though some processes in the two overlap.
Compartments
Atmosphere
Hydrosphere
The global ocean covers more than 70% of the Earth's surface and is remarkably heterogeneous. Marine productive areas and coastal ecosystems comprise a minor fraction of the ocean in terms of surface area, yet have an enormous impact on global biogeochemical cycles carried out by microbial communities, which represent 90% of the ocean's biomass. Work in recent years has largely focused on cycling of carbon and macronutrients such as nitrogen, phosphorus, and silicate; other important elements such as sulfur and trace elements have been less studied, reflecting associated technical and logistical issues. Increasingly, these marine areas, and the taxa that form their ecosystems, are subject to significant anthropogenic pressure, impacting marine life and recycling of energy and nutrients. A key example is that of cultural eutrophication, where agricultural runoff leads to nitrogen and phosphorus enrichment of coastal ecosystems, greatly increasing productivity and resulting in algal blooms, deoxygenation of the water column and seabed, and increased greenhouse gas emissions, with direct local and global impacts on nitrogen and carbon cycles. However, the runoff of organic matter from the mainland to coastal ecosystems is just one of a series of pressing threats stressing microbial communities due to global change. Climate change has also resulted in changes in the cryosphere, as glaciers and permafrost melt, resulting in intensified marine stratification, while shifts of the redox state in different biomes are rapidly reshaping microbial assemblages at an unprecedented rate.
Global change is, therefore, affecting key processes including primary productivity, CO2 and N2 fixation, organic matter respiration/remineralization, and the sinking and burial deposition of fixed CO2. In addition, oceans are experiencing an acidification process, with a change of ~0.1 pH units between the pre-industrial period and today, affecting carbonate/bicarbonate buffer chemistry. In turn, acidification has been reported to impact planktonic communities, principally through effects on calcifying taxa. There is also evidence for shifts in the production of key intermediary volatile products, some of which have marked greenhouse effects (e.g., N2O and CH4, reviewed by Breitburg in 2018). Increases in global temperature, ocean stratification and deoxygenation are driving as much as 25 to 50% of nitrogen loss from the ocean to the atmosphere in so-called oxygen minimum zones or anoxic marine zones, through microbial processes. Other products that are typically toxic for the marine nekton, including reduced sulfur species such as H2S, have a negative impact on marine resources like fisheries and coastal aquaculture. While global change has accelerated, there has been a parallel increase in awareness of the complexity of marine ecosystems, and especially the fundamental role of microbes as drivers of ecosystem functioning.
Lithosphere
Biosphere
Microorganisms drive much of the biogeochemical cycling in the earth system.
Reservoirs
Chemicals are sometimes held for long periods of time in one place, called a reservoir; coal deposits that store carbon for long periods are one example. When chemicals are held for only short periods of time, they are held in exchange pools. Examples of exchange pools include plants and animals.
Plants and animals utilize carbon to produce carbohydrates, fats, and proteins, which can then be used to build their internal structures or to obtain energy. Plants and animals temporarily use carbon in their systems and then release it back into the air or surrounding medium. Generally, reservoirs are abiotic factors whereas exchange pools are biotic factors. Carbon is held for a relatively short time in plants and animals in comparison to coal deposits. The amount of time that a chemical is held in one place is called its residence time or turnover time (also called the renewal time or exit age).
Box models
Box models are widely used to model biogeochemical systems. Box models are simplified versions of complex systems, reducing them to boxes (or storage reservoirs) for chemical materials, linked by material fluxes (flows). Simple box models have a small number of boxes with properties, such as volume, that do not change with time. The boxes are assumed to behave as if they were mixed homogeneously. These models are often used to derive analytical formulas describing the dynamics and steady-state abundance of the chemical species involved.
The diagram at the right shows a basic one-box model. The reservoir contains the amount of material M under consideration, as defined by chemical, physical or biological properties. The source Q is the flux of material into the reservoir, and the sink S is the flux of material out of the reservoir. The budget is the check and balance of the sources and sinks affecting material turnover in a reservoir. The reservoir is in a steady state if Q = S, that is, if the sources balance the sinks and there is no change over time.
The residence or turnover time is the average time material spends resident in the reservoir. If the reservoir is in a steady state, this is the same as the time it takes to fill or drain the reservoir. Thus, if τ is the turnover time, then τ = M/S. The equation describing the rate of change of content in a reservoir is dM/dt = Q − S.
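A minimal numerical sketch of this one-box model is given below. It assumes, purely for illustration, a constant source Q and a first-order sink S = M/τ, so the content relaxes toward the steady state M* = Qτ; all parameter values are invented.

```python
# Minimal sketch of a one-box reservoir model, dM/dt = Q - S,
# with a first-order sink S = M / tau. All numbers are illustrative.

def one_box(m0, q, tau, dt=0.1, steps=1000):
    """Integrate reservoir content M with constant source Q and sink M/tau."""
    m = m0
    for _ in range(steps):
        sink = m / tau                 # flux out is proportional to content
        m += dt * (q - sink)           # dM/dt = Q - S
    return m

# Steady state is M* = Q * tau; starting far from it, M converges there.
print(one_box(m0=0.0, q=2.0, tau=5.0))   # -> approaches 10.0
```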
When two or more reservoirs are connected, the material can be regarded as cycling between the reservoirs, and there can be predictable patterns to the cyclic flow. More complex multibox models are usually solved using numerical techniques.
The diagram on the left shows a simplified budget of ocean carbon flows. It is composed of three simple interconnected box models, one for the euphotic zone, one for the ocean interior or dark ocean, and one for ocean sediments. In the euphotic zone, net phytoplankton production is about 50 Pg C each year. About 10 Pg is exported to the ocean interior while the other 40 Pg is respired. Organic carbon degradation occurs as particles (marine snow) settle through the ocean interior. Only 2 Pg eventually arrives at the seafloor, while the other 8 Pg is respired in the dark ocean. In sediments, the time scale available for degradation increases by orders of magnitude with the result that 90% of the organic carbon delivered is degraded and only 0.2 Pg C yr−1 is eventually buried and transferred from the biosphere to the geosphere.
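Because all of the flux values in this budget appear in the text, its internal consistency can be verified with a few lines of arithmetic; the sketch below (the variable names are ours) checks that each box's inputs balance its outputs.

```python
# Quick consistency check of the three-box ocean carbon budget described
# above (values in Pg C per year, taken from the text).
euphotic_production = 50.0
export_to_interior = 10.0
euphotic_respiration = 40.0
interior_respiration = 8.0
delivered_to_sediments = 2.0
buried = 0.2

assert euphotic_production == export_to_interior + euphotic_respiration
assert export_to_interior == interior_respiration + delivered_to_sediments
# ~90% of carbon reaching the sediments is degraded; ~0.2 Pg C/yr is buried
assert abs(delivered_to_sediments * 0.1 - buried) < 1e-9
print("budget balances")
```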
The diagram on the right shows a more complex model with many interacting boxes. Reservoir masses here represent carbon stocks, measured in Pg C. Carbon exchange fluxes, measured in Pg C yr−1, occur between the atmosphere and its two major sinks, the land and the ocean. The black numbers and arrows indicate the reservoir mass and exchange fluxes estimated for the year 1750, just before the Industrial Revolution. The red arrows (and associated numbers) indicate the annual flux changes due to anthropogenic activities, averaged over the 2000–2009 time period. They represent how the carbon cycle has changed since 1750. Red numbers in the reservoirs represent the cumulative changes in anthropogenic carbon since the start of the Industrial Period, 1750–2011.
Fast and slow cycles
There are fast and slow biogeochemical cycles. Fast cycles operate in the biosphere and slow cycles operate in rocks. Fast or biological cycles can complete within years, moving substances from the atmosphere to the biosphere and back to the atmosphere. Slow or geological cycles can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
As an example, the fast carbon cycle is illustrated in the diagram below on the left. This cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere. It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow cycle is illustrated in the diagram above on the right. It involves medium to long-term geochemical processes belonging to the rock cycle. The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Deep cycles
The terrestrial subsurface is the largest reservoir of carbon on earth, containing 14–135 Pg of carbon and 2–19% of all biomass. Microorganisms drive organic and inorganic compound transformations in this environment and thereby control biogeochemical cycles. Current knowledge of the microbial ecology of the subsurface is primarily based on 16S ribosomal RNA (rRNA) gene sequences. Recent estimates show that <8% of 16S rRNA sequences in public databases derive from subsurface organisms and only a small fraction of those are represented by genomes or isolates. Thus, there is remarkably little reliable information about microbial metabolism in the subsurface. Further, little is known about how organisms in subsurface ecosystems are metabolically interconnected. Some cultivation-based studies of syntrophic consortia and small-scale metagenomic analyses of natural communities suggest that organisms are linked via metabolic handoffs: the transfer of redox reaction products of one organism to another. However, no complex environments have been dissected completely enough to resolve the metabolic interaction networks that underpin them. This restricts the ability of biogeochemical models to capture key aspects of the carbon and other nutrient cycles. New approaches such as genome-resolved metagenomics, an approach that can yield a comprehensive set of draft and even complete genomes for organisms without the requirement for laboratory isolation have the potential to provide this critical level of understanding of biogeochemical processes.
Some examples
Some of the more well-known biogeochemical cycles include the carbon cycle, the nitrogen cycle, the oxygen cycle, the phosphorus cycle, the sulfur cycle, and the water cycle.
Many biogeochemical cycles are currently being studied for the first time. Climate change and human impacts are drastically changing the speed, intensity, and balance of these relatively unknown cycles, which include:
the mercury cycle, and
the human-caused cycle of PCBs.
Biogeochemical cycles always involve active equilibrium states: a balance in the cycling of the element between compartments. However, overall balance may involve compartments distributed on a global scale.
As biogeochemical cycles describe the movements of substances on the entire globe, the study of these is inherently multidisciplinary. The carbon cycle may be related to research in ecology and atmospheric sciences. Biochemical dynamics would also be related to the fields of geology and pedology.
See also
Carbonate–silicate cycle
Ecological recycling
Great Acceleration
Hydrogen cycle
Redox gradient
References
Further reading
Schink, Bernhard. "Microbes: Masters of the Global Element Cycles". In Metals, Microbes and Minerals: The Biogeochemical Side of Life (pp. xiv + 341), pp. 33–58. Walter de Gruyter, Berlin. doi:10.1515/9783110589771-002.
Biogeography
Biosphere
Geochemistry
Biomimetics
Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from the Ancient Greek βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has gone through evolution over the 3.8 billion years since life is estimated to have appeared on Earth. It has evolved species with high performance using commonly found materials. The surfaces of solids interact with other surfaces and with the environment, and from these interactions materials derive many of their properties. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
One of the more recent examples of biomimicry is "managemANT", a term (a combination of the words "management" and "ant") coined by Johannes-Paul Fladerer and Ernst Kurzmann to describe the use of ants' behavioural strategies in economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, which demonstrated potential economic and environmental benefits of biomimicry.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in its conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter that gives a minimum-mass engineering system.
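For reference, Murray's law in its conventional form (a standard result, stated here as background rather than taken from the text) says that at a branch point the cube of the parent vessel's radius equals the sum of the cubes of the daughter radii:

```latex
% Murray's law at a branching point: the cubed radius of the parent
% vessel equals the sum of the cubed radii of its n daughter vessels.
\[
  r_{\mathrm{parent}}^{3} = \sum_{i=1}^{n} r_{\mathrm{daughter},\,i}^{3}
\]
```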
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The aerodynamic, streamlined design of the improved Japanese Shinkansen 500 Series high-speed train was modelled after the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces, and Pleobot, a shrimp-inspired robot to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal and are therefore capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR decelerates and minimizes the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs is much higher than that of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of its life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers studied the termite's ability to maintain virtually constant temperature and humidity in their termite mounds in Africa despite wide swings in outside temperature. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds to design a double façade that significantly cuts down on over-lit areas in a building. Scientists imitated the porous nature of mound walls by designing a façade with double panels that reduced heat gained by radiation and increased heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade. The green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants. The damp plant substrate further supports the cooling effect.
Scientists at Shanghai University were able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic its excellent humidity control. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride with a water vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which act as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold. Flectofold was inspired by the trapping system developed by the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are light weight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion. In a classic design problem, strength and toughness are more likely to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure: a brick-and-mortar-like arrangement of thick mineral layers (0.2–0.9 μm) of closely packed aragonite structures and a thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures are already produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials. Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and are shown to enhance the fracture toughness of leaves, key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state. All other pixels do not update until a signal is received.
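The following sketch is our own minimal illustration (not a real camera API) of this event-driven principle: a pixel reports a change only when its brightness shift exceeds a threshold, and silent pixels carry no cost.

```python
# Minimal sketch of the event-camera idea described above: a pixel emits
# an event only when its brightness changes by more than a threshold;
# unchanged pixels stay silent. All values are illustrative.
import numpy as np

def events(prev_frame, new_frame, threshold=0.1):
    """Return (row, col, polarity) tuples for pixels whose log-brightness
    changed by more than the threshold since the last frame."""
    diff = np.log1p(new_frame) - np.log1p(prev_frame)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(r, c, int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

prev = np.zeros((4, 4))
new = prev.copy()
new[1, 2] = 1.0                      # only one pixel changes...
print(events(prev, new))             # ...so only one event is emitted: [(1, 2, 1)]
```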
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California Santa Barbara borrowed and simplified chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion to create copolyampholytes, and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesive glue from mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity emerges when a solid surface possesses minute roughness, forming interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), the ratio of the actual solid-liquid contact area to its flat projection, which influences contact angles. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, with contact angles determined by the distribution of wetted and air-pocket areas. Superliquiphobicity is achieved by increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids.
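The standard wetting relations behind this description, stated here as background rather than drawn from the text, are the Wenzel equation for a fully wetted rough surface, cos θ* = Rf cos θ, and the Cassie-Baxter equation for a composite solid-liquid-air interface, cos θ* = fs(cos θ + 1) − 1, where fs is the solid fraction of the droplet footprint. The sketch below evaluates both with illustrative numbers.

```python
# Minimal sketch of the Wenzel and Cassie-Baxter contact-angle relations
# often used to model rough-surface wetting. Parameter values are
# illustrative only.
import math

def wenzel(theta_deg, roughness):
    """Apparent contact angle when liquid fully wets a rough surface."""
    c = roughness * math.cos(math.radians(theta_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cassie_baxter(theta_deg, solid_fraction):
    """Apparent contact angle when the droplet sits partly on air pockets."""
    c = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# A flat contact angle of 110 degrees becomes far more repellent when
# the droplet rests on only 10% solid (90% trapped air):
print(wenzel(110, roughness=1.8))              # roughness amplifies non-wetting
print(cassie_baxter(110, solid_fraction=0.1))  # ~159 degrees: superrepellent
```

The Cassie-Baxter result illustrates the air-pocket mechanism described above: shrinking the wetted solid fraction drives the apparent contact angle toward superrepellency.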
The inspiration for crafting such surfaces draws from nature's ingenuity, prominently illustrated by the renowned "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angles.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. Researchers have successfully fabricated various re-entrant geometries, offering a pathway for practical applications in diverse fields. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, and more, presenting innovative solutions to challenges in biomedicine, desalination, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-(adaptability), self-repair, and energy-autonomy. As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in various organs and tissues. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop AVFTs (artificial Venus flytrap robots). Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap-buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant twice within about 20 s. AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
Another example of mimicking plants is Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than those obtained from chemical absorption of light. Pollia condensata is not the only fruit with a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colors in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicates obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handed circularly polarised light.
The fruit of Elaeocarpus angustifolius also shows structural colour that arises from the presence of specialised cells called iridosomes, which have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intra-cellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rain-forest plant Begonia pavonina has iridoplasts located inside the epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachis adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature; the organelle has no basic shape or size, and its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery since particles release contents upon exposure to specific pH levels.
See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology
References
Further reading
Benyus, J. M. (2001). Along Came a Spider. Sierra, 86(4), 46–47.
Hargroves, K. D. & Smith, M. H. (2006). Innovation inspired by nature: Biomimicry. Ecos, (129), 27–28.
Marshall, A. (2009). Wild Design: The Ecomimicry Project, North Atlantic Books: Berkeley.
Passino, Kevin M. (2004). Biomimicry for Optimization, Control, and Automation. Springer.
Pyper, W. (2006). Emulating nature: The rise of industrial ecology. Ecos, (129), 22–26.
Smith, J. (2007). It's only natural. The Ecologist, 37(8), 52–55.
Thompson, D'Arcy W., On Growth and Form. Dover 1992 reprint of 1942 2nd ed. (1st ed., 1917).
Vogel, S. (2000). Cats' Paws and Catapults: Mechanical Worlds of Nature and People. Norton.
External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature - National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Holism in science | Holism in science, holistic science, or methodological holism is an approach to research that emphasizes the study of complex systems. Systems are approached as coherent wholes whose component parts are best understood in context and in relation to both each other and to the whole. Holism typically stands in contrast with reductionism, which describes systems by dividing them into smaller components in order to understand them through their elemental properties.
The holism–individualism dichotomy is especially evident in conflicting interpretations of experimental findings across the social sciences, and reflects whether behavioural analysis begins at the systemic, macro-level (i.e., derived from social relations) or the component micro-level (i.e., derived from individual agents).
Overview
David Deutsch calls holism anti-reductionist and describes it as the view that the only legitimate way to think about science is as a series of emergent, or higher-level, phenomena. He argues that neither approach is purely correct.
Two aspects of holism are:
The way of doing science, sometimes called "whole to parts", which focuses on observation of the specimen within its ecosystem first, before breaking it down to study any part of the specimen.
The idea that the scientist is not a passive observer of an external universe but rather a participant in the system.
Proponents claim that holistic science is naturally suited to subjects such as ecology, biology, physics and the social sciences, where complex, non-linear interactions are the norm. These are systems where emergent properties arise at the level of the whole that cannot be predicted by focusing on the parts alone, which may make mainstream, reductionist science ill-equipped to provide understanding beyond a certain level. This principle of emergence in complex systems is often captured in the phrase "the whole is greater than the sum of its parts". Living organisms are an example: no knowledge of all the chemical and physical properties of matter can explain or predict the functioning of living organisms. The same happens in complex social human systems, where detailed understanding of individual behaviour cannot predict the behaviour of the group, which emerges at the level of the collective. The phenomenon of emergence may impose a theoretical limit on knowledge available through reductionist methodology, arguably making complex systems natural subjects for holistic approaches.
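To make the notion of emergence concrete, a toy model helps. The Python sketch below is an illustrative assumption rather than a model drawn from the holist literature: it implements Thomas Schelling's classic segregation model, in which agents with only a mild preference for like-typed neighbours produce, at the level of the whole grid, far stronger segregation than any individual demands.

```python
import random

# Schelling-style segregation model on a torus grid. An agent is unhappy
# when fewer than THRESHOLD of its neighbours share its type; unhappy
# agents relocate to random empty cells. Strong group-level segregation
# emerges from weak individual preferences.
SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 30, 0.1, 0.4, 60

def neighbours(grid, r, c):
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    nbrs = neighbours(grid, r, c)
    return bool(nbrs) and sum(x == grid[r][c] for x in nbrs) / len(nbrs) < THRESHOLD

random.seed(0)
flat = [None] * int(SIZE * SIZE * EMPTY_FRAC)
rest = SIZE * SIZE - len(flat)
flat += ["A"] * (rest // 2) + ["B"] * (rest - rest // 2)
random.shuffle(flat)
grid = [flat[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

for _ in range(STEPS):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

shares = [sum(x == grid[r][c] for x in neighbours(grid, r, c)) / len(neighbours(grid, r, c))
          for r in range(SIZE) for c in range(SIZE)
          if grid[r][c] is not None and neighbours(grid, r, c)]
print(f"mean same-type neighbour share: {sum(shares) / len(shares):.2f}")
```

Runs of this model with a tolerance of only 40% typically end with neighbourhoods that are 70–80% same-type: a whole-level pattern that cannot be read off from the individual rule, which is the sense of emergence at issue here.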
Science journalist John Horgan has expressed skepticism of such claims in the book The End of Science. He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
One of the reasons that holistic science attracts supporters is that it seems to offer a progressive, 'socio-ecological' view of the world, but Alan Marshall's book The Unity of Nature offers evidence to the contrary, suggesting that holism in science is not 'ecological' or 'socially responsive' at all, but regressive and repressive.
Examples in various fields of science
Physical science
Agriculture
Permaculture takes a systems level approach to agriculture and land management by attempting to copy what happens in the natural world. Holistic management integrates ecology and social sciences with food production. It was originally designed as a way to reverse desertification. Organic farming is sometimes considered a holistic approach.
Physics
Richard Healey offered a modal interpretation and used it to present a model account of the puzzling correlations which portrays them as resulting from the operation of a process that violates both spatial and spatiotemporal separability. He argued that, on this interpretation, the nonseparability of the process is a consequence of physical property holism, and that the resulting account yields genuine understanding of how the correlations come about without any violation of relativity theory or Local Action. Subsequent work by Clifton, Dickson and Myrvold cast doubt on whether the account can be squared with relativity theory's requirement of Lorentz invariance, but leaves no doubt of a spatially entangled holism in the theory. Paul Davies and John Gribbin further observe that Wheeler's delayed choice experiment shows how the quantum world displays a sort of holism in time as well as space.
In the holistic approach of David Bohm, any collection of quantum objects constitutes an indivisible whole within an implicate and explicate order. Bohm said there is no scientific evidence to support the dominant view that the universe consists of a huge, finite number of minute particles, and offered instead a view of undivided wholeness: "ultimately, the entire universe (with all its 'particles', including those constituting human beings, their laboratories, observing instruments, etc.) has to be understood as a single undivided whole, in which analysis into separately and independently existent parts has no fundamental status".
Chaos and complexity
Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that the behavior of such systems might be computationally irreducible, meaning it would not be possible even to approximate the system's state without a full simulation of all the events occurring in it. Key properties of the higher-level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading prediction except by brute-force simulation.
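The self-organized criticality mentioned above has a canonical concrete model, the Bak–Tang–Wiesenfeld sandpile. The Python sketch below is a minimal illustration with arbitrary parameters: the heavy-tailed avalanche statistics it produces have no known analytic shortcut, so they can only be obtained by simulating every individual toppling, which is exactly the brute-force situation described in the preceding paragraph.

```python
import random

# Bak–Tang–Wiesenfeld sandpile: grains are dropped onto a grid, and any
# cell holding 4 or more grains topples, giving one grain to each of its
# four neighbours (grains fall off the edges). Avalanche sizes show the
# heavy-tailed statistics characteristic of self-organized criticality.
N = 25
grid = [[0] * N for _ in range(N)]

def drop(r, c):
    """Add one grain at (r, c), relax the pile, return the avalanche size."""
    grid[r][c] += 1
    size, stack = 0, [(r, c)]
    while stack:
        i, j = stack.pop()
        while grid[i][j] >= 4:                  # topple until locally stable
            grid[i][j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni][nj] += 1
                    if grid[ni][nj] == 4:       # just became unstable
                        stack.append((ni, nj))
    return size

random.seed(1)
sizes = [drop(random.randrange(N), random.randrange(N)) for _ in range(20000)]
print(f"largest avalanche: {max(sizes)} topplings; "
      f"{sum(s > 100 for s in sizes)} avalanches exceeded 100 topplings")
```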
Ecology
Holistic thinking can be applied to ecology, combining biological, chemical, physical, economic, ethical, and political insights. Because the complexity of such an analysis grows with the size of the area considered, the scope of the view must be limited in other ways, for example to a specific duration of time.
Medicine
In primary care, the term "holistic" has been used to describe approaches that take into account social considerations and other intuitive judgements. The term holism, and so-called holistic approaches, appeared in psychosomatic medicine in the 1970s, when they were considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice versa, it aimed at a systemic model, where multiple biological, psychological and social factors were seen as interlinked.
Other, alternative approaches in the 1970s were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively. At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes.
The term systems medicine first appeared in 1992 and takes an integrative approach to the whole body and its environment.
Social science
Economics
Some economists use a causal holism theory in their work: that is, they view the discipline in the manner of Ludwig Wittgenstein and claim that it cannot be defined by necessary and sufficient conditions.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which it is claimed may be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple choice tests, a standards-based assessment uses trained scorers to score open-response items using holistic scoring methods. In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or count numbers of points or supporting statements. The scorer is instead instructed to judge holistically whether, "as a whole," the response is more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Anthropology
Anthropology is holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.). Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.
Some anthropologists disagree, and consider holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.
The term "holism" is additionally used within social and cultural anthropology to refer to a methodological analysis of a society as a whole, in which component parts are treated as functionally relative to each other. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."
Psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground; the background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer, and Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of holistic psychologists such as Kurt Goldstein in his "Phenomenology of Perception."
Teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal), must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships), to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy.
Edgar Morin, the French philosopher and sociologist, can be considered a holist based on the transdisciplinary nature of his work.
Skeptical reception
According to skeptics, the phrase "holistic science" is often misused by pseudosciences. In the book Science and Pseudoscience in Clinical Psychology it is noted that "Proponents of pseudoscientific claims, especially in organic medicine, and mental health, often resort to the "mantra of holism" to explain away negative findings. When invoking the mantra, they typically maintain that scientific claims can be evaluated only within the context of broader claims and therefore cannot be evaluated in isolation." This is an invocation of Karl Popper's demarcation problem, and in a posting to Ask a Philosopher Massimo Pigliucci clarifies Popper by positing, "Instead of thinking of science as making progress by inductive generalization (which doesn't work because no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary, data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong."
Victor J. Stenger states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well".
Some quantum mystics interpret the wave function of quantum mechanics as a vibration in a holistic ether that pervades the universe and wave function collapse as the result of some cosmic consciousness. This is a misinterpretation of the effects of quantum entanglement as a violation of relativistic causality and quantum field theory.
See also
Antireductionism
Emergence
Holarchy
Holism
Holism in ecological anthropology
Holistic management
Holistic health
Holon (philosophy)
Interdisciplinarity
Organicism
Scientific reductionism
Systems thinking
References
Further reading
Article "Patterns of Wholeness: Introducing Holistic Science" by Brian Goodwin, from the journal Resurgence
Article "From Control to Participation" by Brian Goodwin, from the journal Resurgence
Biogeography | Biogeography is the study of the distribution of species and ecosystems in geographic space and through geological time. Organisms and biological communities often vary in a regular fashion along geographic gradients of latitude, elevation, isolation and habitat area. Phytogeography is the branch of biogeography that studies the distribution of plants. Zoogeography is the branch that studies distribution of animals. Mycogeography is the branch that studies distribution of fungi, such as mushrooms.
Knowledge of spatial variation in the numbers and types of organisms is as vital to us today as it was to our early human ancestors, as we adapt to heterogeneous but geographically predictable environments. Biogeography is an integrative field of inquiry that unites concepts and information from ecology, evolutionary biology, taxonomy, geology, physical geography, palaeontology, and climatology.
Modern biogeographic research combines information and ideas from many fields, from the physiological and ecological constraints on organismal dispersal to geological and climatological phenomena operating at global spatial scales and evolutionary time frames.
The short-term interactions within a habitat and species of organisms describe the ecological application of biogeography. Historical biogeography describes the long-term, evolutionary periods of time for broader classifications of organisms. Early scientists, beginning with Carl Linnaeus, contributed to the development of biogeography as a science.
The scientific theory of biogeography grows out of the work of Alexander von Humboldt (1769–1859), Francisco Jose de Caldas (1768–1816), Hewett Cottrell Watson (1804–1881), Alphonse de Candolle (1806–1893), Alfred Russel Wallace (1823–1913), Philip Lutley Sclater (1829–1913) and other biologists and explorers.
Introduction
The patterns of species distribution across geographical areas can usually be explained through a combination of historical factors such as: speciation, extinction, continental drift, and glaciation. Through observing the geographic distribution of species, we can see associated variations in sea level, river routes, habitat, and river capture. Additionally, this science considers the geographic constraints of landmass areas and isolation, as well as the available ecosystem energy supplies.
Over periods of ecological changes, biogeography includes the study of plant and animal species in: their past and/or present living refugium habitat; their interim living sites; and/or their survival locales. As writer David Quammen put it, "...biogeography does more than ask Which species? and Where. It also asks Why? and, what is sometimes more crucial, Why not?"
Modern biogeography often employs the use of Geographic Information Systems (GIS), to understand the factors affecting organism distribution, and to predict future trends in organism distribution.
Often mathematical models and GIS are employed to solve ecological problems that have a spatial aspect to them.
Biogeography is most keenly observed on the world's islands. These habitats are often much more manageable areas of study because they are more condensed than larger ecosystems on the mainland. Islands are also ideal locations because they allow scientists to look at habitats that new invasive species have only recently colonized and can observe how they disperse throughout the island and change it. They can then apply their understanding to similar but more complex mainland habitats. Islands are very diverse in their biomes, ranging from the tropical to arctic climates. This diversity in habitat allows for a wide range of species study in different parts of the world.
One scientist who recognized the importance of these geographic locations was Charles Darwin, who remarked in his journal "The Zoology of Archipelagoes will be well worth examination". Two chapters in On the Origin of Species were devoted to geographical distribution.
History
18th century
The first discoveries that contributed to the development of biogeography as a science began in the mid-18th century, as Europeans explored the world and described the biodiversity of life. During the 18th century most views on the world were shaped around religion and, for many natural theologists, the Bible. Carl Linnaeus, in the mid-18th century, improved our classifications of organisms through the exploration of undiscovered territories by his students and disciples. When he noticed that species were not as perpetual as he believed, he developed the Mountain Explanation to explain the distribution of biodiversity: when Noah's ark landed on Mount Ararat and the waters receded, the animals dispersed throughout different elevations on the mountain. This showed different species in different climates, proving species were not constant. Linnaeus' findings set a basis for ecological biogeography. Through his strong beliefs in Christianity, he was inspired to classify the living world, which then gave way to additional accounts of secular views on geographical distribution. He argued that the structure of an animal was very closely related to its physical surroundings. This was important to Georges-Louis Buffon's rival theory of distribution.
Closely after Linnaeus, Georges-Louis Leclerc, Comte de Buffon observed shifts in climate and how species spread across the globe as a result. He was the first to see different groups of organisms in different regions of the world. Buffon saw similarities between some regions which led him to believe that at one point continents were connected and then water separated them and caused differences in species. His hypotheses were described in his work, the 36 volume Histoire Naturelle, générale et particulière, in which he argued that varying geographical regions would have different forms of life. This was inspired by his observations comparing the Old and New World, as he determined distinct variations of species from the two regions. Buffon believed there was a single species creation event, and that different regions of the world were homes for varying species, which is an alternate view than that of Linnaeus. Buffon's law eventually became a principle of biogeography by explaining how similar environments were habitats for comparable types of organisms. Buffon also studied fossils which led him to believe that the Earth was over tens of thousands of years old, and that humans had not lived there long in comparison to the age of the Earth.
19th century
Following the period of exploration came the Age of Enlightenment in Europe, which attempted to explain the patterns of biodiversity observed by Buffon and Linnaeus. At the birth of the 19th century, Alexander von Humboldt, known as the "founder of plant geography", developed the concept of physique generale to demonstrate the unity of science and how species fit together. As one of the first to contribute empirical data to the science of biogeography through his travel as an explorer, he observed differences in climate and vegetation. The Earth was divided into regions which he defined as tropical, temperate, and arctic and within these regions there were similar forms of vegetation. This ultimately enabled him to create the isotherm, which allowed scientists to see patterns of life within different climates. He contributed his observations to findings of botanical geography by previous scientists, and sketched this description of both the biotic and abiotic features of the Earth in his book, Cosmos.
Augustin de Candolle contributed to the field of biogeography as he observed species competition and the several differences that influenced the discovery of the diversity of life. He was a Swiss botanist and created the first Laws of Botanical Nomenclature in his work, Prodromus. He discussed plant distribution and his theories eventually had a great impact on Charles Darwin, who was inspired to consider species adaptations and evolution after learning about botanical geography. De Candolle was the first to describe the differences between the small-scale and large-scale distribution patterns of organisms around the globe.
Several additional scientists contributed new theories to further develop the concept of biogeography. Charles Lyell developed the Theory of Uniformitarianism after studying fossils. This theory explained how the world was not created by one sole catastrophic event, but instead from numerous creation events and locations. Uniformitarianism also introduced the idea that the Earth was actually significantly older than was previously accepted. Using this knowledge, Lyell concluded that it was possible for species to go extinct. Since he noted that Earth's climate changes, he realized that species distribution must also change accordingly. Lyell argued that climate changes complemented vegetation changes, thus connecting the environmental surroundings to varying species. This largely influenced Charles Darwin in his development of the theory of evolution.
Charles Darwin was a natural theologist who studied around the world, and most importantly in the Galapagos Islands. Darwin introduced the idea of natural selection, as he theorized against previously accepted ideas that species were static or unchanging. His contributions to biogeography and the theory of evolution were different from those of other explorers of his time, because he developed a mechanism to describe the ways that species changed. His influential ideas include the development of theories regarding the struggle for existence and natural selection. Darwin's theories started a biological segment to biogeography and empirical studies, which enabled future scientists to develop ideas about the geographical distribution of organisms around the globe.
Alfred Russel Wallace studied the distribution of flora and fauna in the Amazon Basin and the Malay Archipelago in the mid-19th century. His research was essential to the further development of biogeography, and he was later nicknamed the "father of Biogeography". Wallace conducted fieldwork researching the habits, breeding and migration tendencies, and feeding behavior of thousands of species. He studied butterfly and bird distributions in comparison to the presence or absence of geographical barriers. His observations led him to conclude that the number of organisms present in a community was dependent on the amount of food resources in the particular habitat. Wallace believed species were dynamic by responding to biotic and abiotic factors. He and Philip Sclater saw biogeography as a source of support for the theory of evolution as they used Darwin's conclusion to explain how biogeography was similar to a record of species inheritance. Key findings, such as the sharp difference in fauna either side of the Wallace Line, and the sharp difference that existed between North and South America prior to their relatively recent faunal interchange, can only be understood in this light. Otherwise, the field of biogeography would be seen as a purely descriptive one.
20th and 21st century
Moving on to the 20th century, Alfred Wegener introduced the Theory of Continental Drift in 1912, though it was not widely accepted until the 1960s. This theory was revolutionary because it changed the way that everyone thought about species and their distribution around the globe. The theory explained how continents were formerly joined in one large landmass, Pangea, and slowly drifted apart due to the movement of the plates below Earth's surface. The evidence for this theory is in the geological similarities between varying locations around the globe, the geographic distribution of some fossils (including the mesosaurs) on various continents, and the jigsaw puzzle shape of the landmasses on Earth. Though Wegener did not know the mechanism of this concept of Continental Drift, this contribution to the study of biogeography was significant in the way that it shed light on the importance of environmental and geographic similarities or differences as a result of climate and other pressures on the planet. Importantly, late in his career Wegener recognised that testing his theory required measurement of continental movement rather than inference from fossils species distributions.
In 1958 paleontologist Paul S. Martin published A Biogeography of Reptiles and Amphibians in the Gómez Farias Region, Tamaulipas, Mexico, which has been described as "ground-breaking" and "a classic treatise in historical biogeography". Martin applied several disciplines including ecology, botany, climatology, geology, and Pleistocene dispersal routes to examine the herpetofauna of a relatively small, largely undisturbed, but ecologically complex area situated on the threshold of the temperate–tropical (Nearctic and Neotropical) regions, including semiarid lowlands at 70 meters elevation and the northernmost cloud forest in the western hemisphere at over 2200 meters.
The publication of The Theory of Island Biogeography by Robert MacArthur and E.O. Wilson in 1967 showed that the species richness of an area could be predicted in terms of such factors as habitat area, immigration rate and extinction rate. This added to the long-standing interest in island biogeography. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
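The equilibrium argument of island biogeography can also be made quantitative. In the simplest linear version of the MacArthur–Wilson model, immigration of new species declines as the island fills and extinction rises with the number of residents, so richness settles where the two rates balance. The Python sketch below uses illustrative parameter values (assumptions for this example, not figures from the 1967 book) to compute that equilibrium both analytically and by simulation.

```python
# Linear MacArthur-Wilson island model with illustrative parameters.
P = 1000    # species in the mainland source pool (assumed)
I0 = 20.0   # immigration rate onto an empty island, species/year (assumed)
E0 = 5.0    # extinction rate on a fully saturated island, species/year (assumed)

def immigration(S):
    return I0 * (1 - S / P)     # falls as the island fills

def extinction(S):
    return E0 * S / P           # rises with the number of residents

# Analytic equilibrium: I0 * (1 - S/P) = E0 * S/P  =>  S* = I0 * P / (I0 + E0)
s_eq = I0 * P / (I0 + E0)
print(f"equilibrium richness: {s_eq:.0f} species")      # 800 with these values

# The same equilibrium reached dynamically from an empty island:
S, dt = 0.0, 0.1
for _ in range(10000):                                  # 1000 simulated years
    S += (immigration(S) - extinction(S)) * dt
print(f"simulated richness: {S:.0f} species")
```

In this formulation, a more distant island corresponds to a lower I0 and a smaller island to a higher E0, both of which lower the equilibrium S*: the model's signature distance and area effects.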
Classic biogeography has been expanded by the development of molecular systematics, creating a new discipline known as phylogeography. This development allowed scientists to test theories about the origin and dispersal of populations, such as island endemics. For example, while classic biogeographers were able to speculate about the origins of species in the Hawaiian Islands, phylogeography allows them to test theories of relatedness between these populations and putative source populations on various continents, notably in Asia and North America.
Biogeography continues as a point of study for many life sciences and geography students worldwide; however, within institutions it may fall under broader titles such as ecology or evolutionary biology.
In recent years, one of the most important and consequential developments in biogeography has been to show how multiple organisms, including mammals like monkeys and reptiles like squamates, overcame barriers such as large oceans that many biogeographers formerly believed were impossible to cross. See also Oceanic dispersal.
Modern applications
Biogeography now incorporates many different fields including but not limited to physical geography, geology, botany and plant biology, zoology, general biology, and modelling. A biogeographer's main focus is on how the environment and humans affect the distribution of species as well as other manifestations of life such as species or genetic diversity. Biogeography is being applied to biodiversity conservation and planning, projecting global environmental changes on species and biomes, projecting the spread of infectious diseases and invasive species, and supporting planning for the establishment of crops. Technological advances have allowed the generation of a whole suite of predictor variables for biogeographic analysis, including satellite imaging and processing of the Earth's surface. Two main types of satellite imaging that are important within modern biogeography are Global Production Efficiency Model (GLO-PEM) and Geographic Information Systems (GIS). GLO-PEM uses satellite imaging to give "repetitive, spatially contiguous, and time specific observations of vegetation"; these observations are on a global scale. GIS can show certain processes on the Earth's surface, such as whale locations, sea surface temperatures, and bathymetry. Current scientists also use fossilized coral reefs to delve into the history of biogeography.
Two global information systems are either dedicated to, or have strong focus on, biogeography (in the form of the spatial location of observations of organisms), namely the Global Biodiversity Information Facility (GBIF: 2.57 billion species occurrence records reported as at August 2023) and, for marine species only, the Ocean Biodiversity Information System (OBIS, originally the Ocean Biogeographic Information System: 116 million species occurrence records reported as at August 2023), while at a national scale, similar compilations of species occurrence records also exist such as the U.K. National Biodiversity Network, the Atlas of Living Australia, and many others. In the case of the oceans, in 2017 Costello et al. analyzed the distribution of 65,000 species of marine animals and plants as then documented in OBIS, and used the results to distinguish 30 distinct marine realms, split between continental-shelf and offshore deep-sea areas.
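These occurrence databases expose public web services. As a small example, the sketch below queries GBIF's documented occurrence API at api.gbif.org (the species name is arbitrary); with limit=0 the service returns only the total record count.

```python
import json
import urllib.parse
import urllib.request

# Ask the public GBIF occurrence API how many occurrence records match a
# given scientific name; limit=0 requests the count without any records.
def gbif_occurrence_count(scientific_name: str) -> int:
    query = urllib.parse.urlencode({"scientificName": scientific_name, "limit": 0})
    with urllib.request.urlopen(f"https://api.gbif.org/v1/occurrence/search?{query}") as resp:
        return json.load(resp)["count"]

print(gbif_occurrence_count("Puma concolor"))
```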
Since compilations of species occurrence records cannot cover areas that have received limited or no sampling with any completeness, a number of methods have been developed to produce arguably more complete "predictive" or "modelled" distributions for species based on their associated environmental or other preferences (such as availability of food or other habitat requirements). This approach is known as either environmental niche modelling (ENM) or species distribution modelling (SDM). Depending on the reliability of the source data and the nature of the models employed (including the scales for which data are available), maps generated from such models may provide better representations of the "real" biogeographic distributions of individual species, groups of species, or biodiversity as a whole. It should, however, be borne in mind that historic or recent human activities (such as hunting of great whales, or other human-induced exterminations) may have altered present-day species distributions from their potential "full" ecological footprint. Examples of predictive maps produced by niche modelling methods based on either GBIF (terrestrial) or OBIS (marine, plus some freshwater) data are the former Lifemapper project at the University of Kansas (now continued as a part of BiotaPhy) and AquaMaps, which as at 2023 contain modelled distributions for around 200,000 terrestrial species and 33,000 species of teleosts, marine mammals and invertebrates, respectively. One advantage of ENM/SDM is that in addition to showing current (or even past) modelled distributions, insertion of changed parameters such as the anticipated effects of climate change can also be used to show potential changes in species distributions that may occur in the future under such scenarios.
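A species distribution model in this spirit can be sketched in a few lines. The example below is purely illustrative: the presence/absence records are synthetic, and a plain logistic regression stands in for the more elaborate niche-modelling methods used by projects such as AquaMaps. It fits occurrence against two environmental predictors and then projects habitat suitability under a warmed climate, mirroring the scenario-based use of ENM/SDM just described.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy species distribution model: synthetic presence/absence data are
# generated from an assumed thermal/moisture niche, a logistic regression
# is fitted, and suitability is projected under a +2 degC scenario.
rng = np.random.default_rng(0)
n = 2000
temp = rng.uniform(0, 30, n)        # mean annual temperature (deg C)
rain = rng.uniform(200, 3000, n)    # annual precipitation (mm)

# Assumed "true" niche: the species favours ~18 deg C and wetter sites.
suitability = np.exp(-((temp - 18) / 5) ** 2) * rain / 3000
present = rng.random(n) < suitability

# A quadratic temperature term lets the fitted response have an optimum.
X = np.column_stack([temp, temp ** 2, rain])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, present)

site_now = [[16.0, 16.0 ** 2, 1500.0]]      # a site at 16 degC, 1500 mm rain
site_warm = [[18.0, 18.0 ** 2, 1500.0]]     # the same site after +2 degC
print(f"P(present) now:  {model.predict_proba(site_now)[0, 1]:.2f}")
print(f"P(present) +2 C: {model.predict_proba(site_warm)[0, 1]:.2f}")
```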
Paleobiogeography
Paleobiogeography goes one step further to include paleogeographic data and considerations of plate tectonics. Using molecular analyses and corroborated by fossils, it has been possible to demonstrate that perching birds evolved first in the region of Australia or the adjacent Antarctic (which at that time lay somewhat further north and had a temperate climate). From there, they spread to the other Gondwanan continents and Southeast Asia – the part of Laurasia then closest to their origin of dispersal – in the late Paleogene, before achieving a global distribution in the early Neogene. Not knowing that at the time of dispersal, the Indian Ocean was much narrower than it is today, and that South America was closer to the Antarctic, one would be hard pressed to explain the presence of many "ancient" lineages of perching birds in Africa, as well as the mainly South American distribution of the suboscines.
Paleobiogeography also helps constrain hypotheses on the timing of biogeographic events such as vicariance and geodispersal, and provides unique information on the formation of regional biotas. For example, data from species-level phylogenetic and biogeographic studies tell us that the Amazonian teleost fauna accumulated in increments over a period of tens of millions of years, principally by means of allopatric speciation, and in an arena extending over most of the area of tropical South America (Albert & Reis 2011). In other words, unlike some of the well-known insular faunas (Galapagos finches, Hawaiian drosophilid flies, African rift lake cichlids), the species-rich Amazonian ichthyofauna is not the result of recent adaptive radiations.
For freshwater organisms, landscapes are divided naturally into discrete drainage basins by watersheds, episodically isolated and reunited by erosional processes. In regions like the Amazon Basin (or more generally Greater Amazonia, the Amazon basin, Orinoco basin, and Guianas) with an exceptionally low (flat) topographic relief, the many waterways have had a highly reticulated history over geological time. In such a context, stream capture is an important factor affecting the evolution and distribution of freshwater organisms. Stream capture occurs when an upstream portion of one river drainage is diverted to the downstream portion of an adjacent basin. This can happen as a result of tectonic uplift (or subsidence), natural damming created by a landslide, or headward or lateral erosion of the watershed between adjacent basins.
Concepts and fields
Biogeography is a synthetic science, related to geography, biology, soil science, geology, climatology, ecology and evolution.
Some fundamental concepts in biogeography include:
allopatric speciation – the splitting of a species by evolution of geographically isolated populations
evolution – change in genetic composition of a population
extinction – disappearance of a species
dispersal – movement of populations away from their point of origin, related to migration
endemic areas
geodispersal – the erosion of barriers to biotic dispersal and gene flow, that permit range expansion and the merging of previously isolated biotas
range and distribution
vicariance – the formation of barriers to biotic dispersal and gene flow, that tend to subdivide species and biotas, leading to speciation and extinction; vicariance biogeography is the field that studies these patterns
Comparative biogeography
The study of comparative biogeography can follow two main lines of investigation:
Systematic biogeography, the study of biotic area relationships, their distribution, and hierarchical classification
Evolutionary biogeography, the proposal of evolutionary mechanisms responsible for organismal distributions. Possible mechanisms include widespread taxa disrupted by continental break-up or individual episodes of long-distance movement.
Biogeographic regionalisations
There are many types of biogeographic units used in biogeographic regionalisation schemes, as there are many criteria (species composition, physiognomy, ecological aspects) and hierarchization schemes: biogeographic realms (ecozones), bioregions (sensu stricto), ecoregions, zoogeographical regions, floristic regions, vegetation types, biomes, etc.
The terms biogeographic unit, biogeographic area can be used for these categories, regardless of rank.
In 2008, an International Code of Area Nomenclature was proposed for biogeography. It achieved limited success; some studies commented favorably on it, but others were much more critical, and it "has not yet gained a significant following". Similarly, a set of rules for paleobiogeography has achieved limited success. In 2000, Westermann suggested that the difficulties in getting formal nomenclatural rules established in this field might be related to "the curious fact that neither paleo- nor neobiogeographers are organized in any formal groupings or societies, nationally (so far as I know) or internationally — an exception among active disciplines."
See also
Allen's rule
Bergmann's rule
Biogeographic realm
Bibliography of biology
Biogeography-based optimization
Center of origin
Concepts and Techniques in Modern Geography
Distance decay
Ecological land classification
Geobiology
Macroecology
Marine ecoregions
Max Carl Wilhelm Weber
Miklos Udvardy
Phytochorion – Plant region
Sky island
Systematic and evolutionary biogeography association
Notes and references
Further reading
Albert, J. S., & R. E. Reis (2011). Historical Biogeography of Neotropical Freshwater Fishes. University of California Press, Berkeley. 424 pp.
Cox, C. B. (2001). The biogeographic regions reconsidered. Journal of Biogeography, 28: 511–523.
Ebach, M.C. (2015). Origins of biogeography. The role of biological classification in early plant and animal geography. Dordrecht: Springer, xiv + 173 pp.
Lieberman, B. S. (2001). "Paleobiogeography: using fossils to study global change, plate tectonics, and evolution". Kluwer Academic, Plenum Publishing.
Lomolino, M. V., & Brown, J. H. (2004). Foundations of biogeography: classic papers with commentaries. University of Chicago Press.
Millington, A., Blumler, M., & Schickhoff, U. (Eds.). (2011). The SAGE handbook of biogeography. Sage, London.
Nelson, G.J. (1978). From Candolle to Croizat: Comments on the history of biogeography. Journal of the History of Biology, 11: 269–305.
Udvardy, M. D. F. (1975). A classification of the biogeographical provinces of the world. IUCN Occasional Paper no. 18. Morges, Switzerland: IUCN.
External links
The International Biogeography Society
Systematic & Evolutionary Biogeographical Society (archived 5 December 2008)
Early Classics in Biogeography, Distribution, and Diversity Studies: To 1950
Early Classics in Biogeography, Distribution, and Diversity Studies: 1951–1975
Some Biogeographers, Evolutionists and Ecologists: Chrono-Biographical Sketches
Major journals
Journal of Biogeography homepage (archived 15 December 2004)
Global Ecology and Biogeography homepage.
Ecography homepage.
Protist | A protist or protoctist is any eukaryotic organism that is not an animal, land plant, or fungus. Protists do not form a natural group, or clade, but are a polyphyletic grouping of several independent clades that evolved from the last eukaryotic common ancestor.
Protists were historically regarded as a separate taxonomic kingdom known as Protista or Protoctista. With the advent of phylogenetic analysis and electron microscopy studies, the use of Protista as a formal taxon was gradually abandoned. In modern classifications, protists are spread across several eukaryotic clades called supergroups, such as Archaeplastida (photoautotrophs that includes land plants), SAR, Obazoa (which includes fungi and animals), Amoebozoa and Excavata.
Protists represent an extremely large genetic and ecological diversity in all environments, including extreme habitats. Their diversity, larger than for all other eukaryotes, has only been discovered in recent decades through the study of environmental DNA and is still in the process of being fully described. They are present in all ecosystems as important components of the biogeochemical cycles and trophic webs. They exist abundantly and ubiquitously in a variety of forms that evolved multiple times independently, such as free-living algae, amoebae and slime moulds, or as important parasites. Together, they compose an amount of biomass twice that of animals. They exhibit varied types of nutrition (such as phototrophy, phagotrophy or osmotrophy), sometimes combining them (in mixotrophy). They present unique adaptations not present in multicellular animals, fungi or land plants. The study of protists is termed protistology.
Definition
There is not a single accepted definition of what protists are. As a paraphyletic assemblage of diverse biological groups, they have historically been regarded as a catch-all taxon that includes any eukaryotic organism (i.e., living beings whose cells possess a nucleus) that is not an animal, a land plant or a dikaryon fungus. Because of this definition by exclusion, protists encompass almost all of the broad spectrum of biological characteristics expected in eukaryotes.
They are generally unicellular, microscopic eukaryotes. Some species can be purely phototrophic (generally called algae), or purely heterotrophic (traditionally called protozoa), but there is a wide range of mixotrophic protists which exhibit both phagotrophy and phototrophy together. They have different life cycles, trophic levels, modes of locomotion, and cellular structures. Some protists can be pathogens.
Examples of basic protist forms that do not represent evolutionary cohesive lineages include:
Algae, which are photosynthetic protists. Traditionally called "protophyta", they are found within most of the big evolutionary lineages or supergroups, intermingled with heterotrophic protists which are traditionally called "protozoa". There are many multicellular and colonial examples of algae, including kelp, red algae, some types of diatoms, and some lineages of green algae.
Flagellates, which bear eukaryotic flagella. They are found in all lineages, reflecting that the common ancestor of all living eukaryotes was a flagellated heterotroph.
Amoebae, which usually lack flagella but move through changes in the shape and motion of their protoplasm to produce pseudopodia. They have evolved independently several times, leading to major radiations of these lifeforms. Many lineages lack a solid shape ("naked amoebae"). Some of them have special forms, such as the "heliozoa", amoebae with microtubule-supported pseudopodia radiating from the cell, with at least three independent origins. Others, referred to as "testate amoebae", grow a shell around the cell made from organic or inorganic material.
Slime molds, which are amoebae capable of producing stalked reproductive structures that bear spores, often through aggregative multicellularity (numerous amoebae aggregating together). This type of multicellularity has evolved at least seven times among protists.
Fungus-like protists, which can produce hyphae-like structures and are often saprophytic. They have evolved multiple times, often very distantly from true fungi. For example, the oomycetes (water molds) or the myxomycetes.
Parasitic protists, such as Plasmodium falciparum, the cause of malaria.
The names of some protists (called ambiregnal protists), because of their mixture of traits similar to both animals and plants or fungi (e.g. slime molds and flagellated algae like euglenids), have been published under either or both of the ICN and the ICZN codes.
Classification
The evolutionary relationships of protists have been explained through molecular phylogenetics, the sequencing of entire genomes and transcriptomes, and electron microscopy studies of the flagellar apparatus and cytoskeleton. New major lineages of protists and novel biodiversity continue to be discovered, resulting in dramatic changes to the eukaryotic tree of life. The newest classification systems of eukaryotes, revised in 2019, do not recognize the formal taxonomic ranks (kingdom, phylum, class, order...) and instead only recognize clades of related organisms, making the classification more stable in the long term and easier to update. In this new cladistic scheme, the protists are divided into various wide branches informally named supergroups:
Archaeplastida — consists of groups that have evolved from a photosynthetic common ancestor that obtained chloroplasts directly through a single event of endosymbiosis with a cyanobacterium:
Picozoa (1 species), non-photosynthetic predators.
Glaucophyta (26 species), unicellular algae found in freshwater and terrestrial environments.
Rhodophyta (5,000–6,000 species), mostly multicellular marine algae that lost chlorophyll and only harvest light energy through phycobiliproteins.
Rhodelphidia (2 species), predators with non-photosynthetic plastid.
Viridiplantae or Chloroplastida, containing both green algae and land plants which are not protists. The green algae comprise many lineages of varying diversity, such as Chlorophyta (7,000), Prasinodermophyta (10), Zygnematophyceae (4,000), Charophyceae (877), Klebsormidiophyceae (48) or Coleochaetophyceae (36).
Sar, SAR or Harosa – a clade of three highly diverse lineages exclusively containing protists.
Stramenopiles is a wide clade of photosynthetic and heterotrophic organisms that evolved from a common ancestor with hairs in one of their two flagella. The photosynthetic stramenopiles, called Ochrophyta, are a monophyletic group that acquired chloroplasts from secondary endosymbiosis with a red alga. Among these, the best known are: the unicellular or colonial Bacillariophyta (>60,000 species), known as diatoms; the filamentous or genuinely multicellular Phaeophyta (2,000 species), known as brown algae; and the Chrysomonadea (>1,200 species). The heterotrophic stramenopiles are more diverse in forms, ranging from fungi-like organisms such as the Hyphochytrea, Oomycota and Labyrinthulea, to various kinds of protozoa such as the flagellates Opalinata and Bicosoecida.
Alveolata contains three of the most well-known groups of protists: Apicomplexa, a parasitic group with species harmful to humans and animals; Dinoflagellata, an ecologically important group as a main component of the marine microplankton and a main cause of algal blooms; and Ciliophora (4,500 species), the extremely diverse and well-studied group of mostly free-living heterotrophs known as ciliates.
Rhizaria is a morphologically diverse lineage mostly comprising heterotrophic amoebae, flagellates and amoeboflagellates, and some unusual algae (Chlorarachniophyta) and spore-forming parasites. The most familiar rhizarians are Foraminifera and Radiolaria, groups of large and abundant marine amoebae, many of them macroscopic. Much of the rhizarian diversity lies within the phylum Cercozoa, filled with free-living flagellates which usually have pseudopodia, as well as Phaeodaria, a group previously considered radiolarian. Other groups comprise various amoebae like Vampyrellida or are important parasites like Phytomyxea, Paramyxida or Haplosporida.
Haptista — includes the Haptophyta algae and the heterotrophic Centrohelida, which are "heliozoan"-type amoebae.
Cryptista — closely related to Archaeplastida, it includes the Cryptophyta algae, with a plastid of red algal origin, and two obscure relatives with two flagella, katablepharids and Palpitomonas.
Discoba — includes many lineages previously grouped under the paraphyletic "Excavata": the Jakobida, flagellates with bacterial-like mitochondrial genomes; Tsukubamonas, a free-living flagellate; and the Discicristata clade, which unites well-known phyla Heterolobosea and Euglenozoa. Heterolobosea includes amoebae, flagellates and amoeboflagellates with complex life cycles, and the unusual Acrasida, a group of slime molds. Euglenozoa encompasses a clade of algae with chloroplasts of green algal origin and many groups of anaerobic, parasitic or free-living heterotrophs.
Metamonada — a clade of completely anaerobic protozoa, primarily flagellates. Some are gut symbionts of animals, others are free-living (for example, Paratrimastix pyriformis), and others are well-known parasites (for example, Giardia lamblia).
Amorphea — unites two huge clades:
Amoebozoa (2,400 species) is a large group of heterotrophic protists, mostly amoebae. Many lineages are slime molds that produce spore-releasing fruiting bodies, such as Myxogastria, Dictyostelia and Protosporangiida, and are often studied by mycologists. Within the non-fruiting amoebae, the Tubulinea contain many naked amoebae (such as Amoeba itself) and a well-studied order of testate amoebae known as Arcellinida. Other non-fruiting amoebozoans are Variosea, Discosea and Archamoebae.
Obazoa includes the two kingdoms Metazoa (animals) and Fungi, and their closest protist relatives inside a clade known as Opisthokonta. The opisthokont protists are Nucleariida, Ichthyosporea, Pluriformea, Filasterea, Choanoflagellata and the elusive Tunicaraptor (1 species). They are flagellated or amoeboid heterotrophs of vital importance in the search for the genes that allow animal multicellularity. Sister groups to Opisthokonta are Apusomonadida (28 species) and Breviatea (4 species).
Many smaller lineages do not belong to any of these supergroups, and are usually poorly known groups with limited data, often referred to as 'orphan groups'. Some, such as the CRuMs clade, Malawimonadida and Ancyromonadida, appear to be related to Amorphea. Others, like Hemimastigophora (10 species) and Provora (7 species), appear to be related to or within Diaphoretickes, a clade that unites SAR, Archaeplastida, Haptista and Cryptista. A mysterious protist species, Meteora sporadica, is more closely related to the latter two of these orphan groups.
Although the root of the tree is still unresolved, one possible topology of the eukaryotic tree of life arranges these supergroups and orphan lineages into a single cladogram (cladogram not shown).
History
Early concepts
From the start of the 18th century, the popular term "infusion animals" (later infusoria) referred to protists, bacteria and small invertebrate animals. In the mid-18th century, while Swedish scientist Carl von Linnaeus largely ignored the protists, his Danish contemporary Otto Friedrich Müller was the first to introduce protists to the binomial nomenclature system.
In the early 19th century, German naturalist Georg August Goldfuss introduced Protozoa (meaning 'early animals') as a class within Kingdom Animalia, to refer to four very different groups: infusoria (ciliates), corals, phytozoa (such as Cryptomonas) and jellyfish. Later, in 1845, Carl Theodor von Siebold was the first to establish Protozoa as a phylum of exclusively unicellular animals consisting of two classes: Infusoria (ciliates) and Rhizopoda (amoebae, foraminifera). Other scientists did not consider all of them part of the animal kingdom, and by the middle of the century they were regarded within the groupings of Protozoa (early animals), Protophyta (early plants), Phytozoa (animal-like plants) and Bacteria (mostly considered plants). Microscopic organisms were increasingly constrained in the plant/animal dichotomy. In 1858, the palaeontologist Richard Owen was the first to define Protozoa as a separate kingdom of eukaryotic organisms, with "nucleated cells" and the "common organic characters" of plants and animals, although he also included sponges within protozoa.
Origin of the protist kingdom
In 1860, British naturalist John Hogg proposed Protoctista (meaning 'first-created beings') as the name for a fourth kingdom of nature (the other kingdoms being Linnaeus' plant, animal and mineral) which comprised all the lower, primitive organisms, including protophyta, protozoa and sponges, at the merging bases of the plant and animal kingdoms.
In 1866 the 'father of protistology', German scientist Ernst Haeckel, addressed the problem of classifying all these organisms as a mixture of animal and vegetable characters, and proposed Protistenreich (Kingdom Protista) as the third kingdom of life, comprising primitive forms that were "neither animals nor plants". He grouped both bacteria and eukaryotes, both unicellular and multicellular organisms, as Protista. He retained the Infusoria in the animal kingdom, until German zoologist Otto Bütschli demonstrated that they were unicellular. At first, he included sponges and fungi, but in later publications he explicitly restricted Protista to predominantly unicellular organisms or colonies incapable of forming tissues. He clearly separated Protista from true animals on the basis that the defining character of protists was the absence of sexual reproduction, while the defining character of animals was the blastula stage of animal development. He also returned the terms protozoa and protophyta as subkingdoms of Protista.
Bütschli considered the kingdom to be too polyphyletic and rejected the inclusion of bacteria. He fragmented the kingdom into protozoa (only nucleated, unicellular animal-like organisms), while bacteria and the protophyta were a separate grouping. This strengthened the old dichotomy of protozoa/protophyta from German scientist Carl Theodor von Siebold, and the German naturalists asserted this view over the worldwide scientific community by the turn of the century. However, British biologist C. Clifford Dobell in 1911 brought attention to the fact that protists functioned very differently compared to the animal and vegetable cellular organization, and gave importance to Protista as a group with a different organization that he called "acellularity", shifting away from the dogma of German cell theory. He coined the term protistology and solidified it as a branch of study independent from zoology and botany.
In 1938, American biologist Herbert Copeland resurrected Hogg's label, arguing that Haeckel's term Protista included anucleated microbes such as bacteria, which the term Protoctista (meaning "first established beings") did not. Under his four-kingdom classification (Monera, Protoctista, Plantae, Animalia), the protists and bacteria were finally split apart, recognizing the difference between anucleate (prokaryotic) and nucleate (eukaryotic) organisms. To firmly separate protists from plants, he followed Haeckel's blastular definition of true animals, and proposed defining true plants as those with chlorophyll a and b, carotene, xanthophyll and production of starch. He also was the first to recognize that the unicellular/multicellular dichotomy was invalid. Still, he kept fungi within Protoctista, together with red algae, brown algae and protozoans. This classification was the basis for Whittaker's later definition of Fungi, Animalia, Plantae and Protista as the four kingdoms of life.
In the popular five-kingdom scheme published by American plant ecologist Robert Whittaker in 1969, Protista was defined as eukaryotic "organisms which are unicellular or unicellular-colonial and which form no tissues". Just as the prokaryotic/eukaryotic division was becoming mainstream, Whittaker, after a decade from Copeland's system, recognized the fundamental division of life between the prokaryotic Monera and the eukaryotic kingdoms: Animalia (ingestion), Plantae (photosynthesis), Fungi (absorption) and the remaining Protista.
In the five-kingdom system of American evolutionary biologist Lynn Margulis, the term "protist" was reserved for microscopic organisms, while the more inclusive kingdom Protoctista (or protoctists) included certain large multicellular eukaryotes, such as kelp, red algae, and slime molds. Some use the term protist interchangeably with Margulis' protoctist, to encompass both single-celled and multicellular eukaryotes, including those that form specialized tissues but do not fit into any of the other traditional kingdoms.
Phylogenetics and modern concepts
The five-kingdom model remained the accepted classification until the development of molecular phylogenetics in the late 20th century, when it became apparent that protists are a paraphyletic group from which animals, fungi and plants evolved, and the three-domain system (Bacteria, Archaea, Eukarya) became prevalent. Today, protists are not treated as a formal taxon, but the term is commonly used for convenience in two ways:
Phylogenetic definition: protists are a paraphyletic group. A protist is any eukaryote that is not an animal, land plant or fungus. This excludes many groups formerly placed in Protista, such as the unicellular fungal Microsporidia, Chytridiomycetes and yeasts, and the non-unicellular myxozoans, which are animals.
Functional definition: protists are essentially those eukaryotes that are never multicellular, that either exist as independent cells or, if they occur in colonies, do not show differentiation into tissues. This definition is common in popular usage, but it excludes the variety of non-colonial multicellularity types that protists exhibit, such as aggregative (e.g. choanoflagellates) or complex multicellularity (e.g. brown algae).
Kingdoms Protozoa and Chromista
There is, however, one classification of protists based on traditional ranks that lasted into the 21st century. The British protozoologist Thomas Cavalier-Smith, from 1998 onward, developed a six-kingdom model: Bacteria, Animalia, Plantae, Fungi, Protozoa and Chromista. In his scheme, paraphyletic groups take preference over clades: both protist kingdoms, Protozoa and Chromista, contain paraphyletic phyla such as Apusozoa, Eolouka or Opisthosporidia. Additionally, red and green algae are considered true plants, while the fungal groups Microsporidia, Rozellida and Aphelida are considered protozoans under the phylum Opisthosporidia. This scheme endured until 2021, the year of his last publication.
Diversity
Species diversity
According to molecular data, protists dominate eukaryotic diversity, accounting for a vast majority of environmental DNA sequences or operational taxonomic units (OTUs). However, their species diversity is severely underestimated by traditional methods that differentiate species based on morphological characteristics. The number of described protistan species is very low (ranging from 26,000 to 74,400 as of 2012) in comparison to the diversity of plants, animals and fungi, which are historically and biologically well-known and studied. The predicted number of species also varies greatly, ranging from 1.4×10⁵ to 1.6×10⁶, and in several groups the number of predicted species is arbitrarily doubled. Most of these predictions are highly subjective.
Molecular techniques such as DNA barcoding are being used to compensate for the lack of morphological diagnoses, but this has revealed an unknown vast diversity of protists that is difficult to accurately process because of the exceedingly large genetic divergence between the different protistan groups. Several different molecular markers need to be used to survey the vast protistan diversity, because there is no universal marker that can be applied to all lineages.
Biomass
Protists make up a large portion of the biomass in both marine and terrestrial ecosystems. It has been estimated that protists account for 4 gigatons (Gt) of biomass across the entire planet. This amount is less than 1% of all biomass, but is still double the amount estimated for all animals (2 Gt). Together, protists, animals, archaea (7 Gt) and fungi (12 Gt) account for less than 10% of the total biomass of the planet, because plants (450 Gt) and bacteria (70 Gt) make up the remaining roughly 80% and 13%, respectively.
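The shares quoted above can be checked directly from the gigaton estimates. The short Python sketch below is purely illustrative: the dictionary grouping and the rounding are editorial choices, and the total covers only the groups named in this paragraph.

```python
# Recompute the biomass shares from the estimates quoted in the text
# (gigatons of carbon). Grouping and rounding are illustrative choices.
biomass_gt = {
    "plants": 450,
    "bacteria": 70,
    "fungi": 12,
    "archaea": 7,
    "protists": 4,
    "animals": 2,
}

total = sum(biomass_gt.values())  # 545 Gt across the groups listed here

for group, gt in biomass_gt.items():
    share = 100 * gt / total
    print(f"{group:>9}: {gt:>4} Gt ({share:.1f}%)")

# Plants dominate at ~83% and bacteria at ~13%; protists are under 1% of
# the total, yet still double the estimate for all animals.
```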
Ecology
Protists are highly abundant and diverse in all types of ecosystems, especially free-living (i.e. non-parasitic) groups. An unexpectedly enormous, taxonomically undescribed diversity of eukaryotic microbes is detected everywhere in the form of environmental DNA or RNA. The richest protist communities appear in soil, followed by ocean and freshwater habitats.
Phagotrophic protists (consumers) are the most diverse functional group in all ecosystems, with three main taxonomic groups of phagotrophs: Rhizaria (mainly Cercozoa in freshwater and soil habitats, and Radiolaria in oceans), ciliates (most abundant in freshwater and second most abundant in soil) and non-photosynthetic stramenopiles (third most represented overall, higher in soil than in oceans). Phototrophic protists (producers) appear in lower proportions, probably constrained by intense predation. They exist in similar abundance in both oceans and soil. They are mostly dinophytes in oceans, chrysophytes in freshwater, and Archaeplastida in soil.
Marine
Marine protists are highly diverse, have a fundamental impact on biogeochemical cycles (particularly, the carbon cycle) and are at the base of the marine trophic networks as part of the plankton.
Phototrophic marine protists located in the photic zone as phytoplankton are vital primary producers in the oceanic systems. They fix as much carbon as all terrestrial plants together. The smallest fractions, the picoplankton (<2 μm) and nanoplankton (2–20 μm), are dominated by several different algae (prymnesiophytes, pelagophytes, prasinophytes); fractions larger than 5 μm are instead dominated by diatoms and dinoflagellates. The heterotrophic fraction of marine picoplankton encompasses primarily early-branching stramenopiles (e.g. bicosoecids and labyrinthulomycetes), as well as alveolates, ciliates and radiolarians; protists of lower frequency include cercozoans and cryptophytes.
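The plankton size fractions named above follow fixed diameter cut-offs, which a small helper function can make explicit. This is a minimal sketch under stated assumptions: the function name and boundary handling are invented for illustration, and the "microplankton or larger" label for cells above 20 μm follows the conventional size-class scheme rather than anything stated in this article.

```python
# Illustrative classifier for the plankton size fractions described above.
# Cut-offs: picoplankton < 2 μm, nanoplankton 2–20 μm (per the text);
# anything larger is lumped together here for simplicity.
def plankton_size_fraction(cell_diameter_um: float) -> str:
    """Classify a plankton cell by its diameter in micrometres."""
    if cell_diameter_um < 2:
        return "picoplankton"
    elif cell_diameter_um <= 20:
        return "nanoplankton"
    else:
        return "microplankton or larger"

print(plankton_size_fraction(1.2))   # picoplankton (e.g. many prasinophytes)
print(plankton_size_fraction(8.0))   # nanoplankton
print(plankton_size_fraction(60.0))  # microplankton or larger (e.g. many diatoms)
```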
Mixotrophic marine protists, though not well researched, are abundantly and ubiquitously present in the global oceans, across a wide range of marine habitats. In metabarcoding analyses, they constitute more than 12% of the environmental sequences. They are an important and underestimated source of carbon in eutrophic and oligotrophic habitats. Their abundance varies seasonally. Planktonic protists are classified into various functional groups or 'mixotypes' that present different biogeographies:
Constitutive mixotrophs, also called 'phytoplankton that eat', have the innate ability to photosynthesize. They have diverse feeding behaviors: some require phototrophy, others phagotrophy, and others are obligate mixotrophs. They are responsible for harmful algal blooms. They dominate the eukaryotic microbial biomass in the photic zone, in eutrophic and oligotrophic waters across all climate zones, even in non-bloom conditions. They account for significant, often dominant predation of bacteria.
Non-constitutive mixotrophs acquire the ability to photosynthesize by stealing chloroplasts from their prey. They can be divided into two groups: generalists, which can use chloroplasts stolen from a variety of prey (e.g. oligotrich ciliates), and specialists, which have developed the need to acquire chloroplasts from only a few specific prey. The specialists are further divided into two types: plastidic, those which contain differentiated plastids (e.g. Mesodinium, Dinophysis), and endosymbiotic, those which contain endosymbionts (e.g. mixotrophic Rhizaria such as Foraminifera and Radiolaria, and dinoflagellates like Noctiluca). Both plastidic and generalist non-constitutive mixotrophs have similar biogeographies and low abundance, mostly found in eutrophic coastal waters. Generalist ciliates can account for up to 50% of ciliate communities in the photic zone. The endosymbiotic mixotrophs are the most abundant non-constitutive type.
Freshwater
Freshwater planktonic protist communities are characterized by a higher "beta diversity" (i.e. highly heterogeneous between samples) than soil and marine plankton communities. The high diversity may result from the hydrological dynamics of recruiting organisms from different habitats through extreme floods. The main freshwater producers (chrysophytes, cryptophytes and dinophytes) behave alternately as consumers (mixotrophs). At the same time, strict consumers (non-photosynthetic) are less abundant in freshwater, implying that the consumer role is partly taken by these mixotrophs.
Soil
Soil protist communities are ecologically the richest. This may be due to the complex and highly dynamic distribution of water in the sediment, which creates extremely heterogeneous environmental conditions. The constantly changing environment promotes the activity of only one part of the community at a time, while the rest remains inactive; this phenomenon promotes high microbial diversity in prokaryotes as well as protists. Only a small fraction of the detected diversity of soil-dwelling protists has been described (8.1% as of 2017). Soil protists are also morphologically and functionally diverse, with four major categories:
Photoautotrophic soil protists, or algae, are as abundant as their marine counterparts. Given the importance of marine algae, soil algae may provide a larger contribution to the global carbon cycle than previously thought, but the magnitude of their carbon fixation has yet to be quantified. Most soil algae belong to the supergroups Stramenopiles (diatoms, Xanthophyceae and Eustigmatophyceae) and Archaeplastida (Chlorophyceae and Trebouxiophyceae). Environmental DNA from dinoflagellates and haptophytes is also present in soil, but no living forms have been observed.
Fungus-like protists are abundantly present in soil. Most environmental sequences belong to the Oomycetes (Stramenopiles), an osmotrophic and saprotrophic group that contains free-living species as well as parasites of other protists, fungi, plants and animals. Another important group in soil are the slime molds (found in Amoebozoa, Opisthokonta, Rhizaria and Heterolobosea), which reproduce by forming fruiting bodies known as sporocarps (originating from a single cell) and sorocarps (from aggregations of cells).
Phagotrophic protists are abundant and essential in soil ecosystems. As bacterial grazers, they have a significant role in the food web: they excrete nitrogen in the form of ammonium (NH₄⁺), making it available to plants and other microbes. Many soil protists are also mycophagous, and facultative (i.e. non-obligate) mycophagy is a widespread evolutionary feeding mode among soil protozoa. Amoeboflagellates like the glissomonads and cercomonads (in Rhizaria) are among the most abundant soil protists: they possess both flagella and pseudopodia, a morphological versatility well suited for foraging between soil particles. Testate amoebae (e.g. arcellinids and euglyphids) have shells that protect against desiccation and predation, and their contribution to the silica cycle through the biomineralization of shells is as important as that of forest trees.
Parasitic soil protists (in Apicomplexa) are diverse, ubiquitous and have an important role as parasites of soil-dwelling invertebrate animals. In Neotropical forests, environmental DNA from the apicomplexan gregarines dominates protist diversity.
Parasitic
Parasitic protists represent around 15–20% of all environmental DNA in marine and soil systems, but only around 5% in freshwater systems, where chytrid fungi likely fill that ecological niche. In oceanic systems, parasitoids (i.e. those which kill their hosts, e.g. Syndiniales) are more abundant. In soil ecosystems, true parasites (i.e. those which do not kill their hosts) are primarily animal-hosted Apicomplexa (Alveolata) and plant-hosted oomycetes (Stramenopiles) and plasmodiophorids (Rhizaria). In freshwater ecosystems, parasitoids are mainly Perkinsea and Syndiniales (Alveolata), as well as the fungal Chytridiomycota. True parasites in freshwater are mostly oomycetes, Apicomplexa and Ichthyosporea.
Some protists are significant parasites of animals (e.g. five species of the parasitic genus Plasmodium cause malaria in humans, and many others cause similar diseases in other vertebrates), of plants (the oomycete Phytophthora infestans causes late blight in potatoes), or even of other protists.
Around 100 protist species can infect humans. Two papers from 2013 have proposed virotherapy, the use of viruses to treat infections caused by protozoa.
Researchers from the Agricultural Research Service are taking advantage of protists as pathogens to control red imported fire ant (Solenopsis invicta) populations in Argentina. Spore-producing protists such as Kneallhazia solenopsae (now recognized as a sister clade or the closest relative to the fungus kingdom) can reduce red fire ant populations by 53–100%. Researchers have also been able to infect phorid fly parasitoids of the ant with the protist without harming the flies. This turns the flies into a vector that can spread the pathogenic protist between red fire ant colonies.
Biology
Physiological adaptations
While, in general, protists are typical eukaryotic cells and follow the same principles of physiology and biochemistry described for those cells within the "higher" eukaryotes (animals, fungi or plants), they have evolved a variety of unique physiological adaptations that do not appear in those eukaryotes.
Osmoregulation. Freshwater protists without cell walls regulate their osmotic pressure through contractile vacuoles, specialized organelles that periodically excrete fluid high in potassium and sodium through a cycle of diastole and systole. When cells are placed in a medium of different salinity, the cycle pauses until the cell adapts.
Energetic adaptations. The last eukaryotic common ancestor was aerobic, bearing mitochondria for oxidative metabolism. Many lineages of free-living and parasitic protists have independently evolved and adapted to inhabit anaerobic or microaerophilic habitats, by modifying the early mitochondria into hydrogenosomes, organelles that generate ATP anaerobically through fermentation of pyruvate. In a parallel manner, in the microaerophilic trypanosomatid protists, the fermentative glycosome evolved from the peroxisome.
Sensory adaptations. Many flagellates and probably all motile algae exhibit a positive phototaxis (i.e. they swim or glide toward a source of light). For this purpose, they exhibit three kinds of photoreceptors or "eyespots": (1) receptors with light antennae, found in many green algae, dinoflagellates and cryptophytes; (2) receptors with opaque screens; and (3) complex ocelloids with intracellular lenses, found in one group of predatory dinoflagellates, the Warnowiaceae. Additionally, some ciliates orient themselves in relation to the Earth's gravitational field while moving (geotaxis), and others swim in relation to the concentration of dissolved oxygen in the water.
Endosymbiosis. Protists have an accentuated tendency to include endosymbionts in their cells, and these have produced new physiological opportunities. Some associations are more permanent, such as Paramecium bursaria and its endosymbiont Chlorella; others are more transient. Many protists contain captured chloroplasts, chloroplast-mitochondrial complexes, and even eyespots from algae. The xenosomes are bacterial endosymbionts found in ciliates, sometimes with a methanogenic role inside anaerobic ciliates.
Sexual reproduction
Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock. Oxidative stress, which leads to DNA damage, also appears to be an important factor in the induction of sex in protists.
Eukaryotes emerged in evolution more than 1.5 billion years ago. The earliest eukaryotes were protists. Although sexual reproduction is widespread among multicellular eukaryotes, until recently it seemed unlikely that sex could be a primordial and fundamental characteristic of eukaryotes. The main reason for this view was that sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these "early-branching" protists that were thought to predate the emergence of meiosis and sex (such as Giardia lamblia and Trichomonas vaginalis) are now known to descend from ancestors capable of meiosis and meiotic recombination, because they have a core set of meiotic genes that are present in sexual eukaryotes. Most of these meiotic genes were likely present in the common ancestor of all eukaryotes, which was likely capable of facultative (non-obligate) sexual reproduction.
This view was further supported by a 2011 study on amoebae. Amoebae have been regarded as asexual organisms, but the study describes evidence that most amoeboid lineages are ancestrally sexual, and that the majority of asexual groups likely arose recently and independently. Even in the early 20th century, some researchers interpreted phenomena related to chromidia (chromatin granules free in the cytoplasm) in amoebae as sexual reproduction.
Sex in pathogenic protists
Some commonly found protist pathogens such as Toxoplasma gondii are capable of infecting and undergoing asexual reproduction in a wide variety of animals – which act as secondary or intermediate hosts – but can undergo sexual reproduction only in the primary or definitive host (for example, felids such as domestic cats in this case).
Some species, for example Plasmodium falciparum, have extremely complex life cycles that involve multiple forms of the organism, some of which reproduce sexually and others asexually. However, it is unclear how frequently sexual reproduction causes genetic exchange between different strains of Plasmodium in nature and most populations of parasitic protists may be clonal lines that rarely exchange genes with other members of their species.
The pathogenic parasitic protists of the genus Leishmania have been shown to be capable of a sexual cycle in the invertebrate vector, likened to the meiosis undertaken in the trypanosomes.
Fossil record
Mesoproterozoic
By definition, all eukaryotes before the existence of plants, animals and fungi are considered protists. For that reason, this section contains information about the deep ancestry of all eukaryotes.
All living eukaryotes, including protists, evolved from the last eukaryotic common ancestor (LECA). Descendants of this ancestor are known as "crown-group" or "modern" eukaryotes. Molecular clocks suggest that LECA originated between 1200 and more than 1800 million years ago (Ma). Based on all molecular predictions, modern eukaryotes reached morphological and ecological diversity before 1000 Ma in the form of multicellular algae capable of sexual reproduction, and unicellular protists capable of phagocytosis and locomotion. However, the fossil record of modern eukaryotes is very scarce around this period, which contradicts the predicted diversity.
Instead, the fossil record of this period contains "stem-group eukaryotes". These fossils cannot be assigned to any known crown group, so they probably belong to extinct lineages that originated before LECA. They appear continuously throughout the Mesoproterozoic fossil record (1650–1000 Ma). They present defining eukaryote characteristics such as complex cell wall ornamentation and cell membrane protrusions, which require a flexible endomembrane system. However, they had a major distinction from crown eukaryotes: the composition of their cell membrane. Unlike crown eukaryotes, which produce "crown sterols" for their cell membranes (e.g. cholesterol and ergosterol), stem eukaryotes produced "protosterols", which appear earlier in the biosynthetic pathway.
Crown sterols, while metabolically more expensive, may have granted several evolutionary advantages for LECA's descendants. Specific unsaturation patterns in crown sterols protect against osmotic shock during desiccation and rehydration cycles. Crown sterols can also receive ethyl groups, thus enhancing cohesion between lipids and adapting cells against extreme cold and heat. Moreover, the additional steps in the biosynthetic pathway allow cells to regulate the proportion of different sterols in their membranes, in turn allowing for a wider habitable temperature range and unique mechanisms such as asymmetric cell division or membrane repair under exposure to UV light. A more speculative role of these sterols is their protection against the Proterozoic changing oxygen levels. It is theorized that all of these sterol-based mechanisms allowed LECA's descendants to live as extremophiles of their time, diversifying into ecological niches that experienced cycles of desiccation and rehydration, daily extremes of high and low temperatures, and elevated UV radiation (such as mudflats, rivers, agitated shorelines and subaerial soil).
In contrast, these mechanisms were absent in stem-group eukaryotes, which were only capable of producing protosterols. Instead, these protosterol-based life forms occupied open marine waters. They were facultative anaerobes that thrived in Mesoproterozoic waters, which at the time were low on oxygen. Eventually, during the Tonian period (Neoproterozoic era), oxygen levels increased and the crown eukaryotes were able to expand into open marine environments thanks to their preference for more oxygenated habitats. Stem eukaryotes may have been driven to extinction as a result of this competition. Additionally, their protosterol membranes may have posed a disadvantage during the cold of the Cryogenian "Snowball Earth" glaciations and the extreme global heat that came afterwards.
Neoproterozoic
Modern eukaryotes began to appear abundantly in the Tonian period (1000–720 Ma), fueled by the proliferation of red algae. The oldest fossils assigned to modern eukaryotes belong to two photosynthetic protists: the multicellular red alga Bangiomorpha (from 1050 Ma), and the chlorophyte green alga Proterocladus (from 1000 Ma). Abundant fossils of heterotrophic protists appear later, around 900 Ma, with the emergence of fungi. For example, the oldest fossils of Amoebozoa are vase-shaped microfossils resembling modern testate amoebae, found in 800 million-year-old rocks. Radiolarian shells are found abundantly in the fossil record after the Cambrian period (~500 Ma), but more recent paleontological studies are beginning to interpret some Precambrian fossils as the earliest evidence of radiolarians.
See also
Evolution of sexual reproduction
Protist locomotion
Footnotes
References
Bibliography
General
Hausmann, K., N. Hulsmann, R. Radek. Protistology. Schweizerbart'sche Verlagsbuchhandlung, Stuttgart, 2003.
Margulis, L., J.O. Corliss, M. Melkonian, D.J. Chapman. Handbook of Protoctista. Jones and Bartlett Publishers, Boston, 1990.
Margulis, L., K.V. Schwartz. Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, 3rd ed. New York: W.H. Freeman, 1998.
Margulis, L., L. Olendzenski, H.I. McKhann. Illustrated Glossary of the Protoctista, 1993.
Margulis, L., M.J. Chapman. Kingdoms and Domains: An Illustrated Guide to the Phyla of Life on Earth. Amsterdam: Academic Press/Elsevier, 2009.
Schaechter, M. Eukaryotic microbes. Amsterdam, Academic Press, 2012.
Physiology, ecology and paleontology
Fontaneto, D. Biogeography of Microscopic Organisms. Is Everything Small Everywhere? Cambridge University Press, Cambridge, 2011.
Moore, R. C., and other editors. Treatise on Invertebrate Paleontology. Protista, part B (vol. 1, Charophyta, vol. 2, Chrysomonadida, Coccolithophorida, Charophyta, Diatomacea & Pyrrhophyta), part C (Sarcodina, Chiefly "Thecamoebians" and Foraminiferida) and part D (Chiefly Radiolaria and Tintinnina). Boulder, Colorado: Geological Society of America; & Lawrence, Kansas: University of Kansas Press.
External links
UniEuk Taxonomy App
Tree of Life: Eukaryotes
Tsukii, Y. (1996). Protist Information Server (database of protist images). Laboratory of Biology, Hosei University. Protist Information Server. Updated: March 22, 2016.
Obsolete eukaryote taxa
Paraphyletic groups
Environmental sociology

Environmental sociology is the study of interactions between societies and their natural environment. The field emphasizes the social factors that influence environmental resource management and cause environmental issues, the processes by which these environmental problems are socially constructed and defined as social issues, and societal responses to these problems.
Environmental sociology emerged as a subfield of sociology in the late 1970s in response to the emergence of the environmental movement in the 1960s. It represents a relatively new area of inquiry focusing on an extension of earlier sociology through inclusion of physical context as related to social factors.
Definition
Environmental sociology is typically defined as the sociological study of socio-environmental interactions, although this definition immediately presents the problem of integrating human cultures with the rest of the environment. Different aspects of human interaction with the natural environment are studied by environmental sociologists including population and demography, organizations and institutions, science and technology, health and illness, consumption and sustainability practices, culture and identity, and social inequality and environmental justice. Although the focus of the field is the relationship between society and environment in general, environmental sociologists typically place special emphasis on studying the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. In addition, considerable attention is paid to the social processes by which certain environmental conditions become socially defined as problems. Most research in environmental sociology examines contemporary societies.
History
Environmental sociology emerged as a coherent subfield of inquiry after the environmental movement of the 1960s and early 1970s. The works of William R. Catton, Jr. and Riley Dunlap, among others, challenged the constricted anthropocentrism of classical sociology. In the late 1970s, they called for a new holistic, or systems, perspective, which led to a marked shift in the field's focus. Since the 1970s, general sociology has noticeably transformed to include environmental forces in social explanations. Environmental sociology has now solidified as a respected, interdisciplinary field of study in academia.
Concepts
Existential dualism
The duality of the human condition rests on cultural uniqueness and evolutionary traits. From one perspective, humans are embedded in the ecosphere and co-evolved alongside other species. Humans share the same basic ecological dependencies as other inhabitants of nature. From the other perspective, humans are distinguished from other species because of their innovative capacities, distinct cultures and varied institutions. Human creations have the power to independently manipulate, destroy, and transcend the limits of the natural environment.
According to Buttel (2004), there are five major traditions in environmental sociology today: the treadmill of production and other eco-Marxisms, ecological modernization and other sociologies of environmental reform, cultural-environmental sociologies, neo-Malthusianisms, and the new ecological paradigm. In practice, this means five different theories of what to blame for environmental degradation, i.e., what to research or consider as important. These ideas are listed below in the order in which they were invented; later ideas built on earlier ones, and also contradicted them.
Neo-Malthusianism
Works such as Hardin's "Tragedy of the Commons" (1968) reformulated Malthusian thought about abstract population increases causing famines into a model of individual selfishness at larger scales causing degradation of common-pool resources such as the air, water, the oceans, or general environmental conditions. Hardin offered privatization of resources or government regulation as solutions to environmental degradation caused by tragedy-of-the-commons conditions. Many other sociologists shared this view of solutions well into the 1970s (see Ophuls). This view has since drawn many critiques, particularly from political scientist Elinor Ostrom and economists Amartya Sen and Ester Boserup.
Even though much of mainstream journalism treats Malthusianism as the only view of environmentalism, most sociologists would disagree with it, since the social organization of resource extraction has been demonstrated to cause environmental problems more than abstract population or selfishness per se. For example, Ostrom argues in her book Governing the Commons: The Evolution of Institutions for Collective Action (1990) that, instead of self-interest always causing degradation, it can sometimes motivate people to take care of their common property resources. To do this they must change the basic organizational rules of resource use. Her research documents sustainable resource management systems around common-pool resources that have lasted for centuries in some areas of the world.
Amartya Sen argues in his book Poverty and Famines: An Essay on Entitlement and Deprivation (1981) that population expansion fails to cause famines or degradation, as Malthusians or neo-Malthusians argue. Instead, in documented cases, a lack of political entitlement to resources that exist in abundance causes famines in some populations. He documents how famines can occur even in the midst of plenty or in the context of low populations. He argues that famines (and environmental degradation) occur only in non-functioning democracies or unrepresentative states.
Ester Boserup argues in her book The Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure (1965), from inductive, empirical case analysis, that Malthus's more deductive conception of a presumed one-to-one relationship between agricultural scale and population is actually reversed. Instead of agricultural technology and scale determining and limiting population, as Malthus attempted to argue, Boserup found the world full of cases of the direct opposite: population change expands agricultural methods.
Eco-Marxist scholar Allan Schnaiberg (below) argues against Malthusianism on the grounds that, under larger capitalist economies, degradation moved from being localized and population-based to being organizationally caused by capitalist political economies. He gives the example of the organized degradation of rainforest areas, from which states and capitalists push people off the land before it is degraded by organizational means. Thus, many authors are critical of Malthusianism, from sociologists (Schnaiberg) to economists (Sen and Boserup) to political scientists (Ostrom), and all focus on how a country's social organization of extraction can degrade the environment independently of abstract population.
New Ecological Paradigm
In the 1970s, the New Ecological Paradigm (NEP) conception critiqued the claimed lack of human-environmental focus among the classical sociologists and in the sociological priorities their followers created. This orientation was critiqued as the Human Exemptionalism Paradigm (HEP). The HEP viewpoint claims that human-environmental relationships are unimportant sociologically because humans are 'exempt' from environmental forces via cultural change. This view was shaped by the leading Western worldview of the time and by the desire for sociology to establish itself as an independent discipline against the then-popular racist biological environmental determinism, in which environment was everything. In the HEP view, human dominance was felt to be justified by the uniqueness of culture, argued to be more adaptable than biological traits. Furthermore, culture has the capacity to accumulate and innovate, making it capable of solving all natural problems. Therefore, as humans were not conceived of as governed by natural conditions, they were felt to have complete control of their own destiny. Any potential limitation posed by the natural world was felt to be surpassable using human ingenuity. Research proceeded accordingly without environmental analysis.
In the 1970s, sociological scholars Riley Dunlap and William R. Catton, Jr. began recognizing the limits of what would be termed the Human Exemptionalism Paradigm. Catton and Dunlap (1978) suggested a new perspective that took environmental variables into full account. They coined a new theoretical outlook for sociology, the New Ecological Paradigm, with assumptions contrary to the HEP.
The NEP recognizes the innovative capacity of humans, but says that humans are still ecologically interdependent as with other species. The NEP notes the power of social and cultural forces but does not profess social determinism. Instead, humans are impacted by the cause, effect, and feedback loops of ecosystems. The Earth has a finite level of natural resources and waste repositories. Thus, the biophysical environment can impose constraints on human activity. They discussed a few harbingers of this NEP in 'hybridized' theorizing about topics that were neither exclusively social nor environmental explanations of environmental conditions. It was additionally a critique of Malthusian views of the 1960s and 1970s.
Dunlap and Catton's work immediately received a critique from Buttel, who argued to the contrary that classical sociological foundations could be found for environmental sociology, particularly in Weber's work on ancient "agrarian civilizations" and in Durkheim's view of the division of labor as built on a material premise of specialization in response to material scarcity. This environmental aspect of Durkheim has been discussed by Schnaiberg (1971) as well.
Treadmill of Production Theory
The Treadmill of Production is a theory coined and popularized by Schnaiberg to account for the increase in U.S. environmental degradation after World War II. At its simplest, the theory states that the more products or commodities are created, the more resources are used and the higher the impact. The treadmill is a metaphor for being caught in a cycle of continuous growth that never stops, demanding more resources and as a result causing more environmental damage.
Eco-Marxism
In the middle of the HEP/NEP debate, neo-Marxist ideas of conflict sociology were applied to environmental conflicts. Some sociologists wanted to stretch Marxist ideas of social conflict to analyze environmental social movements from a Marxist materialist framework, instead of interpreting them as a cultural "New Social Movement" separate from material concerns. "Eco-Marxism" was thus developed by applying the neo-Marxist conflict-theory concept of the relative autonomy of the state to environmental conflict.
Two people following this school were James O'Connor (The Fiscal Crisis of the State, 1973) and later Allan Schnaiberg.
Later, a different trend developed in eco-Marxism via the attention brought to the importance of metabolic analysis in Marx's thought by John Bellamy Foster. Contrary to previous assumptions that classical theorists in sociology had all fallen within a Human Exemptionalist Paradigm, Foster argued that Marx's materialism led him to theorize labor as the metabolic process between humanity and the rest of nature. In the Promethean interpretations of Marx that Foster critiques, there was an assumption that his analysis was very similar to the anthropocentric views critiqued by early environmental sociologists. Instead, Foster argued that Marx himself was concerned about the metabolic rift generated by capitalist society's social metabolism, particularly in industrial agriculture: Marx had identified an "irreparable rift in the interdependent process of social metabolism", created by capitalist agriculture, that was destroying the productivity of the land and creating wastes in urban sites that failed to be reintegrated into the land, thus simultaneously leading toward the destruction of urban workers' health. Reviewing the contribution of this thread of eco-Marxism to current environmental sociology, Pellow and Brehm conclude, "The metabolic rift is a productive development in the field because it connects current research to classical theory and links sociology with an interdisciplinary array of scientific literatures focused on ecosystem dynamics."
Foster emphasized that his argument presupposed the "magisterial work" of Paul Burkett, who had developed a closely related "red-green" perspective rooted in a direct examination of Marx's value theory. Burkett and Foster proceeded to write a number of articles together on Marx's ecological conceptions, reflecting their shared perspective.
More recently, Jason W. Moore, inspired by Burkett's value-analytical approach to Marx's ecology and arguing that Foster's work did not in itself go far enough, has sought to integrate the notion of metabolic rift with world systems theory, incorporating Marxian value-related conceptions. For Moore, the modern world-system is a capitalist world-ecology, joining the accumulation of capital, the pursuit of power, and the production of nature in dialectical unity. Central to Moore's perspective is a philosophical re-reading of Marx's value theory, through which abstract social labor and abstract social nature are dialectically bound. Moore argues that the emergent law of value, from the sixteenth century, was evident in the extraordinary shift in the scale, scope, and speed of environmental change. What took premodern civilizations centuries to achieve—such as the deforestation of Europe in the medieval era—capitalism realized in mere decades. This world-historical rupture, argues Moore, can be explained through a law of value that regards labor productivity as the decisive metric of wealth and power in the modern world. From this standpoint, the genius of capitalist development has been to appropriate uncommodified natures—including uncommodified human natures—as a means of advancing labor productivity in the commodity system.
Societal-environment dialectic
In 1975, the highly influential work of Allan Schnaiberg transfigured environmental sociology, proposing a societal-environmental dialectic, though within the 'neo-Marxist' framework of the relative autonomy of the state as well. This conflictual concept has overwhelming political salience. First, the economic synthesis states that the desire for economic expansion will prevail over ecological concerns. Policy will decide to maximize immediate economic growth at the expense of environmental disruption. Secondly, the managed scarcity synthesis concludes that governments will attempt to control only the most dire of environmental problems to prevent health and economic disasters. This will give the appearance that governments act more environmentally consciously than they really do. Third, the ecological synthesis generates a hypothetical case where environmental degradation is so severe that political forces would respond with sustainable policies. The driving factor would be economic damage caused by environmental degradation. The economic engine would be based on renewable resources at this point. Production and consumption methods would adhere to sustainability regulations.
These conflict-based syntheses have several potential outcomes. One is that the most powerful economic and political forces will preserve the status quo and bolster their dominance. Historically, this is the most common occurrence. Another potential outcome is for contending powerful parties to fall into a stalemate. Lastly, tumultuous social events may result that redistribute economic and political resources.
Allan Schnaiberg's The Environment: From Surplus to Scarcity (1980) was a major contribution to this theme of a societal-environmental dialectic.
Ecological modernization and reflexive modernization
By the 1980s, a critique of eco-Marxism was in the offing, given empirical data from countries (mostly in Western Europe, such as the Netherlands, West Germany and, to some extent, the United Kingdom) that were attempting to wed environmental protection with economic growth instead of seeing them as separate. This was done through both state and capital restructuring. Major proponents of this school of research are Arthur P.J. Mol and Gert Spaargaren. Popular examples of ecological modernization would be "cradle to cradle" production cycles, industrial ecology, large-scale organic agriculture, biomimicry, permaculture, agroecology and certain strands of sustainable development—all implying that economic growth is possible if that growth is well organized with the environment in mind.
Reflexive modernization
In many volumes from the late 1980s onward, the German sociologist Ulrich Beck argued that our risk society is potentially being transformed by the environmental social movements of the world into structural change, without rejecting the benefits of modernization and industrialization. This is leading to a form of 'reflexive modernization', with a world of reduced risk and a better modernization process in economics, politics, and scientific practices, as these become less beholden to a cycle of protecting risk from correction, which Beck calls our state's organized irresponsibility: politics creates ecodisasters, then claims responsibility in an accident, yet nothing is corrected, because doing so would challenge the very structure of the operation of the economy and the private dominance of development. Beck's idea of reflexive modernization looks forward to how the ecological and social crises of the late 20th century are leading toward transformations of the institutions of the whole political and economic system, making them more "rational" with ecology in mind.
Neo-Liberalism
Neo-liberalism includes deregulation and free-market capitalism, and aims at reducing government spending. These neo-liberal policies greatly affect environmental sociology. Since neo-liberalism entails deregulation and essentially less government involvement, it leads to the commodification and privatization of unowned, state-owned, or common property resources. Diana Liverman and Silvina Vilas note that this results in payments for environmental services; deregulation and cuts in public expenditure for environmental management; the opening up of trade and investment; and the transfer of environmental management to local or nongovernmental institutions. The privatization of these resources has impacts on society, the economy, and the environment. An example that has greatly affected society is the privatization of water.
Social construction of the environment
Additionally in the 1980s, with the rise of postmodernism in the western academy and the appreciation of discourse as a form of power, some sociologists turned to analyzing environmental claims as a form of social construction more than a 'material' requirement. Proponents of this school include John A. Hannigan, particularly in Environmental Sociology: A Social Constructionist Perspective (1995). Hannigan argues for a 'soft constructionism' (environmental problems are materially real though they require social construction to be noticed) over a 'hard constructionism' (the claim that environmental problems are entirely social constructs).
Although there was sometimes acrimonious debate between the constructivist and realist "camps" within environmental sociology in the 1990s, the two sides have found considerable common ground as both increasingly accept that while most environmental problems have a material reality they nonetheless become known only via human processes such as scientific knowledge, activists' efforts, and media attention. In other words, most environmental problems have a real ontological status despite our knowledge/awareness of them stemming from social processes, processes by which various conditions are constructed as problems by scientists, activists, media and other social actors. Correspondingly, environmental problems must all be understood via social processes, despite any material basis they may have external to humans. This interactiveness is now broadly accepted, but many aspects of the debate continue in contemporary research in the field.
Events
Modern environmentalism
United States
The 1960s built strong cultural momentum for environmental causes, giving birth to the modern environmental movement and prompting broad interest among sociologists in analyzing the movement. Widespread green consciousness moved vertically within society, resulting in a series of policy changes across many states in the U.S. and Europe in the 1970s. In the United States, this period was known as the "Environmental Decade", with the creation of the United States Environmental Protection Agency and the passing of the Endangered Species Act, the Clean Water Act, and amendments to the Clean Air Act. Earth Day of 1970, celebrated by millions of participants, represented the modern age of environmental thought. The environmental movement continued with incidents such as Love Canal.
Historical studies
While the current mode of thought expressed in environmental sociology was not prevalent until the 1970s, its application is now used in the analysis of ancient peoples. Societies including Easter Island, the Anasazi, and the Mayans were argued to have ended abruptly, largely due to poor environmental management. This has been challenged as the exclusive cause in later work, however (by the biologically trained Jared Diamond in Collapse (2005), and by more recent work on Easter Island). The collapse of the Mayans sent a historic message that even advanced cultures are vulnerable to ecological suicide, though Diamond now argues it was less a suicide than a climate change that outran the society's ability to adapt, along with a lack of elite willingness to adapt even when faced with much earlier signs of nearing ecological problems. At the same time, societal successes for Diamond included New Guinea and Tikopia island, whose inhabitants have lived sustainably for 46,000 years.
John Dryzek et al. argue in Green States and Social Movements: Environmentalism in the United States, United Kingdom, Germany, and Norway (2003) that there may be a common global green environmental social movement, though its specific outcomes are nationalist, falling into four 'ideal types' of interaction between environmental movements and state power. They use as their case studies environmental social movements and state interaction from Norway, the United Kingdom, the United States, and Germany. They analyze the past 30 years of environmentalism and the different outcomes that the green movement has taken in different state contexts and cultures.
More recently, sociologists have produced much longer-term comparative historical studies of environmental degradation, discussed roughly in temporal order below. There are two general trends: many employ world-systems theory, analyzing environmental issues over long periods of time and space; others employ comparative historical methods. Some utilize both methods simultaneously, sometimes without reference to world-systems theory (like Whitaker, see below).
Stephen G. Bunker (d. 2005) and Paul S. Ciccantell collaborated on two books from a world-systems theory view, following commodity chains through history of the modern world system, charting the changing importance of space, time, and scale of extraction and how these variables influenced the shape and location of the main nodes of the world economy over the past 500 years. Their view of the world was grounded in extraction economies and the politics of different states that seek to dominate the world's resources and each other through gaining hegemonic control of major resources or restructuring global flows in them to benefit their locations.
The three-volume work of environmental world-systems theory by Sing C. Chew analyzed how "Nature and Culture" interact over long periods of time, starting with World Ecological Degradation (2001). In later books, Chew argued that there were three "Dark Ages" in world environmental history, characterized by periods of state collapse and reorientation in the world economy associated with more localist frameworks of community, economy, and identity coming to dominate the nature/culture relationships after state-facilitated environmental destruction delegitimized other forms. Thus recreated communities were founded in these so-called 'Dark Ages', novel religions were popularized, and, perhaps most importantly to him, the environment had several centuries to recover from previous destruction. Chew argues that modern green politics and bioregionalism are the start of a similar movement of the present day, potentially leading to wholesale system transformation. Therefore, we may be on the edge of yet another global "dark age", one that is bright instead of dark on many levels, since he argues for human community returning with environmental healing as empires collapse.
More case-oriented studies were conducted by historical environmental sociologist Mark D. Whitaker, who analyzed China, Japan, and Europe over 2,500 years in his book Ecological Revolution (2009). He argued that instead of environmental movements being "New Social Movements" peculiar to current societies, environmental movements are very old, having been expressed via religious movements in the past (or in the present, as in ecotheology) that begin to focus on material concerns of health, local ecology, and economic protest against state policy and its extractions. He argues past and present are very similar: we have participated in a tragic common civilizational process of environmental degradation, economic consolidation, and lack of political representation for many millennia, with predictable outcomes. He argues that a form of bioregionalism, the bioregional state, is required to deal with political corruption in present or past societies connected to environmental degradation.
After looking at the world history of environmental degradation from very different methods, both sociologists Sing Chew and Mark D. Whitaker came to similar conclusions and are proponents of (different forms of) bioregionalism.
Related journals
Among the key journals in this field are:
Environmental Sociology
Human Ecology
Human Ecology Review
Nature and Culture
Organization & Environment
Population and Environment
Rural Sociology
Society and Natural Resources
See also
Bibliography of sociology
Ecological anthropology
Ecological design
Ecological economics
Ecological modernization theory
Enactivism
Environmental design
Environmental design and planning
Environmental economics
Environmental policy
Environmental racism
Environmental racism in Europe
Environmental social science
Ethnoecology
Political ecology
Sociology of architecture
Sociology of disaster
Climate change
References
Notes
Dunlap, Riley E., Frederick H. Buttel, Peter Dickens, and August Gijswijt (eds.) 2002. Sociological Theory and the Environment: Classical Foundations, Contemporary Insights (Rowman & Littlefield).
Dunlap, Riley E., and William Michelson (eds.) 2002. Handbook of Environmental Sociology (Greenwood Press).
Freudenburg, William R., and Robert Gramling. 1989. "The Emergence of Environmental Sociology: Contributions of Riley E. Dunlap and William R. Catton, Jr.", Sociological Inquiry 59(4): 439–452
Harper, Charles. 2004. Environment and Society: Human Perspectives on Environmental Issues. Upper Saddle River, New Jersey: Pearson Education, Inc.
Humphrey, Craig R., and Frederick H. Buttel. 1982. Environment, Energy, and Society. Belmont, California: Wadsworth Publishing Company.
Humphrey, Craig R., Tammy L. Lewis and Frederick H. Buttel. 2002. Environment, Energy and Society: A New Synthesis. Belmont, California: Wadsworth/Thompson Learning.
Mehta, Michael, and Eric Ouellet. 1995. Environmental Sociology: Theory and Practice, Toronto: Captus Press.
Redclift, Michael, and Graham Woodgate, eds. 1997. International Handbook of Environmental Sociology (Edward Elgar).
Schnaiberg, Allan. 1980. The Environment: From Surplus to Scarcity. New York: Oxford University Press.
Further reading
Hannigan, John, "Environmental Sociology", Routledge, 2014.
Zehner, Ozzie, Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism, University of Nebraska Press, 2012. An environmental sociology text forming a critique of energy production and green consumerism.
External links
ASA Section on Environment and Technology
ESA Environment & Society Research Network
ISA Research Committee on Environment and Society (RC24)
Canadian Sociological Association (CSA) Environment Research Cluster
Earth system science

Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
Definition
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variabilities across space and time are beyond human experience, because of the stability of the recent Holocene. Much Earth System science therefore relies on studies of the Earth's past behaviour and models to anticipate future behaviour in response to pressures.
Life: Biological processes play a much stronger role in the functioning and responses of the Earth System than previously thought. It appears to be integral to every part of the Earth System.
Connectivity: Processes are connected in ways and across depths and lateral distances that were previously unknown and inconceivable.
Non-linear: The behaviour of the Earth System is typified by strong non-linearities. This means that abrupt change can result when relatively small changes in a 'forcing function' push the System across a 'threshold', as the toy model sketched below illustrates.
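To make the threshold idea concrete, the following deliberately generic one-variable toy model (not any specific Earth system model) shows how a smooth ramp in a forcing function can produce an abrupt jump in the system state once a critical value is crossed. All names and parameter values are illustrative assumptions.

```python
# Toy non-linear system: the state x relaxes on a double-well landscape,
# dx/dt = x - x**3 + F. Slowly ramping the forcing F barely moves x until
# F crosses a critical value (about 0.39), where the lower stable state
# vanishes and x jumps abruptly to the upper one.

def step(x, forcing, dt=0.01):
    """One Euler integration step of dx/dt = x - x**3 + forcing."""
    return x + dt * (x - x**3 + forcing)

x = -1.0  # start in the lower stable state
for i in range(61):
    forcing = i * 0.01           # ramp the forcing from 0.00 to 0.60
    for _ in range(5000):        # let the system settle at each level
        x = step(x, forcing)
    if i % 10 == 0:
        print(f"F = {forcing:.2f}  ->  x = {x:+.3f}")

# Output: x creeps from -1.0 to about -0.8 while F stays below the
# threshold, then flips to roughly +1.2 once F passes ~0.39.
```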
History
For millennia, humans have speculated how the physical and living elements on the surface of the Earth combine, with gods and goddesses frequently posited to embody specific elements. The notion that the Earth, itself, is alive was a regular theme of Greek philosophy and religion.
Early scientific interpretations of the Earth system began in the field of geology, initially in the Middle East and China, and largely focused on aspects such as the age of the Earth and the large-scale processes involved in mountain and ocean formation. As geology developed as a science, understanding of the interplay of different facets of the Earth system increased, leading to the inclusion of factors such as the Earth's interior, planetary geology, living systems and Earth-like worlds.
In many respects, the foundational concepts of Earth System science can be seen in the natural philosophy of the 19th-century geographer Alexander von Humboldt. In the 20th century, Vladimir Vernadsky (1863–1945) saw the functioning of the biosphere as a geological force generating a dynamic disequilibrium, which in turn promoted the diversity of life.
In parallel, the field of systems science was developing across numerous other scientific fields, driven in part by the increasing availability and power of computers, and leading to the development of climate models that began to allow the detailed and interacting simulations of the Earth's weather and climate. Subsequent extension of these models has led to the development of "Earth system models" (ESMs) that include facets such as the cryosphere and the biosphere.
The field gained institutional form in the 1980s, when NASA established its Earth System Science Committee (ESSC) in 1983. The committee's earliest reports, Earth System Science: Overview (1986) and the book-length Earth System Science: A Closer View (1988), constitute a major landmark in the formal development of Earth system science. Early works discussing Earth system science, like these NASA reports, generally emphasized the increasing human impacts on the Earth system as a primary driver for the need for greater integration among the life and geo-sciences, making the origins of Earth system science parallel to the beginnings of global change studies and programs.
Climate science
Climatology and climate change have been central to Earth System science since its inception, as evidenced by the prominent place given to climate change in the early NASA reports discussed above. The Earth's climate system is a prime example of an emergent property of the whole planetary system, that is, one which cannot be fully understood without regarding it as a single integrated entity. It is also a system where human impacts have been growing rapidly in recent decades, lending immense importance to the successful development and advancement of Earth System science research. As just one example of the centrality of climatology to the field, leading American climatologist Michael E. Mann is the Director of one of the earliest centers for Earth System science research, the Earth System Science Center at Pennsylvania State University, and its mission statement reads, "the Earth System Science Center (ESSC) maintains a mission to describe, model, and understand the Earth's climate system".
Education
Earth System science can be studied at a postgraduate level at some universities. In general education, the American Geophysical Union, in cooperation with the Keck Geology Consortium and with support from five divisions within the National Science Foundation, convened a workshop in 1996, "to define common educational goals among all disciplines in the Earth sciences". In its report, participants noted that, "The fields that make up the Earth and space sciences are currently undergoing a major advancement that promotes understanding the Earth as a number of interrelated systems". Recognizing the rise of this systems approach, the workshop report recommended that an Earth System science curriculum be developed with support from the National Science Foundation.
In 2000, the Earth System Science Education Alliance (ESSEA) was begun; it currently includes the participation of more than 40 institutions, with over 3,000 teachers having completed an ESSEA course as of fall 2009.
Related concepts
The concept of earth system law (still in its infancy as of 2021) is a sub-discipline of earth system governance, itself a subfield of the earth system sciences analyzed from a social sciences perspective.
See also
References
External links
Earth system science at Nature.com
Natural environment | The natural environment or natural world encompasses all biotic and abiotic things occurring naturally, meaning in this case not artificial. The term is most often applied to Earth or some parts of Earth. This environment encompasses the interaction of all living species, climate, weather and natural resources that affect human survival and economic activity.
The concept of the natural environment can be distinguished as components:
Complete ecological units that function as natural systems without massive civilized human intervention, including all vegetation, microorganisms, soil, rocks, plateaus, mountains, the atmosphere and natural phenomena that occur within their boundaries and their nature.
Universal natural resources and physical phenomena that lack clear-cut boundaries, such as air, water and climate, as well as energy, radiation, electric charge and magnetism, not originating from civilized human actions.
In contrast to the natural environment is the built environment. Built environments are where humans have fundamentally transformed landscapes, such as urban settings and agricultural land conversion; in them, the natural environment is greatly changed into a simplified human environment. Even in acts which seem less extreme, such as building a mud hut or a photovoltaic system in the desert, the modified environment becomes an artificial one. Though many animals build things to provide a better environment for themselves, they are not human; hence beaver dams and the works of mound-building termites are thought of as natural.
People cannot find absolutely natural environments on Earth; naturalness usually varies in a continuum, from 100% natural at one extreme to 0% natural at the other. The massive environmental changes of humanity in the Anthropocene have fundamentally affected all natural environments, including through climate change, biodiversity loss and pollution from plastic and other chemicals in the air and water. More precisely, we can consider the different aspects or components of an environment, and see that their degree of naturalness is not uniform. In an agricultural field, for instance, the mineralogic composition of the soil may be similar to that of an undisturbed forest soil, while its structure is quite different.
Composition
Earth science generally recognizes four spheres, the lithosphere, the hydrosphere, the atmosphere and the biosphere, corresponding to rocks, water, air and life respectively. Some scientists also include, among the spheres of the Earth, the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere, as well as the pedosphere (corresponding to soil) as an active and intermixed sphere. Earth science (also known as geoscience, the geographical sciences or the Earth sciences) is an all-embracing term for the sciences related to the planet Earth. There are four major disciplines in the Earth sciences, namely geography, geology, geophysics and geodesy. These major disciplines use physics, chemistry, biology, chronology and mathematics to build a qualitative and quantitative understanding of the principal areas or spheres of Earth.
Geological activity
The Earth's crust, or lithosphere, is the outermost solid surface of the planet and is chemically, physically and mechanically different from the underlying mantle. It has been generated largely by igneous processes in which magma cools and solidifies to form solid rock. Beneath the lithosphere lies the mantle, which is heated by the decay of radioactive elements. The mantle, though solid, is in a state of rheic convection. This convection process causes the lithospheric plates to move, albeit slowly; the resulting process is known as plate tectonics. Volcanoes result primarily from the melting of subducted crust material or from rising mantle at mid-ocean ridges and mantle plumes.
Water on Earth
Most water is found in various kinds of natural bodies of water.
Oceans
An ocean is a major body of saline water and a component of the hydrosphere. Approximately 71% of the surface of the Earth (an area of some 362 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 meters (9,800 ft) deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several separate oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. The deep seabeds are more than half the Earth's surface, and are among the least-modified natural environments. The major oceanic divisions are defined in part by the continents, various archipelagos and other criteria; in descending order of size, they are the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean and the Arctic Ocean.
Rivers
A river is a natural watercourse, usually freshwater, flowing toward an ocean, a lake, a sea or another river. A few rivers simply flow into the ground and dry up completely without reaching another body of water.
The water in a river is usually in a channel, made up of a stream bed between banks. In larger rivers there is often also a wider floodplain shaped by waters over-topping the channel. Flood plains may be very wide in relation to the size of the river channel. Rivers are a part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs and the release of water stored in glaciers and snowpacks.
Small rivers may also be called by several other names, including stream, creek and brook. Their current is confined within a bed and stream banks. Streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology.
Lakes
A lake (from Latin lacus) is a terrain feature, a body of water that is localized to the bottom of a basin. A body of water is considered a lake when it is inland, is not part of an ocean and is larger and deeper than a pond.
Natural lakes on Earth are generally found in mountainous areas, rift zones and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.
Ponds
A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by their current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools.
Human impact on water
Humans impact water in different ways, such as modifying rivers (through dams and stream channelization), urbanization and deforestation. These activities affect lake levels, groundwater conditions, water pollution, thermal pollution and marine pollution. Humans modify rivers through direct channel manipulation: building dams and reservoirs and altering the direction of rivers and their paths. Dams can usefully create reservoirs and hydroelectric power; however, reservoirs and dams may negatively impact the environment and wildlife, since dams stop fish migration and the movement of organisms downstream. Urbanization affects the environment through deforestation and by changing lake levels, groundwater conditions and so on. Deforestation and urbanization go hand in hand. Deforestation may cause flooding, declining stream flow and changes in riverside vegetation. The vegetation changes because trees that cannot get adequate water start to deteriorate, leading to a decreased food supply for the wildlife in the area.
Atmosphere, climate and weather
The atmosphere of the Earth serves as a key factor in sustaining the planetary ecosystem. The thin layer of gases that envelops the Earth is held in place by the planet's gravity. Dry air consists of 78% nitrogen, 21% oxygen, 1% argon and small amounts of carbon dioxide and other gases. The remaining gases are often referred to as trace gases. The atmosphere includes greenhouse gases such as carbon dioxide, methane, nitrous oxide and ozone. Filtered air includes trace amounts of many other chemical compounds. Air also contains a variable amount of water vapor and suspensions of water droplets and ice crystals seen as clouds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash and meteoroids. Various industrial pollutants also may be present, such as chlorine (elementary or in compounds), fluorine compounds, elemental mercury, and sulphur compounds such as sulphur dioxide (SO2).
The ozone layer of the Earth's atmosphere plays an important role in reducing the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.
Layers of the atmosphere
Principal layers
Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From highest to lowest, these layers are:
Exosphere: The outermost layer of Earth's atmosphere extends from the exobase upward, mainly composed of hydrogen and helium.
Thermosphere: The top of the thermosphere is the bottom of the exosphere, called the exobase; its height varies with solar activity and ranges from roughly 500 to 1,000 km (1,600,000 to 3,300,000 ft). The thermosphere itself extends upward from the mesopause, at approximately 80 km (260,000 ft), making it Earth's second highest atmospheric layer. The International Space Station orbits in this layer, at an altitude of about 400 km (250 mi).
Mesosphere: The mesosphere extends from the stratopause to the mesopause, at about 80 to 85 km (260,000 to 280,000 ft). It is the layer where most meteors burn up upon entering the atmosphere.
Stratosphere: The stratosphere extends from the tropopause to the stratopause, the boundary between the stratosphere and mesosphere, which typically lies at about 50 to 55 km (31 to 34 mi).
Troposphere: The troposphere begins at the surface and extends to between about 9 km (30,000 ft) at the poles and 17 km (56,000 ft) at the equator, with some variation due to weather. The troposphere is mostly heated by transfer of energy from the surface, so on average the lowest part of the troposphere is warmest and temperature decreases with altitude. The tropopause is the boundary between the troposphere and stratosphere.
Other layers
Within the five principal layers determined by temperature there are several layers determined by other properties.
The ozone layer is contained within the stratosphere. It is mainly located in the lower portion of the stratosphere, from about 15 to 35 km (9 to 22 mi), though the thickness varies seasonally and geographically. About 90% of the ozone in our atmosphere is contained in the stratosphere.
The ionosphere: The part of the atmosphere that is ionized by solar radiation stretches from about 50 to 1,000 km (31 to 620 mi) and typically overlaps both the exosphere and the thermosphere. It forms the inner edge of the magnetosphere.
The homosphere and heterosphere: The homosphere includes the troposphere, stratosphere and mesosphere, within which the atmosphere is well mixed. Above it, the heterosphere is stratified by molecular mass; its upper part is composed almost completely of hydrogen, the lightest element.
The planetary boundary layer is the part of the troposphere that is nearest the Earth's surface and is directly affected by it, mainly through turbulent diffusion.
Effects of global warming
The dangers of global warming are being increasingly studied by a wide global consortium of scientists, who are increasingly concerned about the potential long-term effects of global warming on our natural environment and on the planet. Of particular concern is how climate change and global warming caused by anthropogenic, or human-made, releases of greenhouse gases, most notably carbon dioxide, can act interactively and have adverse effects upon the planet, its natural environment and humans' existence. It is clear the planet is warming, and warming rapidly. This is due to the greenhouse effect: greenhouse gases trap heat inside the Earth's atmosphere because their more complex molecular structure allows them to absorb outgoing radiation and re-emit it back towards the Earth. This warming is also responsible for the loss of natural habitats, which in turn leads to reductions in wildlife populations. The most recent report from the Intergovernmental Panel on Climate Change (the group of the leading climate scientists in the world) concluded that the Earth will warm anywhere from 2.7 to almost 11 degrees Fahrenheit (1.5 to 6 degrees Celsius) between 1990 and 2100.
Efforts have been increasingly focused on the mitigation of greenhouse gases that are causing climatic changes and on developing adaptive strategies to global warming, to assist humans, other animal and plant species, ecosystems, regions and nations in adjusting to the effects of global warming. Some examples of recent collaboration to address climate change and global warming include:
The United Nations Framework Convention on Climate Change, a treaty aiming to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.
The Kyoto Protocol, which is the protocol to the international Framework Convention on Climate Change treaty, again with the objective of reducing greenhouse gases in an effort to prevent anthropogenic climate change.
The Western Climate Initiative, to identify, evaluate, and implement collective and cooperative ways to reduce greenhouse gases in the region, focusing on a market-based cap-and-trade system.
A profound challenge is to identify natural environmental dynamics, in contrast to environmental changes that fall outside natural variance. A common simplification is to adopt a static view that neglects natural variance altogether. Methodologically, this view can be defended when looking at slowly changing processes and short time series, but it becomes a problem when fast processes are essential to the object of study.
Climate
Climate looks at the statistics of temperature, humidity, atmospheric pressure, wind, rainfall, atmospheric particle count and other meteorological elements in a given region over long periods of time. Weather, on the other hand, is the present condition of these same elements over periods up to two weeks.
Climates can be classified according to the average and typical ranges of different variables, most commonly temperature and precipitation. The most commonly used classification scheme is the one originally developed by Wladimir Köppen. The Thornthwaite system, in use since 1948, uses evapotranspiration as well as temperature and precipitation information to study animal species diversity and the potential impacts of climate changes.
Weather
Weather is a set of all the phenomena occurring in a given atmospheric area at a given time. Most weather phenomena occur in the troposphere, just below the stratosphere. Weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for the average atmospheric conditions over longer periods of time. When used without qualification, "weather" is understood to be the weather of Earth.
Weather occurs due to density (temperature and moisture) differences between one place and another. These differences can occur due to the sun angle at any particular spot, which varies by latitude from the tropics. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. On the Earth's surface, temperatures usually range ±40 °C (100 °F to −40 °F) annually. Over thousands of years, changes in the Earth's orbit have affected the amount and distribution of solar energy received by the Earth and influenced long-term climate.
Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes due to differences in compressional heating. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. The atmosphere is a chaotic system, and small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout human history, and there is evidence that civilized human activity such as agriculture and industry has inadvertently modified weather patterns.
Life
Evidence suggests that life on Earth has existed for about 3.7 billion years. All known life forms share fundamental molecular mechanisms, and based on these observations, theories on the origin of life attempt to find a mechanism explaining the formation of a primordial single cell organism from which all life originates. There are many different hypotheses regarding the path that might have been taken from simple organic molecules via pre-cellular life to protocells and metabolism.
Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli and reproduction. Life may also be said to be simply the characteristic state of organisms. In biology, the science of living organisms, "life" is the condition which distinguishes active organisms from inorganic matter, including the capacity for growth, functional activity and the continual change preceding death.
A diverse variety of living organisms (life forms) can be found in the biosphere on Earth, and properties common to these organisms—plants, animals, fungi, protists, archaea, and bacteria—are a carbon- and water-based cellular form with complex organization and heritable genetic information. Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations. More complex living organisms can communicate through various means.
Ecosystems
An ecosystem (also called an environment) is a natural unit consisting of all plants, animals, and micro-organisms (biotic factors) in an area functioning together with all of the non-living physical (abiotic) factors of the environment.
Central to the ecosystem concept is the idea that living organisms are continually engaged in a highly interrelated set of relationships with every other element constituting the environment in which they exist. Eugene Odum, one of the founders of the science of ecology, stated: "Any unit that includes all of the organisms (i.e.: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem."
The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy, and the emergent premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.
A greater number or variety of species, or biological diversity, in an ecosystem may contribute to greater resilience, because more species are present at a location to respond to change and thus "absorb" or reduce its effects. This reduces the effect before the ecosystem's structure changes to a different state. This is not universally the case, however, and there is no proven relationship between the species diversity of an ecosystem and its ability to provide goods and services on a sustainable level.
The term ecosystem can also pertain to human-made environments, such as human ecosystems and human-influenced ecosystems. It can describe any situation where there is a relationship between living organisms and their environment. Few areas on the surface of the Earth today exist free from human contact, although some genuine wilderness areas continue to exist without any forms of human intervention.
Biogeochemical cycles
Global biogeochemical cycles are critical to life, most notably those of water, oxygen, carbon, nitrogen and phosphorus.
The nitrogen cycle is the transformation of nitrogen and nitrogen-containing compounds in nature. It is a cycle which includes gaseous components.
The water cycle, is the continuous movement of water on, above, and below the surface of the Earth. Water can change states among liquid, vapour, and ice at various places in the water cycle. Although the balance of water on Earth remains fairly constant over time, individual water molecules can come and go.
The carbon cycle is the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of the Earth.
The oxygen cycle is the movement of oxygen within and between its three main reservoirs: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for the modern Earth's atmospheric composition and life.
The phosphorus cycle is the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. The atmosphere does not play a significant role in the movements of phosphorus, because phosphorus and phosphorus compounds are usually solids at the typical ranges of temperature and pressure found on Earth.
Wilderness
Wilderness is generally defined as a natural environment on Earth that has not been significantly modified by human activity. The WILD Foundation goes into more detail, defining wilderness as: "The most intact, undisturbed wild natural areas left on our planet – those last truly wild places that humans do not control and have not developed with roads, pipelines or other industrial infrastructure." Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, solitude, and recreation. Wilderness is deeply valued for cultural, spiritual, moral, and aesthetic reasons. Some nature writers believe wilderness areas are vital for the human spirit and creativity.
The word, "wilderness", derives from the notion of wildness; in other words that which is not controllable by humans. The word etymology is from the Old English wildeornes, which in turn derives from wildeor meaning wild beast (wild + deor = beast, deer). From this point of view, it is the wildness of a place that makes it a wilderness. The mere presence or activity of people does not disqualify an area from being "wilderness". Many ecosystems that are, or have been, inhabited or influenced by activities of people may still be considered "wild". This way of looking at wilderness includes areas within which natural processes operate without very noticeable human interference.
Wildlife includes all non-domesticated plants, animals and other organisms. Domesticating wild plant and animal species for human benefit has occurred many times all over the planet, and has a major impact on the environment, both positive and negative. Wildlife can be found in all ecosystems. Deserts, rain forests, plains, and other areas—including the most developed urban sites—all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by civilized human factors, most scientists agree that wildlife around the world is (now) impacted by human activities.
Challenges
It is the common understanding of natural environment that underlies environmentalism — a broad political, social and philosophical movement that advocates various actions and policies in the interest of protecting what nature remains in the natural environment, or restoring or expanding the role of nature in this environment. While true wilderness is increasingly rare, wild nature (e.g., unmanaged forests, uncultivated grasslands, wildlife, wildflowers) can be found in many locations previously inhabited by humans.
Goals for the benefit of people and natural systems, commonly expressed by environmental scientists and environmentalists include:
Elimination of pollution and toxicants in air, water, soil, buildings, manufactured goods, and food.
Preservation of biodiversity and protection of endangered species.
Conservation and sustainable use of resources such as water, land, air, energy, raw materials, and natural resources.
Halting human-induced global warming, which represents pollution, a threat to biodiversity, and a threat to human populations.
Shifting from fossil fuels to renewable energy in electricity, heating and cooling, and transportation, which addresses pollution, global warming, and sustainability. This may include public transportation and distributed generation, which have benefits for traffic congestion and electric reliability.
Shifting from meat-intensive diets to largely plant-based diets in order to help mitigate biodiversity loss and climate change.
Establishment of nature reserves for recreational purposes and ecosystem preservation.
Sustainable and less polluting waste management including waste reduction (or even zero waste), reuse, recycling, composting, waste-to-energy, and anaerobic digestion of sewage sludge.
Reducing profligate consumption and clamping down on illegal fishing and logging.
Slowing and stabilisation of human population growth.
Reducing the import of second hand electronic appliances from developed countries to developing countries.
Criticism
In some cultures, the term environment is meaningless because there is no separation between people and what they view as the natural world, or their surroundings. Many native cultures, notably in the United States and Arab countries, do not recognize the "environment" or see themselves as environmentalists.
See also
Biophilic design
Citizen's dividend
Conservation movement
Environmental history of the United States
Gaia hypothesis
Geological engineering
Greening
Index of environmental articles
List of conservation topics
List of environmental books
List of environmental issues
List of environmental websites
Natural capital
Natural history
Natural landscape
Nature-based solutions
Sustainability
Sustainable agriculture
Timeline of environmental history
References
Further reading
Allaby, Michael, and Chris Park, eds. A dictionary of environment and conservation (Oxford University Press, 2013), with a British emphasis.
External links
UNEP - United Nations Environment Programme
BBC - Science and Nature.
Science.gov – Environment & Environmental Quality
Signal | A signal is both the process and the result of transmitting data over some medium, accomplished by embedding some variation in a physical quantity. Signals are important in multiple subject fields, including signal processing, information theory and biology.
In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as observable change in a quantity over space or time (a time series), even if it does not carry information.
In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse.
Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving transmission, storage, and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines: electronic engineering and computer engineering developed to specialize in the design and analysis of systems that manipulate physical signals, while design engineering developed to address the functional design of signals in user–machine interfaces.
Definitions
Definitions specific to sub-fields are common:
In electronics and telecommunications, signal refers to any time-varying voltage, current, or electromagnetic wave that carries information.
In signal processing, signals are analog and digital representations of analog physical quantities.
In information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message.
In a communication system, a transmitter encodes a message to create a signal, which is carried to a receiver by the communication channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone. The telephone transmitter converts the sounds into an electrical signal. The signal is transmitted to the receiving telephone by wires; at the receiver it is reconverted into sounds.
In telephone networks, signaling, for example common-channel signaling, refers to phone number and other digital control information rather than the actual voice signal.
Classification
Signals can be categorized in various ways. The most common distinction is between discrete and continuous spaces that the functions are defined over, for example, discrete and continuous-time domains. Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals.
A second important distinction is between discrete-valued and continuous-valued. Particularly in digital signal processing, a digital signal may be defined as a sequence of discrete values, typically associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream.
Signals may also be categorized by their spatial distributions as either point source signals (PSSs) or distributed source signals (DSSs).
In signals and systems, signals can be classified according to many criteria, mainly: according to the nature of their values, into analog signals and digital signals; according to their determinacy, into deterministic signals and random signals; and according to their strength, into energy signals and power signals.
Analog and digital signals
Two main types of signals encountered in practice are analog and digital. A digital signal results from approximating an analog signal by its values at particular time instants; digital signals are quantized, while analog signals are continuous.
Analog signal
An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the sound pressure. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values.
The term analog signal usually refers to electrical signals; however, analog signals may use other mediums such as mechanical, pneumatic or hydraulic. An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information.
Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone which induces corresponding electrical fluctuations. The voltage or the current is said to be an analog of the sound.
Digital signal
A digital signal is a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values. A logic signal is a digital signal with only two possible values, and describes an arbitrary bit stream. Other types of digital signals can represent three-valued logic or higher valued logics.
Alternatively, a digital signal may be considered to be the sequence of codes represented by such a physical quantity. The physical quantity may be a variable electric current or voltage, the intensity, phase or polarization of an optical or other electromagnetic field, acoustic pressure, the magnetization of a magnetic storage media, etc. Digital signals are present in all digital electronics, notably computing equipment and data transmission.
With digital signals, system noise, provided it is not too great, will not affect system operation whereas noise always degrades the operation of analog signals to some degree.
Digital signals often arise via sampling of analog signals, for example, a continually fluctuating voltage on a line that can be digitized by an analog-to-digital converter circuit, wherein the circuit will read the voltage level on the line, say, every 50 microseconds and represent each reading with a fixed number of bits. The resulting stream of numbers is stored as digital data, a discrete-time and quantized-amplitude signal. Computers and other digital devices are restricted to discrete time.
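As a rough illustration of that pipeline, the following is a minimal sketch of sampling and quantization. The 50-microsecond read interval comes from the text; the 440 Hz test tone, the ±1 V input range and the 8-bit depth are assumptions made for the example, not a description of any particular converter.

```python
import numpy as np

SAMPLE_PERIOD = 50e-6        # read the line every 50 microseconds (20 kHz)
N_BITS = 8                   # fixed number of bits per reading (assumed)
N_LEVELS = 2 ** N_BITS       # 256 representable amplitude values
V_MIN, V_MAX = -1.0, 1.0     # assumed voltage range of the converter

def line_voltage(t):
    """Stand-in for the continually fluctuating voltage on the line."""
    return 0.8 * np.sin(2 * np.pi * 440.0 * t)   # a 440 Hz tone (assumed)

# Sampling: evaluate the continuous signal at discrete instants.
t = np.arange(0.0, 0.01, SAMPLE_PERIOD)
readings = line_voltage(t)

# Quantization: map each reading to the nearest of N_LEVELS integer codes.
codes = np.round((readings - V_MIN) / (V_MAX - V_MIN) * (N_LEVELS - 1))
codes = codes.astype(np.uint8)

print(codes[:10])  # the resulting stream of numbers (the digital data)
```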
Energy and power
According to the strength of signals, practical signals can be classified into two categories, energy signals and power signals (the standard formal definitions are sketched after this list):
Energy signals: signals whose total energy is a finite positive value, but whose average power is 0;
Power signals: signals whose average power is a finite positive value, but whose total energy is infinite.
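For a continuous-time signal x(t), the usual textbook definitions behind this split are (supplied here because the source's formulas did not survive extraction):

```latex
E = \int_{-\infty}^{\infty} \lvert x(t) \rvert^{2} \, dt ,
\qquad
P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \lvert x(t) \rvert^{2} \, dt
```

An energy signal has 0 < E < ∞, which forces P = 0; a power signal has 0 < P < ∞, which forces E to be infinite (a periodic sine wave is the classic example of the latter).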
Deterministic and random
Deterministic signals are those whose values at any time are predictable and can be calculated by a mathematical equation.
Random signals are signals that take on random values at any given time instant and must be modeled stochastically.
Even and odd
An even signal satisfies the condition x(t) = x(-t), or equivalently, the following holds for all t in the domain of x:
x(t) - x(-t) = 0.
An odd signal satisfies the condition x(t) = -x(-t), or equivalently, the following holds for all t in the domain of x:
x(t) + x(-t) = 0.
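A standard companion identity, not stated in the source but worth noting: every signal splits uniquely into an even part and an odd part,

```latex
x_{e}(t) = \frac{x(t) + x(-t)}{2},
\qquad
x_{o}(t) = \frac{x(t) - x(-t)}{2},
\qquad
x(t) = x_{e}(t) + x_{o}(t)
```

where x_e satisfies the even condition above and x_o the odd condition, by construction.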
Periodic
A signal is said to be periodic if it satisfies the condition:
x(t) = x(t + T)
or
x(n) = x(n + N)
Where:
T = fundamental time period,
f = 1/T = fundamental frequency.
The same definitions apply to discrete-time signals x(n), with N the fundamental period. A periodic signal will repeat for every period.
Time discretization
Signals can be classified as continuous or discrete time. In the mathematical abstraction, the domain of a continuous-time signal is the set of real numbers (or some interval thereof), whereas the domain of a discrete-time (DT) signal is the set of integers (or other subsets of real numbers). What these integers represent depends on the nature of the signal; most often it is time.
A continuous-time signal is any function which is defined at every time t in an interval, most commonly an infinite interval. A simple source for a discrete-time signal is the sampling of a continuous signal, approximating the signal by a sequence of its values at particular time instants.
Amplitude quantization
If a signal is to be represented as a sequence of digital data, it is impossible to maintain exact precision: each number in the sequence must have a finite number of digits. As a result, the values of such a signal must be quantized into a finite set for practical representation. Quantization is the process of converting a continuous analog signal into a digital signal taking discrete integer values.
Examples of signals
Naturally occurring signals can be converted to electronic signals by various sensors. Examples include:
Motion. The motion of an object can be considered to be a signal and can be monitored by various sensors to provide electrical signals. For example, radar can provide an electromagnetic signal for following aircraft motion. A motion signal is one-dimensional (time), and the range is generally three-dimensional. Position is thus a 3-vector signal; position and orientation of a rigid body is a 6-vector signal. Orientation signals can be generated using a gyroscope.
Sound. Since a sound is a vibration of a medium (such as air), a sound signal associates a pressure value to every value of time and possibly three space coordinates indicating the direction of travel. A sound signal is converted to an electrical signal by a microphone, generating a voltage signal as an analog of the sound signal. Sound signals can be sampled at a discrete set of time points; for example, compact discs (CDs) contain discrete signals representing sound, recorded at 44,100 Hz; since CDs are recorded in stereo, each sample contains data for a left and right channel, which may be considered to be a 2-vector signal. The CD encoding is converted to an electrical signal by reading the information with a laser, converting the sound signal to an optical signal. A worked data-rate calculation for this CD example is shown after this list.
Images. A picture or image consists of a brightness or color signal, a function of a two-dimensional location. The object's appearance is presented as emitted or reflected light, an electromagnetic signal. It can be converted to voltage or current waveforms using devices such as the charge-coupled device. A 2D image can have a continuous spatial domain, as in a traditional photograph or painting; or the image can be discretized in space, as in a digital image. Color images are typically represented as a combination of monochrome images in three primary colors.
Videos. A video signal is a sequence of images. A point in a video is identified by its two-dimensional position in the image and by the time at which it occurs, so a video signal has a three-dimensional domain. Analog video has one continuous domain dimension (across a scan line) and two discrete dimensions (frame and line).
Biological membrane potentials. The value of the signal is an electric potential (voltage). The domain is more difficult to establish. Some cells or organelles have the same membrane potential throughout; neurons generally have different potentials at different points. These signals have very low energies, but are enough to make nervous systems work; they can be measured in aggregate by electrophysiology techniques.
The output of a thermocouple, which conveys temperature information.
The output of a pH meter which conveys acidity information.
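As promised in the sound example above, here is the data-rate arithmetic for CD audio. The 44,100 Hz rate and the two stereo channels come from the text; the 16-bit sample depth is the standard CD value, assumed here for the calculation:

```python
# CD audio data rate: 44,100 samples/s per channel and 2 channels
# (both from the text), 16 bits per sample (standard CD depth, assumed).
sample_rate_hz = 44_100
channels = 2
bits_per_sample = 16

bit_rate = sample_rate_hz * channels * bits_per_sample
byte_rate = bit_rate // 8

print(bit_rate)    # 1411200 bits per second of stereo audio
print(byte_rate)   # 176400 bytes per second (~10 MB per minute)
```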
Signal processing
Signal processing is the manipulation of signals. A common example is signal transmission between different locations. The embodiment of a signal in electrical form is made by a transducer that converts the signal from its original form to a waveform expressed as a current or a voltage, or electromagnetic radiation, for example, an optical signal or radio transmission. Once expressed as an electronic signal, the signal is available for further processing by electrical devices such as electronic amplifiers and filters, and can be transmitted to a remote location by a transmitter and received using radio receivers.
Signals and systems
In electrical engineering (EE) programs, signals are covered in a class and field of study known as signals and systems. Depending on the school, undergraduate EE students generally take the class as juniors or seniors, normally depending on the number and level of previous linear algebra and differential equation classes they have taken.
The field studies input and output signals, and the mathematical representations between them known as systems, in four domains: time, frequency, s and z. Since signals and systems are both studied in these four domains, there are 8 major divisions of study. As an example, when working with continuous-time signals (t), one might transform from the time domain to a frequency or s domain; or from discrete time (n) to frequency or z domains. Systems also can be transformed between these domains like signals, with continuous to s and discrete to z.
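To make the time-to-frequency transformation concrete, here is a small sketch (the 1 kHz sampling rate and the two test tones are assumptions for the example) that moves a discrete-time signal into the frequency domain with NumPy's FFT:

```python
import numpy as np

# Build one second of a discrete-time signal (domain n): two sine
# tones at 50 Hz and 120 Hz, sampled at an assumed 1 kHz rate.
fs = 1000
n = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * n) + 0.5 * np.sin(2 * np.pi * 120 * n)

# Transform to the frequency domain with the discrete Fourier transform.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

# The two largest spectral magnitudes sit at the component frequencies.
peaks = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(peaks))  # [50.0, 120.0]
```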
Signals and systems is a subset of the field of mathematical modeling. It involves circuit analysis and design via mathematical modeling and some numerical methods, and was updated several decades ago with dynamical systems tools including differential equations, and recently, Lagrangians. Students are expected to understand the modeling tools as well as the mathematics, physics, circuit analysis, and transformations between the 8 domains.
Because mechanical engineering (ME) topics like friction, damping, etc. have very close analogies in signal science (inductance, resistance, voltage, etc.), many of the tools originally used in ME transformations (Laplace and Fourier transforms, Lagrangians, sampling theory, probability, difference equations, etc.) have now been applied to signals, circuits, systems and their components, analysis and design in EE. Dynamical systems that involve noise, filtering and other random or chaotic attractors and repellers have now placed stochastic sciences and statistics between the more deterministic discrete and continuous functions in the field. (Deterministic as used here means signals that are completely determined as functions of time).
EE taxonomists are still undecided on where signals and systems falls within the whole field of signal processing versus circuit analysis and mathematical modeling, but the common link among the topics covered in the course of study has sharpened the boundaries of the subject, with dozens of books, journals, etc. called "Signals and Systems" used as texts and test preparation for the EE and, more recently, computer engineering exams.
See also
Current loop – a signaling system in widespread use for process control
Signal-to-noise ratio
Notes
References
Further reading
Adaptive radiation | In evolutionary biology, adaptive radiation is a process in which organisms diversify rapidly from an ancestral species into a multitude of new forms, particularly when a change in the environment makes new resources available, alters biotic interactions or opens new environmental niches. Starting with a single ancestor, this process results in the speciation and phenotypic adaptation of an array of species exhibiting different morphological and physiological traits. The prototypical example of adaptive radiation is finch speciation on the Galapagos ("Darwin's finches"), but examples are known from around the world.
Characteristics
Four features can be used to identify an adaptive radiation:
A common ancestry of component species: specifically a recent ancestry. Note that this is not the same as a monophyly in which all descendants of a common ancestor are included.
A phenotype-environment correlation: a significant association between environments and the morphological and physiological traits used to exploit those environments.
Trait utility: the performance or fitness advantages of trait values in their corresponding environments.
Rapid speciation: presence of one or more bursts in the emergence of new species around the time that ecological and phenotypic divergence is underway.
Conditions
Adaptive radiations are thought to be triggered by an ecological opportunity or a new adaptive zone. Sources of ecological opportunity can be the loss of antagonists (competitors or predators), the evolution of a key innovation, or dispersal to a new environment. Any one of these ecological opportunities has the potential to result in an increase in population size and relaxed stabilizing (constraining) selection. As genetic diversity is positively correlated with population size, the expanded population will have more genetic diversity compared to the ancestral population. With reduced stabilizing selection, phenotypic diversity can also increase. In addition, intraspecific competition will increase, promoting divergent selection to use a wider range of resources. This ecological release provides the potential for ecological speciation and thus adaptive radiation.
Occupying a new environment might take place under the following conditions:
A new habitat has opened up: a volcano, for example, can create new ground in the middle of the ocean. This is the case in places like Hawaii and the Galapagos. For aquatic species, the formation of a large new lake habitat could serve the same purpose; the tectonic movement that formed the East African Rift, ultimately leading to the creation of the Rift Valley Lakes, is an example of this. An extinction event could effectively achieve this same result, opening up niches that were previously occupied by species that no longer exist.
This new habitat is relatively isolated. When a volcano erupts on the mainland and destroys an adjacent forest, it is likely that the terrestrial plant and animal species that used to live in the destroyed region will recolonize without evolving greatly. However, if a newly formed habitat is isolated, the species that colonize it will likely be somewhat random and uncommon arrivals.
The new habitat has a wide availability of niche space. The rare colonist can only adaptively radiate into as many forms as there are niches.
Relationship between mass-extinctions and mass adaptive radiations
A 2020 study found there to be no direct causal relationship between the proportionally most comparable mass radiations and extinctions in terms of "co-occurrence of species", substantially challenging the hypothesis of "creative mass extinctions".
Examples
Darwin's finches
Darwin's finches on the Galapagos Islands are a model system for the study of adaptive radiation. Today represented by approximately 15 species, Darwin's finches are Galapagos endemics famously adapted for a specialized feeding behavior (although one species, the Cocos finch (Pinaroloxias inornata), is not found in the Galapagos but on the island of Cocos south of Costa Rica). Darwin's finches are not actually finches in the true sense, but are members of the tanager family Thraupidae, and are derived from a single ancestor that arrived in the Galapagos from mainland South America perhaps just 3 million years ago. Excluding the Cocos finch, each species of Darwin's finch is generally widely distributed in the Galapagos and fills the same niche on each island. For the ground finches, this niche is a diet of seeds, and they have thick bills to facilitate the consumption of these hard materials. The ground finches are further specialized to eat seeds of a particular size: the large ground finch (Geospiza magnirostris) is the largest species of Darwin's finch and has the thickest beak for breaking open the toughest seeds, the small ground finch (Geospiza fuliginosa) has a smaller beak for eating smaller seeds, and the medium ground finch (Geospiza fortis) has a beak of intermediate size for optimal consumption of intermediately sized seeds (relative to G. magnirostris and G. fuliginosa). There is some overlap: for example, the most robust medium ground finches could have beaks larger than those of the smallest large ground finches. Because of this overlap, it can be difficult to tell the species apart by eye, though their songs differ. These three species often occur sympatrically, and during the rainy season in the Galapagos, when food is plentiful, they specialize little and eat the same, easily accessible foods. It was not well understood why their beaks were so adapted until Peter and Rosemary Grant studied their feeding behavior in the long dry season, and discovered that when food is scarce, the ground finches use their specialized beaks to eat the seeds that they are best suited to eat and thus avoid starvation.
The other finches in the Galapagos are similarly uniquely adapted for their particular niche. The cactus finches (Geospiza sp.) have somewhat longer beaks than the ground finches that serve the dual purpose of allowing them to feed on Opuntia cactus nectar and pollen while these plants are flowering, but on seeds during the rest of the year. The warbler-finches (Certhidea sp.) have short, pointed beaks for eating insects. The woodpecker finch (Camarhynchus pallidus) has a slender beak which it uses to pick at wood in search of insects; it also uses small sticks to reach insect prey inside the wood, making it one of the few animals that use tools.
The mechanism by which the finches initially diversified is still an area of active research. One proposition is that the finches were able to have a non-adaptive, allopatric speciation event on separate islands in the archipelago, such that when they reconverged on some islands, they were able to maintain reproductive isolation. Once they occurred in sympatry, niche specialization was favored so that the different species competed less directly for resources. This second, sympatric event was adaptive radiation.
Cichlids of the African Great Lakes
The haplochromine cichlid fishes in the Great Lakes of the East African Rift (particularly in Lake Tanganyika, Lake Malawi, and Lake Victoria) form the most speciose modern example of adaptive radiation. These lakes are believed to be home to about 2,000 different species of cichlid, spanning a wide range of ecological roles and morphological characteristics. Cichlids in these lakes fill nearly all of the roles typically filled by many fish families, including those of predators, scavengers, and herbivores, with varying dentitions and head shapes to match their dietary habits. In each case, the radiation events are only a few million years old, making the high level of speciation particularly remarkable. Several factors could be responsible for this diversity: the availability of a multitude of niches probably favored specialization, as few other fish taxa are present in the lakes (meaning that sympatric speciation was the most probable mechanism for initial specialization). Also, continual changes in the water level of the lakes during the Pleistocene (which often turned the largest lakes into several smaller ones) could have created the conditions for secondary allopatric speciation.
Tanganyika cichlids
Lake Tanganyika is the site from which nearly all the cichlid lineages of East Africa (including both riverine and lake species) originated. Thus, the species in the lake constitute a single adaptive radiation event but do not form a single monophyletic clade. Lake Tanganyika is also the least speciose of the three largest African Great Lakes, with only around 200 species of cichlid; however, these cichlids are more morphologically divergent and ecologically distinct than their counterparts in lakes Malawi and Victoria, an artifact of Lake Tanganyika's older cichlid fauna. Lake Tanganyika itself is believed to have formed 9–12 million years ago, putting a recent cap on the age of the lake's cichlid fauna. Many of Tanganyika's cichlids live very specialized lifestyles. The giant or emperor cichlid (Boulengerochromis microlepis) is a piscivore often ranked the largest of all cichlids (though it competes for this title with South America's Cichla temensis, the speckled peacock bass). It is thought that giant cichlids spawn only a single time, breeding in their third year and defending their young until they reach a large size, before dying of starvation some time thereafter. The three species of Altolamprologus are also piscivores, but with laterally compressed bodies and thick scales enabling them to chase prey into thin cracks in rocks without damaging their skin. Plecodus straeleni has evolved large, strangely curved teeth that are designed to scrape scales off of the sides of other fish, scales being its main source of food. Gnathochromis permaxillaris possesses a large mouth with a protruding upper lip, and feeds by opening this mouth downward onto the sandy lake bottom, sucking in small invertebrates. A number of Tanganyika's cichlids are shell-brooders, meaning that mating pairs lay and fertilize their eggs inside of empty shells on the lake bottom. Lamprologus callipterus is a unique egg-brooding species, with 15 cm-long males amassing collections of shells and guarding them in the hopes of attracting females (about 6 cm in length) to lay eggs in these shells. These dominant males must defend their territories from three types of rival: (1) other dominant males looking to steal shells; (2) younger, "sneaker" males looking to fertilize eggs in a dominant male's territory; and (3) tiny, 2–4 cm "parasitic dwarf" males that also attempt to rush in and fertilize eggs in the dominant male's territory. These parasitic dwarf males never grow to the size of dominant males, and the male offspring of dominant and parasitic dwarf males grow with 100% fidelity into the form of their fathers. A number of other highly specialized Tanganyika cichlids exist aside from these examples, including those adapted for life in open lake water up to 200m deep.
Malawi cichlids
The cichlids of Lake Malawi constitute a "species flock" of up to 1000 endemic species. Only seven cichlid species in Lake Malawi are not a part of the species flock: the Eastern happy (Astatotilapia calliptera), the sungwa (Serranochromis robustus), and five tilapia species (genera Oreochromis and Coptodon). All of the other cichlid species in the lake are descendants of a single original colonist species, which itself was descended from Tanganyikan ancestors. The common ancestor of Malawi's species flock is believed to have reached the lake 3.4 million years ago at the earliest, making Malawi cichlids' diversification into their present numbers particularly rapid. Malawi's cichlids span a similar range of feeding behaviors to those of Tanganyika, but also show signs of a much more recent origin. For example, all members of the Malawi species flock are mouth-brooders, meaning the female keeps her eggs in her mouth until they hatch; in almost all species, the eggs are also fertilized in the female's mouth, and in a few species, the females continue to guard their fry in their mouth after they hatch. Males of most species display predominantly blue coloration when mating. However, a number of particularly divergent species are known from Malawi, including the piscivorous Nimbochromis livingstonii, which lies on its side in the substrate until small cichlids, perhaps drawn to its broken white patterning, come to inspect the predator, at which point they are swiftly eaten.
Victoria's cichlids
Lake Victoria's cichlids are also a species flock, once composed of some 500 or more species. The deliberate introduction of the Nile perch (Lates niloticus) in the 1950s proved disastrous for Victoria's cichlids: the collective biomass of the species flock has decreased substantially, and an unknown number of species have become extinct. However, the original range of morphological and behavioral diversity seen in the lake's cichlid fauna is still mostly present today, if endangered. These again include cichlids specialized for niches across the trophic spectrum, as in Tanganyika and Malawi, but again, there are standouts. Victoria is famously home to many piscivorous cichlid species, some of which feed by sucking the contents out of mouthbrooding females' mouths. Victoria's cichlids constitute a far younger radiation than even that of Lake Malawi, with estimates of the age of the flock ranging from 200,000 years to as little as 14,000.
Adaptive radiation in Hawaii
Hawaii has served as the site of a number of adaptive radiation events, owing to its isolation, recent origin, and large land area. The three most famous examples of these radiations are presented below, though insects like the Hawaiian drosophilid flies and Hyposmocoma moths have also undergone adaptive radiation.
Hawaiian honeycreepers
The Hawaiian honeycreepers form a large, highly morphologically diverse species group of birds that began radiating in the early days of the Hawaiian archipelago. While today only 17 species are known to persist in Hawaii (3 more may or may not be extinct), there were more than 50 species prior to Polynesian colonization of the archipelago (between 18 and 21 species have gone extinct since the discovery of the islands by westerners). The Hawaiian honeycreepers are known for their beaks, which are specialized to satisfy a wide range of dietary needs: for example, the beak of the ʻakiapōlāʻau (Hemignathus wilsoni) is characterized by a short, sharp lower mandible for scraping bark off of trees, and the much longer, curved upper mandible is used to probe the wood underneath for insects. Meanwhile, the ʻiʻiwi (Drepanis coccinea) has a very long curved beak for reaching nectar deep in Lobelia flowers. An entire clade of Hawaiian honeycreepers, the tribe Psittirostrini, is composed of thick-billed, mostly seed-eating birds, like the Laysan finch (Telespiza cantans). In at least some cases, similar morphologies and behaviors appear to have evolved convergently among the Hawaiian honeycreepers; for example, the short, pointed beaks of Loxops and Oreomystis evolved separately despite once forming the justification for lumping the two genera together. The Hawaiian honeycreepers are believed to have descended from a single common ancestor some 15 to 20 million years ago, though estimates range as low as 3.5 million years.
Hawaiian silverswords
Adaptive radiation is not a strictly vertebrate phenomenon, and examples are also known from among plants. The most famous example of adaptive radiation in plants is quite possibly the Hawaiian silverswords, named for alpine desert-dwelling Argyroxiphium species with long, silvery leaves that live for up to 20 years before growing a single flowering stalk and then dying. The Hawaiian silversword alliance consists of twenty-eight species of Hawaiian plants which, aside from the namesake silverswords, include trees, shrubs, vines, cushion plants, and more. The silversword alliance is believed to have originated in Hawaii no more than 6 million years ago, making this one of Hawaii's youngest adaptive radiation events. This means that the silverswords evolved on Hawaii's modern high islands, and descended from a single common ancestor that arrived on Kauai from western North America. The closest modern relatives of the silverswords are California tarweeds of the family Asteraceae.
Hawaiian lobelioids
Hawaii is also the site of a separate major floral adaptive radiation event: the Hawaiian lobelioids. The Hawaiian lobelioids are significantly more speciose than the silverswords, perhaps because they have been present in Hawaii for so much longer: they descended from a single common ancestor that arrived in the archipelago up to 15 million years ago. Today the Hawaiian lobelioids form a clade of over 125 species, including succulents, trees, shrubs, and epiphytes, among other growth forms. Many species have been lost to extinction, and many of the surviving species are endangered.
Caribbean anoles
Anole lizards are distributed broadly in the New World, from the Southeastern US to South America. With over 400 species currently recognized, often placed in a single genus (Anolis), they constitute one of the largest radiation events among all lizards. Anole radiation on the mainland has largely been a process of speciation, and is not adaptive to any great degree, but anoles on each of the Greater Antilles (Cuba, Hispaniola, Puerto Rico, and Jamaica) have adaptively radiated in separate, convergent ways. On each of these islands, anoles have evolved with such a consistent set of morphological adaptations that each species can be assigned to one of six "ecomorphs": trunk–ground, trunk–crown, grass–bush, crown–giant, twig, and trunk. Take for example crown–giants from each of these islands: the Cuban Anolis luteogularis, Hispaniola's Anolis ricordii, Puerto Rico's Anolis cuvieri, and Jamaica's Anolis garmani (Cuba and Hispaniola are both home to more than one species of crown–giant). These anoles are all large, canopy-dwelling species with large heads and large lamellae (scales on the undersides of the fingers and toes that are important for traction in climbing), and yet none of these species are particularly closely related and appear to have evolved these similar traits independently. The same can be said of the other five ecomorphs across the Caribbean's four largest islands. Much like in the case of the cichlids of the three largest African Great Lakes, each of these islands is home to its own convergent Anolis adaptive radiation event.
Other examples
Presented above are the most well-documented examples of modern adaptive radiation, but other examples are known. Populations of three-spined sticklebacks have repeatedly diverged and evolved into distinct ecotypes. On Madagascar, birds of the family Vangidae are marked by very distinct beak shapes to suit their ecological roles. Madagascan mantellid frogs have radiated into forms that mirror other tropical frog faunas, with the brightly colored mantellas (Mantella) having evolved convergently with the Neotropical poison dart frogs of Dendrobatidae, while the arboreal Boophis species are the Madagascan equivalent of tree frogs and glass frogs. The pseudoxyrhophiine snakes of Madagascar have evolved into fossorial, arboreal, terrestrial, and semi-aquatic forms that converge with the colubroid faunas in the rest of the world. These Madagascan examples are significantly older than most of the other examples presented here: Madagascar's fauna has been evolving in isolation since the island split from India some 88 million years ago, and the Mantellidae originated around 50 mya. Older examples are known: the K-Pg extinction event, which caused the disappearance of the dinosaurs and most other reptilian megafauna 65 million years ago, is seen as having triggered a global adaptive radiation event that created the mammal diversity that exists today. The Cambrian explosion is another: vacant niches left by the extinction of the Ediacaran biota during the end-Ediacaran mass extinction were filled by newly emerging phyla.
See also
Cambrian explosion—the most notable evolutionary radiation event
Evolutionary radiation—a more general term to describe any radiation
List of adaptive radiated Hawaiian honeycreepers by form
List of adaptive radiated marsupials by form
Nonadaptive radiation
References
Further reading
Wilson, E.; Eisner, T.; Briggs, W.; Dickerson, R.; Metzenberg, R.; O'Brien, R.; Susman, M.; Boggs, W. 1974. Life on Earth. Sinauer Associates, Stamford, Connecticut. Chapters "The Multiplication of Species" and "Biogeography", pp. 824–877. Covers the Galápagos Islands, Hawaii, and the Australian subcontinent, as well as St. Helena Island and other locales.
Leakey, Richard. The Origin of Humankind—on adaptive radiation in biology and human evolution, pp. 28–32, 1994, Orion Publishing.
Grant, P.R. 1999. The ecology and evolution of Darwin's Finches. Princeton University Press, Princeton, NJ.
Mayr, Ernst. 2001. What evolution is. Basic Books, New York, NY.
Gavrilets, S. and A. Vose. 2009. Dynamic patterns of adaptive radiation: evolution of mating preferences. In Butlin, R.K., J. Bridle, and D. Schluter (eds) Speciation and Patterns of Diversity, Cambridge University Press, pp. 102–126.
Pinto, Gabriel; Luke Mahler; Luke J. Harmon; and Jonathan B. Losos. 2008. "Testing the Island Effect in Adaptive Radiation: Rates and Patterns of Morphological Diversification in Caribbean and Mainland Anolis Lizards."
Schluter, Dolph. The ecology of adaptive radiation. Oxford University Press, 2000.
Speciation
Evolutionary biology terminology
Adaptation
In biology, adaptation has three related meanings. Firstly, it is the dynamic evolutionary process of natural selection that fits organisms to their environment, enhancing their evolutionary fitness. Secondly, it is a state reached by the population during that process. Thirdly, it is a phenotypic trait or adaptive trait, with a functional role in each individual organism, that is maintained and has evolved through natural selection.
Historically, adaptation has been described from the time of the ancient Greek philosophers such as Empedocles and Aristotle. In 18th and 19th century natural theology, adaptation was taken as evidence for the existence of a deity. Charles Darwin and Alfred Russel Wallace proposed instead that it was explained by natural selection.
Adaptation is related to biological fitness, which governs the rate of evolution as measured by change in allele frequencies. Often, two or more species co-adapt and co-evolve as they develop adaptations that interlock with those of the other species, such as with flowering plants and pollinating insects. In mimicry, species evolve to resemble other species; in Müllerian mimicry this is a mutually beneficial co-evolution, as each of a group of strongly defended species (such as wasps able to sting) comes to advertise its defenses in the same way. Features evolved for one purpose may be co-opted for a different one, as when the insulating feathers of dinosaurs were co-opted for bird flight.
Adaptation is a major topic in the philosophy of biology, as it concerns function and purpose (teleology). Some biologists try to avoid terms which imply purpose in adaptation, not least because it suggests a deity's intentions, but others note that adaptation is necessarily purposeful.
History
Adaptation is an observable fact of life accepted by philosophers and natural historians from ancient times, independently of their views on evolution, but their explanations differed. Empedocles did not believe that adaptation required a final cause (a purpose), but thought that it "came about naturally, since such things survived." Aristotle did believe in final causes, but assumed that species were fixed.
In natural theology, adaptation was interpreted as the work of a deity and as evidence for the existence of God. William Paley believed that organisms were perfectly adapted to the lives they led, an argument that echoed Gottfried Wilhelm Leibniz, who had argued that God had brought about "the best of all possible worlds." Voltaire's Dr. Pangloss, in the satire Candide, is a parody of this optimistic idea, and David Hume also argued against design. Charles Darwin broke with the tradition by emphasising the flaws and limitations which occurred in the animal and plant worlds.
Jean-Baptiste Lamarck proposed a tendency for organisms to become more complex, moving up a ladder of progress, plus "the influence of circumstances", usually expressed as use and disuse. This second, subsidiary element of his theory is what is now called Lamarckism, a proto-evolutionary hypothesis of the inheritance of acquired characteristics, intended to explain adaptations by natural means.
Other natural historians, such as Buffon, accepted adaptation, and some also accepted evolution, without voicing their opinions as to the mechanism. This illustrates the real merit of Darwin and Alfred Russel Wallace, and secondary figures such as Henry Walter Bates, for putting forward a mechanism whose significance had only been glimpsed previously. A century later, experimental field studies and breeding experiments by people such as E. B. Ford and Theodosius Dobzhansky produced evidence that natural selection was not only the 'engine' behind adaptation, but was a much stronger force than had previously been thought.
General principles
What adaptation is
Adaptation is primarily a process rather than a physical form or part of a body. An internal parasite (such as a liver fluke) can illustrate the distinction: such a parasite may have a very simple bodily structure, but nevertheless the organism is highly adapted to its specific environment. From this we see that adaptation is not just a matter of visible traits: in such parasites critical adaptations take place in the life cycle, which is often quite complex. However, as a practical term, "adaptation" often refers to a product: those features of a species which result from the process. Many aspects of an animal or plant can be correctly called adaptations, though there are always some features whose function remains in doubt. By using the term adaptation for the evolutionary process, and adaptive trait for the bodily part or function (the product), one may distinguish the two different senses of the word.
Adaptation is one of the two main processes that explain the observed diversity of species, such as the different species of Darwin's finches. The other process is speciation, in which new species arise, typically through reproductive isolation. An example widely used today to study the interplay of adaptation and speciation is the evolution of cichlid fish in African lakes, where the question of reproductive isolation is complex.
Adaptation is not always a simple matter where the ideal phenotype evolves for a given environment. An organism must be viable at all stages of its development and at all stages of its evolution. This places constraints on the evolution of development, behaviour, and structure of organisms. The main constraint, over which there has been much debate, is the requirement that each genetic and phenotypic change during evolution should be relatively small, because developmental systems are so complex and interlinked. However, it is not clear what "relatively small" should mean, for example polyploidy in plants is a reasonably common large genetic change. The origin of eukaryotic endosymbiosis is a more dramatic example.
All adaptations help organisms survive in their ecological niches. The adaptive traits may be structural, behavioural or physiological. Structural adaptations are physical features of an organism, such as shape, body covering, armament, and internal organization. Behavioural adaptations are inherited systems of behaviour, whether inherited in detail as instincts, or as a neuropsychological capacity for learning. Examples include searching for food, mating, and vocalizations. Physiological adaptations permit the organism to perform special functions such as making venom, secreting slime, and phototropism, but also involve more general functions such as growth and development, temperature regulation, ionic balance and other aspects of homeostasis. Adaptation affects all aspects of the life of an organism.
The following definitions are given by the evolutionary biologist Theodosius Dobzhansky:
1. Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
2. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
3. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
What adaptation is not
Adaptation differs from flexibility, acclimatization, and learning, all of which are changes during life which are not inherited. Flexibility deals with the relative capacity of an organism to maintain itself in different habitats: its degree of specialization. Acclimatization describes automatic physiological adjustments during life; learning means alteration in behavioural performance during life.
Flexibility stems from phenotypic plasticity, the ability of an organism with a given genotype (genetic type) to change its phenotype (observable characteristics) in response to changes in its habitat, or to move to a different habitat. The degree of flexibility is inherited, and varies between individuals. A highly specialized animal or plant lives only in a well-defined habitat, eats a specific type of food, and cannot survive if its needs are not met. Many herbivores are like this; extreme examples are koalas which depend on Eucalyptus, and giant pandas which require bamboo. A generalist, on the other hand, eats a range of food, and can survive in many different conditions. Examples are humans, rats, crabs and many carnivores. The tendency to behave in a specialized or exploratory manner is inherited—it is an adaptation. Rather different is developmental flexibility: "An animal or plant is developmentally flexible if when it is raised in or transferred to new conditions, it changes in structure so that it is better fitted to survive in the new environment," writes the evolutionary biologist John Maynard Smith.
If humans move to a higher altitude, respiration and physical exertion become a problem, but after spending time in high altitude conditions they acclimatize to the reduced partial pressure of oxygen, such as by producing more red blood cells. The ability to acclimatize is an adaptation, but the acclimatization itself is not. The reproductive rate declines, but deaths from some tropical diseases also go down. Over a longer period of time, some people are better able to reproduce at high altitudes than others. They contribute more heavily to later generations, and gradually by natural selection the whole population becomes adapted to the new conditions. This has demonstrably occurred, as the observed performance of long-term communities at higher altitude is significantly better than the performance of new arrivals, even when the new arrivals have had time to acclimatize.
Adaptedness and fitness
There is a relationship between adaptedness and the concept of fitness used in population genetics. Differences in fitness between genotypes predict the rate of evolution by natural selection. Natural selection changes the relative frequencies of alternative phenotypes, insofar as they are heritable. However, a phenotype with high adaptedness may not have high fitness. Dobzhansky mentioned the example of the Californian redwood, which is highly adapted, but a relict species in danger of extinction. Elliott Sober commented that adaptation was a retrospective concept since it implied something about the history of a trait, whereas fitness predicts a trait's future.
1. Relative fitness. The average contribution to the next generation by a genotype or a class of genotypes, relative to the contributions of other genotypes in the population. This is also known as Darwinian fitness, selection coefficient, and other terms.
2. Absolute fitness. The absolute contribution to the next generation by a genotype or a class of genotypes. Also known as the Malthusian parameter when applied to the population as a whole.
3. Adaptedness. The extent to which a phenotype fits its local ecological niche. Researchers can sometimes test this through a reciprocal transplant.
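As a worked illustration of these definitions (the numbers are invented for the example, not drawn from any study): if genotype A leaves on average 2.0 surviving offspring per individual per generation and genotype B leaves 1.6, the absolute fitnesses are W_A = 2.0 and W_B = 1.6. Taking the fitter genotype as the reference, the relative fitnesses are

w_A = 1, \qquad w_B = \frac{W_B}{W_A} = \frac{1.6}{2.0} = 0.8,

giving a selection coefficient s = 1 - w_B = 0.2 against B; other things being equal, B declines in frequency under natural selection.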
Sewall Wright proposed that populations occupy adaptive peaks on a fitness landscape. To evolve to another, higher peak, a population would first have to pass through a valley of maladaptive intermediate stages, and might be "trapped" on a peak that is not optimally adapted.
Types
Changes in habitat
Before Darwin, adaptation was seen as a fixed relationship between an organism and its habitat. It was not appreciated that as the climate changed, so did the habitat; and as the habitat changed, so did the biota. Also, habitats are subject to changes in their biota: for example, invasions of species from other areas. The relative numbers of species in a given habitat are always changing. Change is the rule, though much depends on the speed and degree of the change.
When the habitat changes, three main things may happen to a resident population: habitat tracking, genetic change or extinction. In fact, all three things may occur in sequence. Of these three effects only genetic change brings about adaptation.
When a habitat changes, the resident population typically moves to more suitable places; this is the typical response of flying insects or oceanic organisms, which have wide (though not unlimited) opportunity for movement. This common response is called habitat tracking. It is one explanation put forward for the periods of apparent stasis in the fossil record (the punctuated equilibrium theory).
Genetic change
Without mutation, the ultimate source of all genetic variation, there would be no genetic changes and no subsequent adaptation through evolution by natural selection. Genetic change occurs in a population when mutation introduces new variation and random genetic drift, migration, recombination, or natural selection then act on that variation, increasing or decreasing its frequency. One example is that the first pathways of enzyme-based metabolism at the very origin of life on Earth may have been co-opted components of the already-existing purine nucleotide metabolism, a metabolic pathway that evolved in an ancient RNA world. The co-option requires new mutations and, through natural selection, the population then adapts genetically to its present circumstances. Genetic changes may result in entirely new or gradual change to visible structures, or they may adjust physiological activity in a way that suits the habitat. The varying shapes of the beaks of Darwin's finches, for example, are driven by adaptive mutations in the ALX1 gene. The coat color of different wild mouse species matches their environments, whether black lava or light sand, owing to adaptive mutations in the melanocortin 1 receptor and other melanin pathway genes. Physiological resistance to the heart poisons (cardiac glycosides) that monarch butterflies store in their bodies to protect themselves from predators is driven by adaptive mutations in the target of the poison, the sodium pump, resulting in target site insensitivity. These same adaptive mutations, and similar changes at the same amino acid sites, were found to evolve in a parallel manner in distantly related insects that feed on the same plants, and even in a bird that feeds on monarchs, through convergent evolution, a hallmark of adaptation. Convergence at the gene level across distantly related species can arise because of evolutionary constraint.
Habitats and biota do frequently change over time and space. Therefore, it follows that the process of adaptation is never fully complete. Over time, it may happen that the environment changes little, and the species comes to fit its surroundings better and better, resulting in stabilizing selection. On the other hand, it may happen that changes in the environment occur suddenly, and then the species becomes less and less well adapted. The only way for it to climb back up that fitness peak is via the introduction of new genetic variation for natural selection to act upon. Seen like this, adaptation is a genetic tracking process, which goes on all the time to some extent, but especially when the population cannot or does not move to another, less hostile area. Given enough genetic change, as well as specific demographic conditions, an adaptation may be enough to bring a population back from the brink of extinction in a process called evolutionary rescue. Adaptation does affect, to some extent, every species in a particular ecosystem.
Leigh Van Valen thought that even in a stable environment, because of antagonistic species interactions and limited resources, a species must constantly adapt to maintain its relative standing. This became known as the Red Queen hypothesis, as seen in host-parasite interactions.
Existing genetic variation and mutation were the traditional sources of material on which natural selection could act. In addition, horizontal gene transfer is possible between organisms in different species, using mechanisms as varied as gene cassettes, plasmids, transposons and viruses such as bacteriophages.
Co-adaptation
In coevolution, where the existence of one species is tightly bound up with the life of another species, new or 'improved' adaptations which occur in one species are often followed by the appearance and spread of corresponding features in the other species. In other words, each species triggers reciprocal natural selection in the other. These co-adaptational relationships are intrinsically dynamic, and may continue on a trajectory for millions of years, as has occurred in the relationship between flowering plants and pollinating insects.
Mimicry
Bates' work on Amazonian butterflies led him to develop the first scientific account of mimicry, especially the kind of mimicry which bears his name: Batesian mimicry. This is the mimicry by a palatable species of an unpalatable or noxious species (the model), gaining a selective advantage as predators avoid the model and therefore also the mimic. Mimicry is thus an anti-predator adaptation. A common example seen in temperate gardens is the hoverfly (Syrphidae), many of which—though bearing no sting—mimic the warning coloration of aculeate Hymenoptera (wasps and bees). Such mimicry does not need to be perfect to improve the survival of the palatable species.
Bates, Wallace and Fritz Müller believed that Batesian and Müllerian mimicry provided evidence for the action of natural selection, a view which is now standard amongst biologists.
Trade-offs
All adaptations have a downside: horse legs are great for running on grass, but they cannot scratch their backs; mammals' hair helps with temperature regulation, but offers a niche for ectoparasites; the only flying penguins do is under water. Adaptations serving different functions may be mutually destructive. Compromise and makeshift occur widely, not perfection. Selection pressures pull in different directions, and the adaptation that results is some kind of compromise.
Examples
Consider the antlers of the Irish elk (often supposed to be far too large; in deer, antler size has an allometric relationship to body size). Antlers serve for defence against predators and to score victories in the annual rut. But they are costly in terms of resources. Their size during the last glacial period presumably depended on the relative gain and loss of reproductive capacity in the population of elks during that time. As another example, camouflage to avoid detection is destroyed when vivid coloration is displayed at mating time. Here the risk to life is counterbalanced by the necessity for reproduction.
Stream-dwelling salamanders, such as the Caucasian salamander and the gold-striped salamander, have very slender, elongated bodies, well adapted to life on the banks of fast, small rivers and mountain brooks. An elongated body protects their larvae from being washed away by the current. However, an elongated body increases the risk of desiccation, decreases the salamanders' dispersal ability, and negatively affects their fecundity. As a result, the fire salamander, less perfectly adapted to mountain brook habitats, is in general more successful, with higher fecundity and a broader geographic range.
The peacock's ornamental train (grown anew in time for each mating season) is a famous adaptation. It must reduce his maneuverability and flight, and is hugely conspicuous; also, its growth costs food resources. Darwin's explanation of its advantage was in terms of sexual selection: "This depends on the advantage which certain individuals have over other individuals of the same sex and species, in exclusive relation to reproduction." The kind of sexual selection represented by the peacock is called 'mate choice,' with an implication that the process selects the more fit over the less fit, and so has survival value. The recognition of sexual selection was for a long time in abeyance, but has been rehabilitated.
The conflict between the size of the human foetal brain at birth (which cannot be larger than about 400 cm3, else it will not get through the mother's pelvis) and the size needed for an adult brain (about 1400 cm3) means the brain of a newborn child is quite immature. The most vital things in human life (locomotion, speech) just have to wait while the brain grows and matures. That is the result of the birth compromise. Much of the problem comes from our upright bipedal stance, without which our pelvis could be shaped more suitably for birth. Neanderthals had a similar problem.
As another example, the long neck of a giraffe brings benefits but at a cost. The neck of a giraffe can be up to 2 m (6 ft 7 in) in length. The benefits are that it can be used for inter-species competition or for foraging on tall trees where shorter herbivores cannot reach. The cost is that a long neck is heavy and adds to the animal's body mass, requiring additional energy to build the neck and to carry its weight around.
Shifts in function
Pre-adaptation
Pre-adaptation occurs when a population has characteristics which by chance are suited for a set of conditions not previously experienced. For example, the polyploid cordgrass Spartina townsendii is better adapted than either of its parent species to their own habitat of saline marsh and mud-flats. Among domestic animals, the White Leghorn chicken is markedly more resistant to vitamin B1 deficiency than other breeds; on a plentiful diet this makes no difference, but on a restricted diet this preadaptation could be decisive.
Pre-adaptation may arise because a natural population carries a huge quantity of genetic variability. In diploid eukaryotes, this is a consequence of the system of sexual reproduction, where mutant alleles get partially shielded, for example, by genetic dominance. Microorganisms, with their huge populations, also carry a great deal of genetic variability. The first experimental evidence of the pre-adaptive nature of genetic variants in microorganisms was provided by Salvador Luria and Max Delbrück, who developed the Fluctuation Test, a method to show the random fluctuation of pre-existing genetic changes that conferred resistance to bacteriophages in Escherichia coli. The term pre-adaptation is controversial because it is teleological, and because the entire concept of natural selection depends on the presence of genetic variation, regardless of the population size of the species in question.
Co-option of existing traits: exaptation
Features that now appear as adaptations sometimes arose by co-option of existing traits, evolved for some other purpose. The classic example is the ear ossicles of mammals, which we know from paleontological and embryological evidence originated in the upper and lower jaws and the hyoid bone of their synapsid ancestors, and further back still were part of the gill arches of early fish. The word exaptation was coined to cover these common evolutionary shifts in function. The flight feathers of birds evolved from the much earlier feathers of dinosaurs, which might have been used for insulation or for display.
Niche construction
Animals including earthworms, beavers and humans use some of their adaptations to modify their surroundings, so as to maximize their chances of surviving and reproducing. Beavers create dams and lodges, changing the ecosystems of the valleys around them. Earthworms, as Darwin noted, improve the topsoil in which they live by incorporating organic matter. Humans have constructed extensive civilizations with cities in environments as varied as the Arctic and hot deserts.
In all three cases, the construction and maintenance of ecological niches helps drive the continued selection of the genes of these animals, in an environment that the animals have modified.
Non-adaptive traits
Some traits do not appear to be adaptive as they have a neutral or deleterious effect on fitness in the current environment. Because genes often have pleiotropic effects, not all traits may be functional: they may be what Stephen Jay Gould and Richard Lewontin called spandrels, features brought about by neighbouring adaptations, on the analogy with the often highly decorated triangular areas between pairs of arches in architecture, which began as functionless features.
Another possibility is that a trait may have been adaptive at some point in an organism's evolutionary history, but a change in habitats caused what used to be an adaptation to become unnecessary or even maladapted. Such adaptations are termed vestigial. Many organisms have vestigial organs, which are the remnants of fully functional structures in their ancestors. As a result of changes in lifestyle the organs became redundant, and are either not functional or reduced in functionality. Since any structure represents some kind of cost to the general economy of the body, an advantage may accrue from their elimination once they are not functional. Examples: wisdom teeth in humans; the loss of pigment and functional eyes in cave fauna; the loss of structure in endoparasites.
Extinction and coextinction
If a population cannot move or change sufficiently to preserve its long-term viability, then it will become extinct, at least in that locale. The species may or may not survive in other locales. Species extinction occurs when the death rate over the entire species exceeds the birth rate for a long enough period for the species to disappear. It was an observation of Van Valen that groups of species tend to have a characteristic and fairly regular rate of extinction.
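The condition can be stated compactly in the simplest density-independent sketch (a textbook idealization, not a model attributed to Van Valen): with per-capita birth rate b and death rate d, population size follows

N(t) = N_0 \, e^{(b-d)t},

so a sustained d > b drives N(t) exponentially toward extinction.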
Just as there is co-adaptation, there is also coextinction, the loss of a species due to the extinction of another with which it is coadapted, as with the extinction of a parasitic insect following the loss of its host, or when a flowering plant loses its pollinator, or when a food chain is disrupted.
Origin of adaptive capacities
The first stage in the evolution of life on earth is often hypothesized to be the RNA world in which short self-replicating RNA molecules proliferated before the evolution of DNA and proteins. By this hypothesis, life started when RNA chains began to self-replicate, initiating the three mechanisms of Darwinian selection: heritability, variation of type, and competition for resources. The fitness of an RNA replicator (its per capita rate of increase) would likely have been a function of its intrinsic adaptive capacities, determined by its nucleotide sequence, and the availability of resources. The three primary adaptive capacities may have been: (1) replication with moderate fidelity, giving rise to heritability while allowing variation of type, (2) resistance to decay, and (3) acquisition of resources. These adaptive capacities would have been determined by the folded configurations of the RNA replicators resulting from their nucleotide sequences.
Philosophical issues
Adaptation raises philosophical issues concerning how biologists speak of function and purpose, as this carries implications of evolutionary history – that a feature evolved by natural selection for a specific reason – and potentially of supernatural intervention – that features and organisms exist because of a deity's conscious intentions. In his biology, Aristotle introduced teleology to describe the adaptedness of organisms, but without accepting the supernatural intention built into Plato's thinking, which Aristotle rejected. Modern biologists continue to face the same difficulty. On the one hand, adaptation is purposeful: natural selection chooses what works and eliminates what does not. On the other hand, biologists by and large reject conscious purpose in evolution. The dilemma gave rise to a famous joke by the evolutionary biologist J. B. S. Haldane: "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public." David Hull commented that Haldane's mistress "has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it." Ernst Mayr stated that "adaptedness... is an a posteriori result rather than an a priori goal-seeking", meaning that the question of whether something is an adaptation can only be determined after the event.
See also
Adaptive evolution in the human genome
Adaptive memory
Adaptive mutation
Adaptive system
Anti-predator adaptation
Body reactivity
Ecological trap
Evolutionary pressure
Evolvability
Intragenomic conflict
Neutral theory of molecular evolution
References
Sources
"Based on a conference held at the Mote Marine Laboratory in Sarasota, Fla., May 20–24, 1990."
"Papers by Dobzhansky and his collaborators, originally published 1937-1975 in various journals."
"Based on a conference held in Bellagio, Italy, June 25–30, 1989"
Biological evolution
Biology terminology
Evolutionary biology terminology
Ecophysiology
Ecophysiology (from Greek oikos, "house(hold)"; physis, "nature, origin"; and -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.
Plants
Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.
In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.
Light
Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves, and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light intensity is described by the light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, called the quantum efficiency; the x-intercept is the light intensity at which biochemical assimilation (gross assimilation) balances leaf respiration so that the net CO2 exchange of the leaf is zero, called the light compensation point; and the horizontal asymptote represents the maximum assimilation rate. Sometimes, after reaching the maximum, assimilation declines owing to processes collectively known as photoinhibition.
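A common way of writing the non-rectangular hyperbola (a standard parameterisation from the plant-physiology literature; the symbols here are chosen for illustration: I irradiance, \phi quantum efficiency, A_max maximum gross assimilation, \theta a curvature factor with 0 < \theta \le 1, and R_d dark respiration) is

\theta A_g^2 - (\phi I + A_{\max}) A_g + \phi I A_{\max} = 0, \qquad A_n = A_g - R_d,

whose smaller root gives the net assimilation

A_n = \frac{\phi I + A_{\max} - \sqrt{(\phi I + A_{\max})^2 - 4\,\theta\,\phi I A_{\max}}}{2\theta} - R_d.

The initial slope of this curve is \phi, the light compensation point is the irradiance at which A_n = 0, and A_n approaches A_{\max} - R_d at saturating light.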
As with most abiotic factors, light intensity (irradiance) can be both suboptimal and excessive. Suboptimal light (shade) typically occurs at the base of a plant canopy or in an understory environment. Shade tolerant plants have a range of adaptations to help them survive the altered quantity and quality of light typical of shade environments.
Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low; this typically occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused.
Light intensity is also an important component in determining the temperature of plant organs (energy budget).
Temperature
In response to extremes of temperature, plants can produce various proteins. These protect them from the damaging effects of ice formation and falling rates of enzyme catalysis at low temperatures, and from enzyme denaturation and increased photorespiration at high temperatures. As temperatures fall, production of antifreeze proteins and dehydrins increases. As temperatures rise, production of heat shock proteins increases. Metabolic imbalances associated with temperature extremes result in the build-up of reactive oxygen species, which can be countered by antioxidant systems. Cell membranes are also affected by changes in temperature, which can cause the membrane to lose its fluid properties and become a gel in cold conditions, or to become leaky in hot conditions. This can affect the movement of compounds across the membrane. To prevent these changes, plants can change the composition of their membranes: in cold conditions, more unsaturated fatty acids are placed in the membrane, and in hot conditions, more saturated fatty acids are inserted.
Plants can avoid overheating by minimising the amount of sunlight absorbed and by enhancing the cooling effects of wind and transpiration. Plants can reduce light absorption using reflective leaf hairs, scales, and waxes. These features are so common in warm dry regions that these habitats can be seen to form a 'silvery landscape' as the light scatters off the canopies. Some species, such as Macroptilium purpureum, can move their leaves throughout the day so that they are always orientated to avoid the sun (paraheliotropism). Knowledge of these mechanisms has been key to breeding for heat stress tolerance in agricultural plants.
Plants can avoid the full impact of low temperatures by altering their microclimate. For example, Raoulia plants found in the uplands of New Zealand are said to resemble 'vegetable sheep' as they form tight cushion-like clumps to insulate the most vulnerable plant parts and shield them from cooling winds. The same principle has been applied in agriculture by using plastic mulch to insulate the growing points of crops in cool climates in order to boost plant growth.
Water
Too much or too little water can damage plants. If there is too little water then tissues will dehydrate and the plant may die. If the soil becomes waterlogged then the soil will become anoxic (low in oxygen), which can kill the roots of the plant.
The ability of plants to access water depends on the structure of their roots and on the water potential of the root cells. When soil water content is low, plants can alter their water potential to maintain a flow of water into the roots and up to the leaves (Soil plant atmosphere continuum). This remarkable mechanism allows plants to lift water as high as 120 m by harnessing the gradient created by transpiration from the leaves.
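The magnitude of the gradient involved can be estimated from hydrostatics alone (a back-of-the-envelope calculation, not a figure from this article): merely supporting a static water column of height h = 120 m requires a pressure difference of

\Delta P = \rho g h \approx 1000\ \mathrm{kg\,m^{-3}} \times 9.8\ \mathrm{m\,s^{-2}} \times 120\ \mathrm{m} \approx 1.2\ \mathrm{MPa},

before any frictional resistance to flow is counted, so leaf water potential in the tallest trees must be more negative than about -1.2 MPa relative to the soil.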
In very dry soil, plants close their stomata to reduce transpiration and prevent water loss. The closing of the stomata is often mediated by chemical signals from the root (i.e., abscisic acid). In irrigated fields, the fact that plants close their stomata in response to drying of the roots can be exploited to 'trick' plants into using less water without reducing yields (see partial rootzone drying). The use of this technique was largely developed by Dr Peter Dry and colleagues in Australia.
If drought continues, the plant tissues will dehydrate, resulting in a loss of turgor pressure that is visible as wilting. As well as closing their stomata, most plants can also respond to drought by altering their water potential (osmotic adjustment) and increasing root growth. Plants that are adapted to dry environments (Xerophytes) have a range of more specialized mechanisms to maintain water and/or protect tissues when desiccation occurs.
Waterlogging reduces the supply of oxygen to the roots and can kill a plant within days. Plants cannot avoid waterlogging, but many species overcome the lack of oxygen in the soil by transporting oxygen to the root from tissues that are not submerged. Species that are tolerant of waterlogging develop specialised roots near the soil surface and aerenchyma to allow the diffusion of oxygen from the shoot to the root. Roots that are not killed outright may also switch to less oxygen-hungry forms of cellular respiration. Species that are frequently submerged have evolved more elaborate mechanisms that maintain root oxygen levels, such as the aerial roots seen in mangrove forests.
However, for many terminally overwatered houseplants, the initial symptoms of waterlogging can resemble those due to drought. This is particularly true for flood-sensitive plants that show drooping of their leaves due to epinasty (rather than wilting).
CO2 concentration
CO2 is vital for plant growth, as it is the substrate for photosynthesis. Plants take in CO2 through stomatal pores on their leaves. At the same time as CO2 enters the stomata, moisture escapes. This trade-off between CO2 gain and water loss is central to plant productivity. The trade-off is all the more critical as Rubisco, the enzyme used to capture CO2, is efficient only when there is a high concentration of CO2 in the leaf. Some plants overcome this difficulty by concentrating CO2 within their leaves using C4 carbon fixation or Crassulacean acid metabolism. However, most species use C3 carbon fixation and must open their stomata to take in CO2 whenever photosynthesis is taking place.
The concentration of CO2 in the atmosphere is rising due to deforestation and the combustion of fossil fuels. This would be expected to increase the efficiency of photosynthesis and possibly increase the overall rate of plant growth. This possibility has attracted considerable interest in recent years, as an increased rate of plant growth could absorb some of the excess CO2 and reduce the rate of global warming. Extensive experiments growing plants under elevated CO2 using Free-Air CO2 Enrichment (FACE) have shown that photosynthetic efficiency does indeed increase. Plant growth rates also increase, by an average of 17% for above-ground tissue and 30% for below-ground tissue. However, detrimental impacts of global warming, such as increased instances of heat and drought stress, mean that the overall effect is likely to be a reduction in plant productivity. Reduced plant productivity would be expected to accelerate the rate of global warming. Overall, these observations point to the importance of avoiding further increases in atmospheric CO2 rather than risking runaway climate change.
Wind
Wind has three very different effects on plants.
It affects the exchanges of mass (water evaporation, CO2) and of energy (heat) between the plant and the atmosphere by renewing the air at the contact with the leaves (convection).
It is sensed by the plant as a signal driving a wind-acclimation syndrome known as thigmomorphogenesis, leading to modified growth and development and eventually to wind hardening.
Its drag force can damage the plant (leaf abrasion, wind ruptures in branches and stems, windthrow and toppling in trees, and lodging in crops).
Exchange of mass and energy
Wind influences the way leaves regulate moisture, heat, and carbon dioxide. When no wind is present, a layer of still air builds up around each leaf. This is known as the boundary layer and in effect insulates the leaf from the environment, providing an atmosphere rich in moisture and less prone to convective heating or cooling. As wind speed increases, the leaf environment becomes more closely linked to the surrounding environment. It may become difficult for the plant to retain moisture as it is exposed to dry air. On the other hand, a moderately high wind allows the plant to cool its leaves more easily when exposed to full sunlight. Plants are not entirely passive in their interaction with wind. Plants can make their leaves less vulnerable to changes in wind speed, by coating their leaves in fine hairs (trichomes) to break up the airflow and increase the boundary layer. In fact, leaf and canopy dimensions are often finely controlled to manipulate the boundary layer depending on the prevailing environmental conditions.
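The scale of the boundary layer can be roughed out with a frequently quoted flat-leaf approximation from the plant biophysics literature (the coefficient varies with leaf shape and flow conditions, so treat this as an order-of-magnitude sketch rather than a claim from this article): the boundary layer thickness in millimetres is roughly

\delta \approx 4.0 \sqrt{d/u},

where d is the leaf dimension along the wind in metres and u the wind speed in metres per second. A 5 cm leaf in a 1 m/s breeze would carry a boundary layer of about 0.9 mm, thinning as the wind picks up.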
Acclimation
Plants can sense the wind through the deformation of their tissues. This signal inhibits the elongation and stimulates the radial expansion of their shoots, while increasing the development of their root system. This syndrome of responses, known as thigmomorphogenesis, results in shorter, stockier plants with strengthened stems, as well as improved anchorage. It was once believed that this occurs mostly in very windy areas, but it has been found to happen even in areas with moderate winds, so wind-induced signals are a major ecological factor.
Trees have a particularly well-developed capacity to reinforce their trunks when exposed to wind. From the practical side, this realisation prompted arboriculturalists in the UK in the 1960s to move away from the practice of staking young amenity trees to offer artificial support.
Wind damage
Wind can damage most of the organs of plants. Leaf abrasion (due to the rubbing of leaves and branches or to the effect of airborne particles such as sand) and leaf or branch breakage are rather common phenomena that plants have to accommodate. In the more extreme cases, plants can be mortally damaged or uprooted by wind. This has been a major selective pressure acting on terrestrial plants. Nowadays, it is one of the major threats to agriculture and forestry, even in temperate zones. It is worse for agriculture in hurricane-prone regions, such as the banana-growing Windward Islands in the Caribbean.
When this type of disturbance occurs in natural systems, the only solution is to ensure that there is an adequate stock of seeds or seedlings to quickly take the place of the mature plants that have been lost, although in many cases a successional stage will be needed before the ecosystem can be restored to its former state.
Animals
Humans
The environment can have major influences on human physiology. Environmental effects on human physiology are numerous; one of the most carefully studied effects is the alterations in thermoregulation in the body due to outside stresses. This is necessary because in order for enzymes to function, blood to flow, and for various body organs to operate, temperature must remain at consistent, balanced levels.
Thermoregulation
To achieve this, the body alters three main things to achieve a constant, normal body temperature:
Heat transfer to the epidermis
The rate of evaporation
The rate of heat production
The hypothalamus plays an important role in thermoregulation. It connects to thermal receptors in the dermis, and detects changes in surrounding blood to make decisions of whether to stimulate internal heat production or to stimulate evaporation.
There are two main types of stresses that can be experienced due to extreme environmental temperatures: heat stress and cold stress.
Heat stress is physiologically combated in four ways: radiation, conduction, convection, and evaporation. Cold stress is physiologically combated by shivering, accumulation of body fat, circulatory adaptations (that provide an efficient transfer of heat to the epidermis), and increased blood flow to the extremities.
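These avenues can be collected into the standard heat-balance (partitional calorimetry) equation of environmental physiology, given here in a common textbook form rather than as anything specific to this article:

S = M - W - (R + C + K + E),

where S is the rate of body heat storage, M metabolic heat production, W external mechanical work, and R, C, K and E the heat exchanged by radiation, convection, conduction and evaporation. Thermoregulation succeeds when S is held near zero.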
There is one part of the body well equipped to deal with cold stress. The respiratory system protects itself against damage by warming incoming air to 80–90 degrees Fahrenheit before it reaches the bronchi, which is why even extremely cold air rarely damages the respiratory tract.
In both types of temperature-related stress, it is important to remain well-hydrated. Hydration reduces cardiovascular strain, enhances the ability of energy processes to occur, and reduces feelings of exhaustion.
Altitude
Extreme temperatures are not the only obstacles that humans face. High altitudes also pose serious physiological challenges for the body. Some of these effects are reduced arterial PO2 (oxygen partial pressure), the rebalancing of the acid-base content in body fluids, increased hemoglobin, increased RBC synthesis, enhanced circulation, and increased levels of the glycolysis byproduct 2,3-diphosphoglycerate, which promotes off-loading of O2 by hemoglobin in the hypoxic tissues.
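The driver of these responses can be quantified with the standard estimate of inspired oxygen partial pressure (textbook values, not figures from this article):

P_{I\mathrm{O_2}} = F_{I\mathrm{O_2}} \times (P_B - 47\ \mathrm{mmHg}),

where F_{IO_2} \approx 0.21 is the oxygen fraction of air and 47 mmHg is the water vapour pressure at body temperature. At sea level (P_B \approx 760 mmHg) this gives about 150 mmHg of inspired oxygen; near 5,500 m, where barometric pressure is roughly halved, it falls to about 70 mmHg.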
Environmental factors can play a huge role in the human body's fight for homeostasis. However, humans have found ways to adapt, both physiologically and tangibly.
Scientists
George A. Bartholomew (1919–2006) was a founder of animal physiological ecology. He served on the faculty at UCLA from 1947 to 1989, and almost 1,200 individuals can trace their academic lineages to him. Knut Schmidt-Nielsen (1915–2007) was also an important contributor to this specific scientific field as well as comparative physiology.
Hermann Rahn (1912–1990) was an early leader in the field of environmental physiology. Starting out in the field of zoology with a Ph.D. from University of Rochester (1933), Rahn began teaching physiology at the University of Rochester in 1941. It is there that he partnered with Wallace O. Fenn to publish A Graphical Analysis of the Respiratory Gas Exchange in 1955. This paper included the landmark O2-CO2 diagram, which formed the basis for much of Rahn's future work. Rahn's research into applications of this diagram led to the development of aerospace medicine and advancements in hyperbaric breathing and high-altitude respiration. Rahn later joined the University at Buffalo in 1956 as the Lawrence D. Bell Professor and Chairman of the Department of Physiology. As Chairman, Rahn surrounded himself with outstanding faculty and made the University an international research center in environmental physiology.
See also
Comparative physiology
Evolutionary physiology
Ecology
Phylogenetic comparative methods
Plant physiology
Raymond B. Huey
Theodore Garland, Jr.
Tyrone Hayes
References
Further reading
Spicer, J. I., and K. J. Gaston. 1999. Physiological diversity and its ecological implications. Blackwell Science, Oxford, U.K. x + 241 pp.
Biomedical engineering
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). The field seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with the medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
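As a conceptual illustration of the SNP-identification idea, one might compare aligned sequences from two populations and flag positions where their consensus bases differ. The sketch below is a toy, not a real pipeline; the sequences are fabricated and real SNP callers involve alignment, quality filtering, and statistical models.

from collections import Counter

def consensus(seqs):
    """Most common base at each position of pre-aligned, equal-length reads."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

population_a = ["ACGTGCA", "ACGTGCA", "ACGAGCA"]
population_b = ["ACCTGCA", "ACCTGCA", "ACCTGCA"]

cons_a, cons_b = consensus(population_a), consensus(population_b)
snps = [i for i, (a, b) in enumerate(zip(cons_a, cons_b)) if a != b]
print(f"candidate SNP positions: {snps}")   # -> [2]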
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
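A minimal example of such a calculation is estimating axial stress and strain in a long bone treated as a linear-elastic column. The load, geometry, and Young's modulus below are illustrative assumptions, not measured anatomical values.

import math

FORCE = 700.0            # applied load, N (roughly one body weight)
DIAMETER = 0.025         # assumed bone shaft diameter, m
E_BONE = 17e9            # assumed Young's modulus of cortical bone, Pa

area = math.pi * (DIAMETER / 2) ** 2       # cross-sectional area, m^2
stress = FORCE / area                      # sigma = F / A, Pa
strain = stress / E_BONE                   # epsilon = sigma / E (Hooke's law)

print(f"stress: {stress/1e6:.2f} MPa, strain: {strain:.2e}")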
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
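Much of biomedical optics rests on how light attenuates in tissue, often summarized by the Beer-Lambert law. The sketch below uses an assumed, illustrative attenuation coefficient; real tissue attenuation depends strongly on wavelength and scattering.

import math

MU_A = 50.0   # assumed effective attenuation coefficient, m^-1

def transmitted_fraction(depth_m: float) -> float:
    """Fraction of incident intensity remaining after a given tissue depth."""
    return math.exp(-MU_A * depth_m)

for depth_mm in (1, 5, 10):
    frac = transmitted_fraction(depth_mm / 1000.0)
    print(f"{depth_mm:2d} mm: {frac:.1%} of light remains")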
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, the unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions, or
the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation); an illustrative code sketch of this classification scheme appears after the class descriptions:
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I and the special controls of Class II. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
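The sketch below encodes this three-tier scheme as a toy lookup. The device-to-class assignments are examples taken from the descriptions above; real classification is determined by FDA regulation, not by code.

DEVICE_CLASS = {
    "tongue depressor": "I",
    "elastic bandage": "I",
    "infusion pump": "II",
    "powered wheelchair": "II",
    "replacement heart valve": "III",
    "implantable pacemaker pulse generator": "III",
}

CONTROLS = {
    "I": "general controls",
    "II": "general + special controls (labeling, performance standards, surveillance)",
    "III": "general controls + premarket approval (PMA)",
}

def describe(device: str) -> str:
    """Report the (toy) class and associated controls for a device name."""
    cls = DEVICE_CLASS.get(device.lower())
    if cls is None:
        return f"{device}: not in this toy table"
    return f"{device}: Class {cls} -> {CONTROLS[cls]}"

print(describe("infusion pump"))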
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (for example, due to their size or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. One example is ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several passive electromagnetic sensors, enabling the display to be scaled to the patient's body contour and providing a real-time view of the feeding tube's tip location and direction, which helps medical staff ensure correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
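As one concrete example of the physics behind these modalities, ultrasound imaging infers depth from echo time of flight: a pulse returning after time t from tissue with sound speed c originates at depth c*t/2, since the pulse travels down and back. A minimal sketch, where the sound speed is an assumed soft-tissue average:

C_TISSUE = 1540.0  # assumed average speed of sound in soft tissue, m/s

def echo_depth_mm(round_trip_s: float) -> float:
    """Depth (mm) of a reflector given the round-trip echo time (s)."""
    return C_TISSUE * round_trip_s / 2.0 * 1000.0

print(f"{echo_depth_mm(65e-6):.1f} mm")  # a 65 microsecond echo -> ~50 mm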
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray imaging to monitor lower-extremity trauma. The sensor monitors the dielectric properties of the tissue beneath the skin (bone, muscle, fat, etc.), so measurements taken at different times during the healing process show a changing response as the trauma heals.
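The physical basis for such sensing is dielectric contrast: at a boundary between tissues of different relative permittivity, part of an incident wave is reflected. A rough normal-incidence sketch follows; the permittivity values are coarse assumptions, and real tissue permittivity varies with frequency.

import math

def reflection_coefficient(eps1: float, eps2: float) -> float:
    """Amplitude reflection coefficient going from medium 1 into medium 2."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

EPS_MUSCLE, EPS_FAT, EPS_BONE = 50.0, 5.5, 12.0  # assumed relative permittivities

print(f"muscle->fat : {reflection_coefficient(EPS_MUSCLE, EPS_FAT):+.2f}")
print(f"muscle->bone: {reflection_coefficient(EPS_MUSCLE, EPS_BONE):+.2f}")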
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically staffed by a manager, supervisors, engineers, and technicians, with a common guideline of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have an undergraduate or graduate degree in biomedical, mechanical, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. In the UK, qualification as a rehabilitation engineer is possible via a university BSc Honours degree course, such as that offered by the Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death."
Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide. For example, in medical device regulations, a product must be 1) safe and 2) effective, and 3) these properties must be ensured for all manufactured units of the device.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical harm (death, injury, ...) in its intended use. Protective measures have to be introduced on the device to reduce residual risks to a level that is acceptable compared with the benefit derived from its use.
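This risk-reduction logic is often formalized as a severity-probability scoring exercise in the spirit of ISO 14971-style risk analysis. The following sketch is purely illustrative; its scores and acceptability threshold are assumptions, not regulatory values.

from dataclasses import dataclass

ACCEPTABLE_SCORE = 6  # assumed acceptability threshold (severity x probability)

@dataclass
class Hazard:
    name: str
    severity: int      # 1 (negligible) .. 5 (catastrophic)
    probability: int   # 1 (improbable) .. 5 (frequent)

    def risk_score(self) -> int:
        return self.severity * self.probability

    def acceptable(self) -> bool:
        return self.risk_score() <= ACCEPTABLE_SCORE

for h in [Hazard("electrical shock", 5, 1), Hazard("skin irritation", 2, 4)]:
    verdict = "acceptable" if h.acceptable() else "needs further mitigation"
    print(f"{h.name}: score {h.risk_score()} -> {verdict}")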
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance with performance standards, or demonstration of substantial equivalence with an already marketed device.
These properties have to be ensured for all manufactured items of the medical device. This requires that a quality system be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or pre-market "approval" (typically for drugs and Class III devices).
In the European context, safety, effectiveness, and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), production quality assurance (Annex V), product quality assurance (Annex VI), and full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable and conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA's technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout the whole product life cycle. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first in either the U.S. or Europe, depending on which offers the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. The original EU legislation, "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC), was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
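In practice, a RoHS-style substance check reduces to comparing measured concentrations in each homogeneous material against the directive's maximum values (0.1 % by weight for most restricted substances, 0.01 % for cadmium). The sketch below uses fabricated sample measurements for illustration.

LIMITS_WT_PERCENT = {
    "lead": 0.1,
    "mercury": 0.1,
    "cadmium": 0.01,
    "hexavalent chromium": 0.1,
    "PBB": 0.1,
    "PBDE": 0.1,
}

def check_material(measured: dict[str, float]) -> list[str]:
    """Return the substances whose measured wt% exceeds the RoHS limit."""
    return [s for s, value in measured.items()
            if value > LIMITS_WT_PERCENT.get(s, float("inf"))]

sample = {"lead": 0.03, "cadmium": 0.02, "mercury": 0.0}  # fabricated inputs
print(check_material(sample) or "compliant")  # -> ['cadmium']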
IEC 60601
The international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point-of-care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard from June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more stringent approach of requiring all applicable devices placed on the market to comply with the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g., a hospital). The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements, including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing), and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.), Master's (M.S., M.Sc., M.S.E., or M.Eng.) or doctoral (Ph.D. or MD-PhD) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently emerged as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines, and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering, which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc., an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but in the US such a license is not required for most industry employment as an engineer (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require only those practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions—does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently not an option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions.
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at the University of Toronto; previously a professor of physiology at Western Reserve University (now Case Western Reserve University).
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell, coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
See also
Biomedical Engineering and Instrumentation Program (BEIP)
References
Further reading
External links | 0.784833 | 0.998214 | 0.783432 |